Framework · AI Governance in Regulated Environments

The Judgment Layer™

Context, data, and function produce outputs that look correct. Judgment is what makes those outputs defensible. It is the fourth layer of AI in regulated environments — and the only one FDA will enforce against when it is missing.

FDA Warning Letter 320-26-58 established the enforcement precedent. Read the full framework on Medium: The Judgment Layer™ — AI Governance Framework by ParanoiaIQ

Three layers produce outputs that look correct.

Everyone is talking about the three layers of AI: context, data, and function. FDA just told you they are not enough. Purolea Cosmetics Lab had all three. The warning letter came anyway.

Layer 01 · Context
What the AI knows about your environment, product type, and regulatory framework.

Layer 02 · Data
What it was trained on and retrieves: training data, public sources, internal documents.

Layer 03 · Function
What it executes: generate, classify, summarize, draft, calculate.
Together, they produce outputs that look correct. That is precisely the problem. The output looks finished. The reviewer stops thinking.
Layer 04 — What FDA Requires

Judgment is accountability applied to a decision under regulatory obligation.

  • A named individual who can be interviewed by an investigator
  • Accountability that survives the output
  • The ability to recognize what the system does not know it does not know

Most believe they are in Zone 1. Most are in Zone 2.

The difference is not intention. It is whether a named person can articulate what they verified and why. If they cannot articulate it, judgment did not happen.

Zone 1 · Judgment Present (Defensible)
AI assists. A qualified person reviews the output, evaluates it against regulatory requirements, identifies what the AI could not know, and approves with documented accountability.

Zone 2 · Judgment Delegated (Exposure)
AI generates. A human reviews quickly, without independent verification. Approval is a formality. Accountability cannot be demonstrated under inspection.

Zone 3 · Judgment Absent (Enforcement)
AI generates. No qualified human exercises judgment. Gaps in the model's knowledge become gaps in the compliance program. This is the Purolea fact pattern.
If a named person cannot articulate what they verified and why, you are in Zone 2 regardless of what you believe about your review process.

This is not an individual failure. It is structural.

The organization never defined what judgment looks like, who exercises it, or how it gets documented. In that vacuum, behavior fills the gap. Sometimes well. Usually not. A structural judgment framework requires three things.

1. Defined Role
Not any employee. An authorized person: role documented, qualifications established, responsibility assigned in writing by document type. For pharma: 21 CFR 211.22. For devices: 21 CFR Part 820 (QMSR).

2. Defined Standard
The reviewer asks: what regulatory requirement could this AI have missed? What does my expertise add that the model cannot supply? That evaluation must be defined, repeatable, and documented.

3. Defined Evidence
"We reviewed it" is not a record. The review must produce evidence: who reviewed, when, what they verified, what they approved. If you cannot pull that record in five minutes, it does not exist.
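To make the evidence requirement concrete, here is a minimal sketch of what such a review record could look like as a data structure. All names here (`AIReviewRecord`, `is_defensible`, the field names) are illustrative assumptions for this article, not terms from any FDA regulation; the point is only that each element of judgment (who, when, what was verified, what was decided) must be a populated, retrievable field, not an unwritten practice.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIReviewRecord:
    """One retrievable evidence record for a human review of an AI output.

    Hypothetical schema for illustration only.
    """
    document_id: str       # the AI-generated document under review
    reviewer_name: str     # the named, authorized individual (Defined Role)
    reviewer_role: str     # role assigned in writing, e.g. "QU Reviewer"
    reviewed_at: datetime  # when the review happened
    items_verified: list[str] = field(default_factory=list)  # what was checked (Defined Standard)
    decision: str = ""     # "approved" or "rejected" (Defined Evidence)

def is_defensible(record: AIReviewRecord) -> bool:
    """A record demonstrates judgment only if every element is present:
    who reviewed, in what role, what they verified, and what they decided."""
    return bool(
        record.reviewer_name
        and record.reviewer_role
        and record.items_verified
        and record.decision in ("approved", "rejected")
    )
```

Under this sketch, a record with an empty `items_verified` list fails the check, which mirrors the article's point: a signature without an articulable verification is Zone 2, not Zone 1.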

Three words. Each one carries weight.

"Any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative of your firm's QU in accordance with section 501(a)(2)(B) of the FD&C Act."

FDA Warning Letter 320-26-58 · Purolea Cosmetics Lab · April 2, 2026
Authorized.
Not any employee. A person whose role, qualifications, and responsibility are defined in writing.
Human.
Not a secondary AI validation layer. A person who can be named, interviewed by an investigator, and held accountable for the content they approved.
Representative of the QU.
The Quality Unit, operating independently. Not the group that generated the document. Not the project owner who needs it released.

AI can generate.
Only judgment makes it defensible.

You can validate context, data, and function. You cannot validate judgment into existence. The fourth layer is not a technology problem. It is a governance problem. And it is the layer FDA just enforced against.