Auditing in the AI era
How assurance shifts when systems include LLMs, agents, and probabilistic outputs.
What I cover here
- How assurance work changes once LLMs, agents, and probabilistic outputs enter the system (a short sketch follows below).
- What “good” looks like (evidence, ownership, cadence).
- What usually breaks (manual steps, missing provenance, unclear exceptions).
No client specifics. No tracking. No cookies by default.
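One example of how the checking itself shifts: with probabilistic outputs there is rarely a single expected value to compare against, so checks move from exact matches to properties asserted over many samples. Below is a minimal sketch of that idea; the specific property checks, the sample data, and the 95% acceptance threshold are assumptions for illustration, not a recommended standard.

```python
import re


def check_response(text: str) -> list[str]:
    """Return assurance findings for one model output.

    Exact-match testing breaks down for probabilistic outputs, so each
    check asserts a property of the output rather than a fixed string.
    These particular properties are placeholders.
    """
    findings = []
    if not text.strip():
        findings.append("empty response")
    if len(text) > 2000:
        findings.append("response exceeds agreed length limit")
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):  # SSN-like pattern
        findings.append("possible sensitive identifier in output")
    return findings


def pass_rate(samples: list[str]) -> float:
    """Share of sampled outputs with no findings.

    Because the same prompt can yield different outputs, assurance is
    expressed as a rate over many samples, not a single pass/fail.
    """
    clean = sum(1 for s in samples if not check_response(s))
    return clean / len(samples) if samples else 0.0


if __name__ == "__main__":
    # Stand-in outputs; in practice these would come from the system under audit.
    samples = [
        "Your request was approved on 2024-03-01.",
        "",  # an empty generation should be caught
        "Approved. Reference: 123-45-6789",  # identifier-like pattern leaks through
    ]
    rate = pass_rate(samples)
    print(f"pass rate: {rate:.2f}")
    # The threshold is an assumption for illustration, not a standard.
    print("PASS" if rate >= 0.95 else "REVIEW")
```

The point is the shape, not the specifics: property checks plus a sampled pass rate, with anything below the agreed threshold routed to human review.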
Artifacts (coming in v1)
- Short write-up + core checks
- Evidence expectations (inputs/outputs)
- “Failure modes” checklist
If you want code-first examples today, start with Python Encounters.
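As a rough preview of the "evidence expectations" artifact (and the "missing provenance" failure mode), here is a minimal sketch of a log check. The required fields and record shape are assumptions about what an auditor might ask for, not a finished checklist.

```python
# Assumed evidence fields: enough to tie an output back to its input,
# the model version that produced it, and who owns the exception path.
REQUIRED_FIELDS = ("request_id", "input", "output", "model_version", "timestamp", "owner")


def missing_evidence(record: dict) -> list[str]:
    """Return the evidence fields a single interaction record lacks."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]


def audit_log(records: list[dict]) -> dict[str, list[str]]:
    """Map each failing record's id to its missing fields."""
    findings = {}
    for i, rec in enumerate(records):
        gaps = missing_evidence(rec)
        if gaps:
            findings[rec.get("request_id", f"record-{i}")] = gaps
    return findings


if __name__ == "__main__":
    # Stand-in log entries; real records would come from the system under audit.
    log = [
        {"request_id": "r-1", "input": "q1", "output": "a1",
         "model_version": "m-2024-05", "timestamp": "2024-05-01T10:00:00Z",
         "owner": "ops"},
        {"request_id": "r-2", "input": "q2", "output": "a2"},  # provenance missing
    ]
    for rid, gaps in audit_log(log).items():
        print(f"{rid}: missing {', '.join(gaps)}")
```

Whatever the exact field list ends up being, the check is the same: every output an auditor samples should trace back to its input, model version, and a named owner.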
Collaborate
If you want to help make these notes more testable and reusable, share corrections or counterexamples, or suggest which artifact to build next.