Russell Parrott
Analyst
Researcher
Author
AI accountability fails when evidence is required.
I analyse why AI governance and accountability claims fail when outcomes are challenged and evidence is required. Most organisations claim to have AI governance. They publish policies, assign responsibility, conduct audits and issue compliance statements. Yet when AI-driven systems cause harm or produce disputed outcomes, those assurances often fail to show what actually happened. The problem is rarely bad faith. It is structural.
Most governance material describes intent rather than capability. Policies state what should occur, but they do not establish what a system can technically record, retain, reconstruct or demonstrate once an outcome is questioned.
When an AI-related decision is challenged, accountability depends on evidence: whether the system can show how an outcome was produced, what constraints applied at the time and where responsibility was technically anchored.
Without that evidence, governance claims cannot be relied upon regardless of how complete the documentation appears.
I do not advise organisations, certify systems, interpret law, or provide compliance opinions. I publish independent analysis that explains where AI accountability fails in practice and why those failures persist despite audits, frameworks and regulatory claims.
What I do
Most AI governance work assumes that accountability can be exercised when needed. My work asks a prior question: can accountability still exist once evidence is actually demanded?
I approach AI accountability as an evidentiary problem, not a policy or ethics one.
My core work is the development of public accountability standards. These set out the minimum conditions that must already be in place for AI accountability to be possible once an outcome is challenged. I work solely from publicly observable material, without relying on access, cooperation or trust. This is a self-imposed constraint: accountability is only real if it can be exercised by someone who does not control the system.
Courts, regulators, journalists, insurers and affected individuals all have to rely on what is already observable in public, because disputes arise precisely when trust has broken down and cooperation can no longer be assumed. In those moments, accountability does not depend on what a system owner is willing to disclose after the fact; it rests only on evidence that exists independently of access, permission or goodwill.
The standards do not assess compliance, recommend governance practices or propose solutions. They define boundaries: the point at which responsibility, evidence or procedure can no longer operate, regardless of intent or policy.
Alongside these standards, I publish long-form explainers of around 1,500–2,000 words. Each unpacks specific accountability failures, assumptions or structural limits in plain terms. They are neither standards nor case studies; rather, they explain why and where accountability breaks when scrutiny begins.
In parallel, I write short posts on LinkedIn that surface a single idea, boundary or failure point. These posts exist to signal, illustrate or provoke attention to an accountability issue, often pointing back to the longer explainers or to the underlying standards.
In short:
- Standards define what must exist for accountability to be possible at all.
- Explainers clarify how and why accountability fails in practice.
- Short posts highlight specific points of failure or assumption.
AI Accountability
Public AI Accountability: A Reference Standard for Third-Party Reliance
Defines the minimum publicly visible conditions that must already exist for AI accountability to be possible in practice.
https://www.amazon.com/dp/B0GH7QNL46
Structural Governance Failure: A Recognition Guide
Identifies the recurring structural conditions that prevent accountability, focusing on recognising failures without internal access and without proposing remedies.
https://www.amazon.com/dp/B0GCT2WR9W
The Structural Governance Standard for AI: Exposing what AI Systems really do
The first operational framework to expose whether AI systems uphold real safeguards using 15 enforceable checks.
https://www.amazon.com/dp/B0FNJQQQ9K
The AI World Order: Intelligence, Power, and the New Global Conflict
Examines AI as a weapon and economic force, arguing that control of algorithms will determine the next global superpower.
https://www.amazon.com/dp/B0F1ZVYBYL
Below 40: The Hidden Fragility of AI Systems
Reveals how most AI systems cannot prove how they work and introduces the Evidential Resilience Ratio to measure traceability and resilience.
https://www.amazon.com/dp/B0G5LK173F