parrott.russell@gmail.com +44(0) 785 7148 389
Russell Parrott
Analyst
Researcher
Author

AI accountability fails when evidence is required.

I analyse why AI governance and accountability claims fail when outcomes are challenged and evidence is required.

Most organisations claim to have AI governance. They publish policies, assign responsibility, conduct audits and issue compliance statements. Yet when AI-driven systems cause harm or produce disputed outcomes, those assurances often fail to show what actually happened.

The problem is rarely bad faith. It is structural.

Most governance material describes intent rather than capability. Policies state what should occur, but they do not establish what a system can technically record, retain, reconstruct or demonstrate once an outcome is questioned.

When an AI-related decision is challenged, accountability depends on evidence: whether the system can show how an outcome was produced, what constraints applied at the time and where responsibility was technically anchored.

Without that evidence, governance claims cannot be relied upon regardless of how complete the documentation appears.

I do not advise organisations, certify systems, interpret law, or provide compliance opinions. I publish independent analysis that explains where AI accountability fails in practice and why those failures persist despite audits, frameworks and regulatory claims.

What I do

Most AI governance work assumes that accountability can be exercised when needed. My work asks a prior question: can accountability still exist once evidence is actually demanded?

I approach AI accountability as an evidentiary problem, not a policy or ethics one.

My core work is the development of public accountability standards. These set out the minimum conditions that must already be in place for AI accountability to be possible once an outcome is challenged. I work solely from publicly observable material, without relying on access, cooperation or trust: a self-imposed constraint that exists because accountability is only real if it can be exercised by someone who does not control the system.

Courts, regulators, journalists, insurers and affected individuals all have to rely on what is already observable in public, because disputes arise precisely when trust has broken down and cooperation can no longer be assumed. In those moments, accountability depends not on what a system owner is willing to disclose after the fact, but only on evidence that exists independently of access, permission or goodwill.

The standards do not assess compliance, recommend governance practices or propose solutions. They define boundaries: the point at which responsibility, evidence or procedure can no longer operate, regardless of intent or policy.

Alongside these standards, I publish long-form explainers of around 1,500–2,000 words. Each unpacks specific accountability failures, assumptions or structural limits in plain terms. They are neither standards nor case studies; rather, they explain why and where accountability breaks when scrutiny begins.

In parallel, I write short posts on LinkedIn that surface a single idea, boundary or failure point. These posts exist to signal, illustrate or provoke attention to an accountability issue, often pointing back to the longer explainers or to the underlying standards.

In short:

  • My Standards define what must exist for accountability to be possible at all.
  • Explainers clarify how and why accountability fails in practice.
  • Short posts highlight specific points of failure or assumption.
All three address the same problem from different distances; none replaces the others.

My work is not about how AI governance should look. It is about what can still be proven once something has gone wrong, and what cannot.


Get in touch

You’re welcome to contact me about my writing or AI Accountability Evidence Reports, or for press and media enquiries. By submitting this form you consent to the processing of your information for the purpose of responding to your enquiry.