Most AI governance proves nothing. I show you what can actually be shown.

I write plain-English explainers that focus on one question: what can be proven after an automated decision is challenged.

If you rely on an AI system, the issue is not whether you have policies or processes. The issue is whether you can show what actually happened in a specific decision before anyone asks.

When an outcome is disputed, the expectation is simple. You will be asked to show what the system did, why it did it, who allowed it and on what basis it was allowed at the time.

In many cases, that cannot be shown. This is not usually a matter of bad intent; it is because the system was not set up to produce that proof.

These explainers set out, in plain terms, what must be shown and why. They do not provide legal advice or recommendations.

Commission an explainer
Example explainer

Russell Parrott

The governance question

Take a single decision made by your system, assume it is challenged tomorrow, and answer the following question. Note: each part of the question must be supported by evidence, not just described.
Without accessing the system's internal constructs, can you show the decision that was actually made and what happened as a result? Who, as a named person or organisation, authorised it? Where is that authority recorded, what was it based on at the time the decision occurred, and what legal, regulatory or contractual basis applied and allowed it to go ahead?
This is what regulators, courts, insurers and affected individuals require.
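
As an illustration only, and not a recommendation, the sketch below shows the kind of per-decision evidence record those elements imply. Every field name and value is hypothetical; a real record would be shaped by your own systems and the laws that apply to them.

```python
# Illustrative sketch only: a hypothetical per-decision evidence record that
# mirrors the elements of the governance question above. Nothing here is a
# standard or a recommendation; field names and values are invented.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class DecisionRecord:
    decision: str          # the decision that was actually made
    outcome: str           # what happened as a result
    authorised_by: str     # named person or organisation who allowed it
    authority_record: str  # where that authority is recorded
    basis_at_time: str     # what the decision was based on when it occurred
    legal_basis: str       # legal, regulatory or contractual permission
    recorded_at: str       # timestamp captured at the time of the decision


# A made-up example, serialised so it could be produced later if challenged.
record = DecisionRecord(
    decision="Application 4821 declined",
    outcome="Applicant notified; account not opened",
    authorised_by="Example Credit Ltd, Head of Lending",
    authority_record="Delegation register entry DR-2024-017",
    basis_at_time="Credit policy v3.2 in force on the decision date",
    legal_basis="Contractual necessity under the account terms",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(record), indent=2))
```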

Under laws such as the EU AI Act, the Product Liability Directive, and applicable UK and US disclosure rules, if you cannot show the evidence, you cannot prove how the decision was made, who was responsible for it or why it was allowed to happen.

Put simply, you cannot explain or justify the decision in a way that will hold up under challenge.

At that point, you cannot rely on the decision. It may be rejected, reversed or treated as unsupported. Responsibility will be reassessed, and the question will move from what your system did to why you allowed it to operate without a provable basis.

Different laws ask for different kinds of evidence.

Having what one law requires does not mean you have what another will expect.

You may have logs and system records because one law requires traceability. But if a decision is challenged, another may expect you to show what actually happened, how that outcome came about and how the records connect.

Or

You may rely on defined roles or organisational responsibility because another law assigns accountability. But elsewhere the question becomes who allowed this specific outcome and what supports that link.

The difficulty is often not meeting one law; it is knowing which laws apply and what evidence each asks for.

For example, the table below shows what each law expects when a decision is questioned. It covers the EU AI Act, the Product Liability Directive, the UK GDPR, the Civil Procedure Rules, the FTC Act, Title VII and US product liability law.

Read it like this: depending on which law applies, you may be expected to already have records in place or you may be required to produce them after the event. In some cases responsibility sits with an organisation or role rather than a named person. In others, the focus is on what can be shown about the decision at the time it was made.

The point is simple: when a decision is challenged the outcome depends on what you can actually show.

| Legal Instrument | Decision & Outcome Traceability | Identifiable Authorisation | Recorded Authority | Basis at Time of Decision | Legal / Contractual Permission |
| --- | --- | --- | --- | --- | --- |
| EU AI Act | Requires logging and traceability of system operation and outputs | Defines roles (provider, deployer) but not decision-level named authorisation | Requires technical documentation and logs | Requires risk management and intended purpose defined ex ante | Permits use if conformity and obligations satisfied |
| Product Liability Directive (EU) | Focus on causation between product and damage; evidence can be compelled | Liability attaches to economic operators, not named individuals | Courts can order disclosure of relevant evidence | Assesses defect based on expectations at time of placing on market | Liability framework determines permissibility post-harm |
| UK GDPR | Requires ability to explain automated decisions and effects | Controller responsible; no named decision author required | Requires records demonstrating compliance | Requires lawful basis at time of processing | Decision must align with lawful basis (consent, contract, etc.) |
| Civil Procedure Rules (UK) | Requires disclosure of documents showing what occurred | Not predefined; emerges through evidence | Requires disclosure of all relevant records | Evaluated through contemporaneous documents | Determined through legal argument in proceedings |
| FTC Act (US) | Requires substantiation of claims about system behaviour | Organisation responsible for representations | No fixed structure; evidence must support claims | Judged against truth at time claims were made | Prohibits unfair or deceptive practices |
| Title VII (US) | Requires justification of employment decisions and outcomes | Employer accountable; no named decision author required | Evidence produced via litigation/discovery | Must rely on non-discriminatory basis at time | Legality depends on absence of discrimination |
| US Product Liability (State Law) | Requires proof of defect and causation | Liability attaches to manufacturer/seller | Evidence obtained via discovery | Judged against expectations at time of distribution | Determines whether harm-producing use was acceptable |

What I do

I write plain-English explainers about AI accountability.

Each piece defines a concept, explains how it works and sets out what can and cannot be proven when a decision is challenged. The question is always the same: if a decision is disputed tomorrow, what evidence can you actually show?

I explain what you need to know if you are asked to demonstrate to a regulator, court or affected individual how a specific decision was made, authorised, and justified.

My readers include lawyers, regulators, consultants, insurers, government bodies and the UN.

These explainers do not advise, instruct, or provide solutions. They help you see whether your systems can prove what happened and, if not, why that is a design issue, not a policy gap.

Get in touch

You’re welcome to contact me about my writing (AI Accountability Evidence Reports) or for press and media enquiries. By submitting this form you consent to the processing of your information for the purpose of responding to your enquiry.