For Every Scale

Who Is Accountable When AI Decides?

As AI performs expert work, responsibility remains human. Institutions are already feeling the strain.

Josh Rowe
Mar 03, 2026
  • AI is now performing parts of expert-level work in regulated industries.

  • Courts and regulators are already confronting accountability gaps.

  • Institutions redesigning tasks without redesigning liability frameworks are building silent risk.

Jessica Rusu, Chief Data, Information and Intelligence Officer, Financial Conduct Authority

AI is no longer confined to drafting emails and summarising documents.

It is:

  • analysing financial statements

  • flagging compliance risks

  • recommending credit decisions

  • interpreting contracts

  • triaging logistics flows

In many cases, it is performing components of what was historically considered expert work.

But something fundamental has not changed.

Responsibility.

The model can generate the recommendation.

The human signs the decision.

That gap is where institutional stress begins.

AI Is Crossing Into Professional Territory

For decades, professional authority rested on three pillars:

  • training

  • accreditation

  • liability

Accountants sign audits.
Lawyers sign opinions.
Doctors sign treatment plans.
Executives sign off on risk exposure.

The signature carries responsibility.

AI can now perform parts of the analysis behind those signatures at speed and scale.

But it cannot carry the liability.

That asymmetry is no longer theoretical.

The First Fractures Are Already Visible

Courts are beginning to confront this tension directly.

In the United States, lawyers have been fined and sanctioned after submitting legal filings containing AI-generated citations that did not exist. Judges made clear that the technology may assist the work, but responsibility for accuracy remains entirely human.

In Australia, a lawyer was formally penalised after filing court documents that included fabricated case references produced by AI. Again, the ruling was explicit: professional accountability does not transfer to software.

The pattern is consistent:

AI contributes.
Humans remain liable.

That is not a minor compliance issue.

It is a structural one.

Regulators Are Signalling the Same

In its formal AI update, the UK Financial Conduct Authority stated:

“We are focused on how firms can safely and responsibly adopt the technology as well as understanding what impact AI innovations are having on consumers and markets.”

The language is deliberate.

Adoption is encouraged.
Responsibility remains central.

AI does not dilute accountability.
It increases scrutiny.

Strategic Truth

AI can scale judgment-like outputs.
It cannot assume accountability.

The Healthcare Parallel

The same tension is emerging in medicine.

AI systems increasingly support diagnosis, triage and treatment planning. But if an AI-assisted recommendation contributes to patient harm, traditional negligence frameworks do not clearly distribute responsibility between clinician, institution and developer.

Regulators are reviewing how liability should operate in AI-assisted environments.

Again, the issue is not capability.

It is accountability architecture.

The Hidden Institutional Risk

As AI embeds deeper into expert workflows, a subtle shift occurs.

Humans move from primary actors to supervisors.
Review cycles compress.
Volume increases.
Cognitive load changes.

Over time, the human signature risks becoming a validation step on machine-generated analysis.

When that happens, two risks rise simultaneously:

Operational opacity.
Liability concentration.

Boards may believe AI is reducing error.

Regulators may see increased systemic fragility.

Those perspectives will eventually collide.

The Institutional Stress Test

Every significant AI deployment in a regulated environment now forces a deeper question:

When something goes wrong, who is responsible?

Not the vendor.
Not the model provider.
Not the algorithm.

Which named individual stands behind the decision?

And can they demonstrate meaningful oversight?

Courts are already asking that question in narrow cases.

Regulators will expand it.

Boards should get there first.

Most Organisations Are Redesigning The Wrong Layer

Enterprises are currently focused on:

  • productivity gains

  • automation metrics

  • cost reduction

  • task decomposition

Very few are stress-testing:

  • signature risk

  • oversight integrity

  • liability concentration

  • insurance exposure

That gap is widening.

And it will not remain invisible.

What Leadership Teams Must Redesign Now

For boards and CEOs, this is not an abstract ethics debate.

It is a structural governance decision.

Below, I outline:

  • The three accountability failure patterns already emerging inside large organisations

  • The specific oversight metrics boards should demand before approving AI-linked workflow changes

  • The redesign principles institutions must implement before regulators force them to

If you are deploying AI into revenue-generating or regulated workflows, these questions are not optional.

They are imminent.

The Three Accountability Failure Patterns

Across regulated industries, three structural weaknesses are emerging.
