AI Didn’t Get It Wrong. You Did.
AI doesn’t replace expertise. It amplifies it, and exposes when organisations lack it.
Generative AI amplifies expertise rather than replacing it.
When users lack domain knowledge, plausible AI outputs become difficult to evaluate.
Organisations deploying AI must ensure the human in the loop can recognise errors.
Blaming AI Is Like Blaming Excel
“AI is great, except when it gets everything wrong all the time”
That is the headline of a recent column.
The argument rests on a single experience.
The columnist asked an AI system to identify a band from a partial description.
The system failed.
From that anecdote, the column draws its headline conclusion.
This is an understandable reaction.
It is also a misunderstanding.
Not because AI never gets things wrong.
It clearly does.
But because the example reveals something more important than the technology’s limits.
It reveals how people misuse the tool.
The Excel Problem
Imagine opening Excel for the first time.
You type a formula incorrectly.
The spreadsheet produces the wrong answer.
You then conclude spreadsheets are unreliable.
That sounds absurd.
But it is essentially the same reasoning that often appears in commentary about generative AI.
These systems are not encyclopaedias.
They are probabilistic reasoning engines.
And as with any tool, the quality of the output depends heavily on the skill of the person using it.
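To make that mechanism concrete, here is a toy sketch in Python.
The prompt and the probabilities are invented for illustration; no real model is being called.
But the sampling step is the same one generative systems rely on: continuations are weighted by plausibility, not truth.

```python
import random

# A hand-built toy distribution over next words. The probabilities are
# invented for illustration; real models learn them from training text.
next_word_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,     # the most familiar city, so the most "plausible"
        "Canberra": 0.35,   # the correct answer
        "Melbourne": 0.10,
    }
}

def sample_continuation(prompt: str) -> str:
    """Pick a continuation weighted by plausibility, as a language model does."""
    dist = next_word_probs[prompt]
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_continuation(prompt))
# More often than not, this prints "Sydney": fluent, confident, and wrong.
# Nothing in the sampling step checks the answer against reality.
```

Run it a few times.
The most common output is fluent, confident, and wrong, and nothing in the code can tell the difference.
Only a user who already knows the answer can.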
AI Is Not Google
The column describes using AI the way many casual users do:
as a replacement for search.
That works sometimes.
But it is not where the technology creates its greatest value.
Generative systems are far more powerful when used to:
structure analysis
draft documents
generate code
summarise complex material
iterate ideas rapidly
Used this way, they compress hours of work into minutes.
That is why people who use AI every day often report dramatically different experiences from those who try it occasionally.
The tool has not changed.
The workflow has.
The Real Constraint
In the column, the writer describes asking the system why it failed to identify the band.
The AI suggests the problem might lie in the description.
The writer responds that a human would have asked that question earlier.
That observation is actually the key insight.
Human expertise matters.
Because generative AI does not know when it is wrong.
It produces outputs that are statistically plausible, whether or not they are true.
Something important follows from that.
If the human operator cannot evaluate the answer, the system becomes unreliable.
Not because the model failed.
Because the supervision failed.
The Leadership Blind Spot
This is where the issue becomes strategic.
Many organisations talk about “human-in-the-loop” AI systems.
But the phrase is often misunderstood.
The human role is not symbolic oversight.
It is expert judgment.
If the person reviewing the output lacks domain knowledge, the loop is effectively broken.
The system is operating without meaningful supervision.
That is the real operational risk.
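What an unbroken loop looks like can be sketched in a few lines.
The generate and review functions below are hypothetical stand-ins for any model call and any human reviewer; they are not a real product’s API.
The point is that the gate only works when the reviewer can raise a concrete, domain-grounded objection.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    approved: bool
    reason: str  # an expert objection, not a vague "looks wrong"

def run_with_oversight(prompt: str,
                       generate: Callable[[str], str],
                       review: Callable[[str], Review],
                       max_rounds: int = 3) -> str:
    """Generate a draft, then gate it on expert review, feeding objections back."""
    for _ in range(max_rounds):
        draft = generate(prompt)
        verdict = review(draft)
        if verdict.approved:
            return draft
        # The loop only adds value if this reason carries domain knowledge.
        prompt = f"{prompt}\n\nPrevious draft was rejected because: {verdict.reason}"
    raise RuntimeError("No draft passed expert review; escalate to a human author.")
```

A reviewer who approves everything turns this into straight-through automation.
The gate is only as strong as the domain knowledge behind the objection.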
AI Amplifies Judgment
There is a historical precedent.
When calculators appeared in classrooms, critics argued they would destroy mathematical understanding.
The opposite happened.
Students who understood the concepts moved faster.
Students who did not still struggled.
Calculators did not eliminate expertise.
They exposed it.
Generative AI works the same way.
It amplifies judgment.
And it amplifies mistakes.
At exactly the same speed.
The Question Leaders Should Ask
So when someone concludes that AI “gets everything wrong all the time,” the response should not be defensive.
It should be analytical.
What exactly failed?
The model?
Or the workflow surrounding it?
Because generative AI does not democratise expertise.
It rewards it.
For CEOs deploying these systems across their organisations, that distinction matters.
The most important question is not whether the technology works.
It is whether the people supervising it can recognise when it doesn’t.
If they cannot, the problem is not the AI system.
It is the operating model.
And AI will scale that weakness faster than any previous technology.