Lead the Wizard: AI for CEOs
A follow-up to Prompting for Leaders. CEOs must guide AI with structure, not rely on wizardry.
Ethan Mollick says we should work with AI like wizards.
CEOs need to lead AI, not admire it.
Control, structure, and judgment beat magic every time.
In On Working With Wizards, Ethan Mollick argues that modern AI behaves like a “wizard.” You give it a prompt, and it conjures something extraordinary that you do not fully understand. It is a clever metaphor. But if you are a CEO, your role is not to marvel at magic. It is to direct it.
AI can be mysterious, but leadership means turning the spell into a system.
What Mollick Gets Right
Mollick’s warning is sound. AI systems often produce results that seem brilliant yet lack transparency. You might get a polished report full of confident statements that turn out to be wrong.
He also makes a good point about trust. Because these systems hide their reasoning, users must rely on what he calls “provisional trust”: assuming results are acceptable unless proven otherwise.
And he is right that we need new skills: the ability to curate, verify, and decide when to use AI versus human expertise.
That is the academic view. The business reality is different.
What CEOs Need Instead
CEOs cannot be passive users of magic.
You, not the algorithm, are accountable for every decision. The question is not “how do I work with a wizard?” It is “how do I make the wizard work for me?”
Structure beats spontaneity.
Mollick’s curiosity-driven approach suits classrooms. Executives need repeatable results. That is why I use the CLEAR framework (Context, Lens, Expectation, Action, Refine) to guide AI conversations. It keeps outputs deliberate and auditable.
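To make the framework concrete, here is a sketch of what a CLEAR-structured prompt can look like. The scenario and details are hypothetical, for illustration only:

```
Context:     We are a mid-market logistics firm preparing a board update on quarterly performance.
Lens:        Respond as a CFO briefing the board, not as a marketing writer.
Expectation: A one-page summary with three risks and three opportunities. Use only the figures provided; do not invent numbers.
Action:      Draft the summary from the data pasted below.
Refine:      Flag every assumption you make so I can verify it before the next revision.
```

Each line forces a decision the wizard would otherwise make for you, and the explicit expectations and flagged assumptions are what make the output auditable afterward.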
Human judgment is non-negotiable.
In my previous article, I referenced Deloitte Australia’s AI mistake, where a $440,000 government report contained fabricated citations and required a refund. The lesson is simple: you can automate drafting, but you cannot automate accountability.
Trust is not enough.
Mollick’s “provisional trust” makes sense in academia. In leadership, you need informed control: knowing limits, checking facts, and owning outcomes.
Voices from the Field: What the Wizards Miss
The best insights on Mollick’s article came from its readers. The comments section of On Working With Wizards offered a practical lens that every CEO can learn from.
The black box is a design choice.
As one reader noted, opacity in AI systems is not inevitable; it is a design decision. Leaders can demand transparency. Ask vendors for reasoning visibility, audit trails, and traceable outputs. If AI influences your decisions, you have the right to see how it got there.
Verification beats velocity.
Another commenter described breaking complex tasks into smaller, verifiable steps and using multiple AIs to cross-check results. This is smart delegation applied to machines. Instead of one big risky output, create smaller checkpoints you can validate.
Skill erosion is a leadership risk.
Readers warned that over-reliance on AI turns humans into “quality checkers for black box outputs”. CEOs should guard against that. Prompts are not only about speed; they are a way to keep thinking sharp while scaling capability.
Transparency builds trust.
One commenter called for “verifiable reasoning chains” and “audit trails”. That is corporate gold. Trust is not a feeling. It is a system of accountability.
Together, these insights highlight the frontier of AI leadership: control, clarity, and competence.
CEOs do not work with wizards. They lead them.
Leadership in the Age of AI
AI does not remove the need for leadership. It multiplies it.
The best leaders set intent, define quality, and maintain oversight. AI becomes valuable only when guided by human judgment and business context.
When you prompt with structure, check with discipline, and refine with care, AI becomes leverage. When you do not, it becomes risk.
The goal is not to work with wizards. It is to lead them with clarity and purpose.