Your Staff’s Secret AI Habit
Shadow AI is everywhere. Here’s why staff hide it, what leaders can do about it, and how to turn it into measured productivity.
Shadow AI is common because policy, training, and culture have not kept up.
Transparent, encouraged AI use delivers measurable gains when it is governed and tracked.
Use the Economic Reform Roundtable moment to move from rhetoric to practice with a 90-day plan.

We are walking into the Australian government’s Economic Reform Roundtable, where productivity is the headline act. It is invite-only. The agenda explicitly includes using data, digital, and AI to lift living standards.
Business is already framing AI as a practical lever for growth. As the Business Council’s Bran Black put it, “AI can help us work smarter, not harder.” He also calls AI “our next big lever for economic growth.”
Shadow AI is already inside your company
Employees are adopting AI faster than governance can keep up. Software AG’s global study suggests that around half of employees are using unapproved tools. TELUS Digital’s 2025 survey found 68 per cent of enterprise AI users access assistants through personal accounts, and 57 per cent admit to entering sensitive data.
KPMG and the University of Melbourne report that 57 per cent of workers hide AI use, often presenting AI-generated content as their own. Only 47 per cent say they have received any AI training.
Why hide it? Slack’s Workforce Index points to stigma. Nearly half of desk workers are uncomfortable telling managers they used AI, with fears of being seen as “cheating,” “less competent,” or “lazy” among the top reasons.
On the street, anxiety is real. ABC profiled Australians who say, “We all got AI-ed,” and “I don’t know what I’m going to do now.” These are small quotes with big signals for leaders.
The real upside when you bring it into the sunlight
Where companies intentionally encourage and measure AI use, the gains are tangible. A field experiment with 5,179 support agents found that generative AI increased issues resolved per hour by about 14 per cent on average, with the largest gains for novice workers.
Developers show similar effects. GitHub’s controlled studies report up to 55 per cent faster coding on specific tasks with AI pair programming, alongside higher satisfaction.
Large rollouts point the same way. The UK civil service Copilot trial reported about 26 minutes saved per user per day, roughly two weeks a year.
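For scale, here is a hedged back-of-envelope check on that conversion. The day length and working-day count below are my assumptions, not figures from the trial itself.

```python
# Sanity check: 26 minutes saved per user per day, converted to weeks per year.
minutes_per_day = 26
working_days_per_year = 225   # assumption: ~45 working weeks x 5 days
hours_per_working_day = 7.5   # assumption: typical full-time day

hours_saved = minutes_per_day * working_days_per_year / 60
weeks_saved = hours_saved / (hours_per_working_day * 5)

print(f"~{hours_saved:.0f} hours a year, or about {weeks_saved:.1f} working weeks")
# ~98 hours a year, or about 2.6 working weeks
```

That lands in the same ballpark as the trial’s headline claim, and it is the kind of simple verification worth running on any vendor’s productivity figure before you repeat it.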
This is not a theory. It is a management choice. You either have hidden usage with unmanaged risk or open usage with policy, training, measurement, and compounding productivity.
What the Roundtable means for you
Treasury’s productivity brief puts data, digital, and support for AI adoption on the national to-do list. That is helpful cover for CEOs to move from pilots to scale. The brief lists “supporting the adoption of AI” as a work in progress.
At the same time, Australia’s public trust in AI is shaky, which explains the cultural friction you will see inside firms. KPMG’s 2025 study finds only 36 per cent of Australians are willing to trust AI, even as usage grows.
Your job as CEO
Remove secrecy by making disclosure normal and safe.
Protect customers and intellectual property by steering staff to approved tools with proper data guardrails.
Demand measurement, not hype. If AI is saving time, it should show up in cycle times, throughput, or customer outcomes.
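To make that last point concrete, here is a minimal sketch of what “measurement, not hype” can look like: comparing median cycle times for the same work before and after an approved-AI rollout. The numbers and the metric are placeholders; swap in whatever your systems already track.

```python
from statistics import median

# Hypothetical cycle times (hours) for the same ticket type,
# sampled before and after the approved-AI rollout.
before = [18.5, 22.0, 16.0, 30.5, 21.0, 19.5, 25.0, 17.5]
after = [14.0, 19.5, 12.5, 24.0, 16.5, 15.0, 20.0, 13.5]

baseline = median(before)
current = median(after)
change = (baseline - current) / baseline * 100

print(f"Median cycle time: {baseline:.1f}h -> {current:.1f}h "
      f"({change:.0f}% faster)")
```

The tooling does not matter; the habit of tying every AI claim to a metric you already report does.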
A final reminder from the regulator’s playbook. The Australian Information Commissioner advises organisations not to enter personal information into publicly available generative AI tools. Set the bar there, then provide secure alternatives.
“Attendance is by invitation only.” Take that line from the Roundtable page as your cue: most of us will only read the outcomes in the media. Do not wait for communiqués. Build your own AI productivity plan now.
Operationalising AI without the creep of Shadow AI
1) Publish a one-page AI Use Standard that staff can follow
2) Turn on safe defaults before you train anyone
3) Train for how work is done, not what AI is
4) Start where the evidence is strongest and the risk is low
5) Measure it like any other transformation
6) Create a “no paste” perimeter for sensitive data
7) Stand up an AI Champions network
8) Replace ban lists with bright lines
9) Policy footers and signatures that normalise disclosure
30-60-90 day plan you can copy
1) Publish a one-page AI Use Standard that staff can follow
Plain-language rules: what is allowed, what is never allowed, and how to disclose usage in deliverables.
Copy the OAIC principle in your words: “Do not paste personal or sensitive information into public AI tools.” Link to your approved alternatives (a sketch of what that check can look like follows this list).
Add a short disclosure line for docs and emails, for example, “Assisted by approved AI for drafting and summarisation.”
Reference frameworks without jargon. The NIST AI RMF and its Generative AI Profile give you a checklist of risk controls to adapt.
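To make the OAIC principle above operational rather than aspirational, here is a minimal sketch of a “no paste” pre-submission screen, in the spirit of play six. The patterns are illustrative assumptions, not a complete data-loss-prevention setup; in practice this logic would live in a browser extension, a proxy, or the approved tool itself.

```python
import re

# Illustrative patterns only -- a real "no paste" perimeter needs proper
# DLP tooling; this sketch just shows the shape of the check.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AU phone number": re.compile(r"\b(?:\+61|0)[23478]\d{8}\b"),
    "TFN-like number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
}

def screen_before_submit(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Customer jane.doe@example.com called from 0412345678 about a refund."
hits = screen_before_submit(draft)
if hits:
    print(f"Blocked: remove {', '.join(hits)} before using a public AI tool.")
else:
    print("No obvious sensitive data found. Proceed with the approved tool.")
```

Pair every “blocked” message with a pointer to the approved alternative, so the guardrail redirects behaviour instead of just refusing it.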
2) Turn on safe defaults before you train anyone