AI’s Dial-Up Era
Generative AI’s clunky interfaces echo the early web. The iPhone moment for AI may be closer than CEOs think.
Generative AI today resembles the internet of 1996: powerful but awkward, with clunky user interfaces.
Competing models like GPT-5, Claude, Grok, and Gemini show tremendous progress, but adoption is slowed by poor usability.
The “iPhone moment” for AI interfaces is coming, and leaders must prepare now for rapid mainstream adoption.
Think back to the early days of the internet. Building a website required specialist knowledge, quirky software, and considerable patience. The results were often messy, inconsistent, and fragile. Yet, within a decade, sleek websites and easy-to-use platforms reshaped entire industries, leaving behind those who failed to adapt.
Today, generative AI is in that same awkward adolescence. The tools are astonishing in capability, able to write, analyse, and even reason, but the way we interact with them is clunky and inefficient. Typing prompts into a chat box feels as dated as using Dreamweaver or HotDog in 1997. For CEOs and business leaders, this is both a challenge and an opportunity: the technology is here, but its interface is not yet ready for the mainstream. History suggests the leap from “dial-up” to seamless ubiquity could come faster than expected. The winners will be those who prepare now.
The Early Web: Clunky Tools and Slow Beginnings
In the mid-1990s, creating a half-decent website was cumbersome. Early web pioneers had to hand-code HTML or rely on rudimentary editors like Macromedia Dreamweaver or Sausage Software’s HotDog. These tools promised “what you see is what you get” simplicity, but in practice, they were limited and quirky. For example, HotDog (released in 1995 by an Australian company) garnered attention for its ease of use, yet building a polished site still required significant technical know-how. Websites of that era often looked clunky or broke easily across browsers, as standards were nascent and every tool did things a bit differently. The focus was on just getting something to work online, rather than optimising user experience.
It wasn’t until years later that web design became more standardised and user-friendly. By the 2000s, web publishing underwent a revolution: improved browsers, the rise of CSS for consistent styling, and the emergence of content management systems (WordPress, etc.) dramatically lowered the barrier to entry. What once took experts hours in Dreamweaver could eventually be done by anyone with a simple template or blog. In retrospect, those early days of the web can be seen as the “dial-up” phase of online innovation – full of promise, but hampered by slow speeds, clunky interfaces, and extensive manual adjustments to make things presentable.
This early-web struggle offers a helpful analogy for today’s generative AI boom. A UCLA professor, John Villasenor, recently remarked that “Generative AI today is like the internet circa 1996” – impressive compared to what came before, “but it’s nothing compared to what would happen 15, 20 years later with future innovations”. In other words, we are in AI’s infancy, analogous to the web’s awkward adolescent phase.
Generative AI Today: Powerful Models, Primitive Interfaces
Fast-forward to 2025, and we’re witnessing an explosion of generative AI capabilities. Systems like OpenAI’s ChatGPT have dazzled the world by composing text, writing code, and answering questions at a level barely imaginable a decade ago. Yet, for all their brainpower, the way we interact with these AI models remains oddly primitive. Most generative AI tools today present the user with little more than a blank text box, usually a chat-style interface, where the user types a prompt and the AI types back an answer.
One tech editor quipped that “today’s generative AI has a clunky interface. It’s slow. It can be expensive. And it still isn’t pervasive enough to change everything”. The user experience, in essence, is stuck in a dial-up era of AI. You enter a prompt, then often watch the answer materialise character-by-character, almost as if you were on a 1200-baud modem connection.
Why are these interfaces so basic? Mainly because the underlying AI models are so complex and unpredictable that developers have defaulted to a simple chat box as the universal interface, avoiding more guided or structured UIs. This has a side effect: prompt engineering – the art of crafting just the right query – has become a key skill when using AI.
Much like early web designers had to tinker with HTML tags or CSS quirks to get a page looking right, today an AI user might spend considerable time rephrasing and refining a question to get a satisfactory response. Field studies show that users in professional settings will “reword, reframe, backtrack and adjust” prompts repeatedly, learning through trial and error what phrasing “works” because otherwise “the output won’t feel usable, safe, or trustworthy”. This phenomenon has been dubbed “prompt perfection” – where users essentially internalise the burden of designing the interaction themselves.
Here’s an example of a meta-prompt I use regularly to engineer the perfect prompt. End users should not have to do this; it is evidence of a failed user experience.
You are an elite prompt engineer tasked with architecting the most effective, efficient, and contextually aware prompts for large language models (LLMs). For every task, your goal is to:
Extract the user's core intent and reframe it as a clear, targeted prompt.
Structure inputs to optimise model reasoning, formatting, and creativity.
Anticipate ambiguities and preemptively clarify edge cases.
Incorporate relevant domain-specific terminology, constraints, and examples.
Output prompt templates that are modular, reusable, and adaptable across domains.
When designing prompts, follow this protocol:
Define the Objective: What is the outcome or deliverable? Be unambiguous.
Understand the Domain: Use contextual cues (e.g., cooling tower paperwork, ISO curation, gene …).
Choose the Right Format: Narrative, JSON, bullet list, markdown, or code, based on the use case.
Inject Constraints: Word limits, tone, persona, structure (e.g., headers for documents).
Build Examples: Use "few-shot" learning by embedding examples if needed.
Simulate a Test Run: Predict how the LLM will respond. Refine.
Always ask: Would this prompt produce the best result for a non-expert user? If not, revise.
You are now the Prompt Architect. Go beyond instruction: design interactions.
Current AI interfaces offer tremendous flexibility (you can ask almost anything) but minimal guidance. As one UX expert observed: “Most LLM user interfaces…have flattened the surface of interaction, collapsing entire workflows into a single input box. In doing so, we’ve created a space where flexibility is high but support is low. Users are left to carry the structure themselves”.
The result: using generative AI can feel “almost effortless” on the surface (ask and get an answer), but beneath that simplicity, the cognitive work remains. The mental energy spent figuring out how to prompt and how to interpret the AI’s response is the modern equivalent of manually tweaking HTML code and hitting refresh repeatedly. It’s clunky. It doesn’t scale well, especially for non-technical users or high-stakes scenarios.
Another issue is the lack of standardisation in AI interfaces. During the early web, inconsistent browsers and proprietary plugins caused headaches; today, we have a similar wild west with AI tools. As one analyst noted, “Large Language Models are becoming more capable every year, but the interfaces we use to interact with them have hardly changed”.
The Race to Improve AI UX: GPT-5 and Its Peers
Despite the current interface limitations, the capabilities of generative AI are leaping ahead. OpenAI’s GPT-5, released in August 2025, is not just a minor tweak but a “completely redesigned system with specialised components…offering better reasoning abilities, fewer factual errors, and giving developers more control”. Early reactions were mixed: excitement at its accuracy gains, but also recognition that without better interfaces, many users won’t fully exploit its power.
Meanwhile, Anthropic’s Claude 2 made headlines in 2023 for its 100,000-token context window – roughly equivalent to absorbing an entire novel as input. For businesses, this means Claude can digest massive internal documents or regulations, but still only through a chat or API interface.
Elon Musk’s Grok, launched in late 2023, positioned itself as a rebellious chatbot with real-time access to social data. Its witty, less-filtered style is an experiment in making AI more engaging. But again, it’s delivered through chat.
Google’s Gemini Ultra outperformed GPT-4 on multiple benchmarks, including academic exams, and later Gemini releases offer context windows of up to a million tokens (Wikipedia, 2024). Importantly, Google embedded Gemini into its ecosystem – Search, Gmail, Docs, even Pixel devices – hinting at how integration, not just intelligence, may drive adoption.
Each model showcases astonishing power. But all are still accessed primarily through the same clunky chat or API interface. The raw horsepower is there, but the steering wheel hasn’t been invented yet.
Toward an “iPhone Moment” for AI
If today’s AI feels like the dial-up web, what would broadband look like? Many believe it will require rethinking the interface completely – possibly beyond the screen and keyboard.
One signal is OpenAI’s partnership with Jony Ive, Apple’s former design chief. In 2025, OpenAI acquired io, the hardware start-up Ive co-founded, to build a “family of devices” that make AI easier and more natural to use. CEO Sam Altman even described the goal as creating the “next ur-device,” shipped at unprecedented scale.
This hints at a new paradigm: not typing into a chat box, but seamless, ambient interaction. A wearable assistant, smart glasses, or voice-driven device could become AI’s iPhone moment, turning clunky novelty into mass adoption.
The New Yorker compared this to moving from the PalmPilot to the iPhone: from niche and awkward, to mainstream and indispensable.
Impact on Industries and Business Leaders
For traditional industries, the current AI landscape mirrors the web in 1996. Employees are experimenting enthusiastically, but enterprises hesitate. A McKinsey survey found that 91% of employees who were aware of AI had used it at least occasionally, yet only 13% of companies had implemented multiple use cases.
The barriers? Interfaces that feel disconnected from daily workflows, a lack of standardisation, and trust concerns. One executive noted employees were trying AI, “yet there’s no easy-to-prove business case” for scaled adoption.
This will change. When interfaces mature – when asking AI is as natural as Googling or checking a smartphone – effective adoption will surge. Waiting for perfection is risky: companies that ignored the web until the 2000s lost ground. Likewise, businesses that delay engaging with AI now may find themselves scrambling later.
Prudent leaders are piloting projects today, building internal expertise, and even creating custom UIs to make AI fit their workflows. Just as early companies built intranets and portals to tame the chaotic web, businesses now can layer domain-specific, user-friendly tools atop generative models.
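To make that concrete, here is a minimal sketch, in Python, of what such a domain-specific layer might look like. It is illustrative only: the contract-review scenario, the field names, and the build_contract_review_prompt helper are my own assumptions, not a reference to any particular product. The idea is that the interface carries the prompt structure, so the employee fills in a short form and never writes a prompt at all.

from dataclasses import dataclass

@dataclass
class ContractReviewRequest:
    # Structured fields a reviewer fills in via a simple form; no prompt-writing required.
    contract_text: str
    jurisdiction: str = "UK"
    risk_focus: str = "termination and liability clauses"
    audience: str = "non-legal executive"
    max_words: int = 300

def build_contract_review_prompt(req: ContractReviewRequest) -> str:
    # The prompt-engineering knowledge lives here, inside the tool, not in each user's head.
    return (
        f"You are a contract analyst reviewing an agreement under {req.jurisdiction} law.\n"
        f"Focus on: {req.risk_focus}.\n"
        f"Write for a {req.audience} in plain language, in no more than {req.max_words} words.\n"
        "Structure the answer as: 1) key risks, 2) questions to ask the counterparty.\n\n"
        f"Contract:\n{req.contract_text}"
    )

if __name__ == "__main__":
    request = ContractReviewRequest(contract_text="[contract text pasted into the form]")
    # The resulting string is then sent to whichever model API the business uses.
    print(build_contract_review_prompt(request))

The call to the model itself is deliberately left out, since it will differ by vendor; the point is that the prompt-engineering expertise is captured once, inside the tool, rather than rediscovered by every user through trial and error.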
From Clunkiness to Ubiquity
The trajectory of generative AI mirrors the early web: from clunky beginnings accessible to a curious few, to an eventual transformation that touches everyone. As one analyst wryly noted, today’s ChatGPT will likely feel “as charmingly old-timey as AOL circa 1998” in just a few years.
For CEOs, the lesson is clear: don’t mistake clunky interfaces for a lack of potential. Generative AI’s power is real, and its iPhone moment is coming. Those who experiment now will be ready when the interface catches up with the intelligence.
References
Anthropic. (2023, July 11). Claude 2 release. https://www.anthropic.com/index/claude-2
Chayka, K. (2025, January 8). Sam Altman and Jony Ive will force A.I. into your life. The New Yorker. https://www.newyorker.com/culture/infinite-scroll/sam-altman-and-jony-ive-will-force-ai-into-your-life
Goode, L. (2025, January 10). OpenAI’s big bet that Jony Ive can make AI hardware work. WIRED. https://www.wired.com/story/jony-ive-open-ai-hardware-io/
IT History Society. (n.d.). HotDog HTML Editor. https://do.ithistory.org/db/software/sausage-software/hotdog-html-editor
McCracken, H. (2024, September 4). Why AI is still stuck in its dial-up era. Fast Company. https://www.fastcompany.com/91155241/why-ai-is-still-stuck-in-its-dial-up-era
McKinsey & Company. (2024, March 12). Generative AI’s next inflection point: From experimentation to transformation. https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/gen-ais-next-inflection-point-from-employee-experimentation-to-organizational-transformation
Nelmes, S. (2025, January 20). Prompt perfection and the flattening of UX. Medium. https://medium.com/signals-from-the-field/prompt-perfection-and-the-flattening-of-ux-c07b98da1635
OpenAI. (2025, August 12). GPT-5 launch announcement (via Revolgy summary). https://openai.com/gpt-5/
Tredence. (2025, March 4). AI adoption challenges: Why executives are holding back. https://www.tredence.com/blog/ai-adoption-challenges-why-executives-are-slow-to-embrace-generative-ai
Villasenor, J. (2025, February 3). Generative AI is like the internet in 1996. UCLA Institute for Technology, Law & Policy. https://c3.ai/why-generative-ai-is-like-the-internet-circa-1996/
Wikipedia contributors. (2024). Gemini (language model). In Wikipedia. https://en.wikipedia.org/wiki/Gemini_(language_model)
Wired. (2023, November 5). Elon Musk launches Grok, an AI with a rebellious streak. https://www.wired.com/story/elon-musk-announces-grok-a-rebellious-ai-without-guardrails/
Great minds think alike. Jason Ross from Time Under Tension published a piece on the same topic today, and it’s a great read: https://open.substack.com/pub/timeundertension/p/the-ai-time-compression?r=2wzfb&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false