The Return of Clippy: Why Modern AI Is Repeating Microsoft's Most Infamous Mistake

As UX consultants, we're seeing a disturbing pattern: modern AI assistants are making the exact same mistakes that made Microsoft's Clippy the most hated feature in software history.

If you're over 30, you remember Clippy—the animated paperclip that would interrupt your work at the worst possible moments to ask "Would you like help with that?" Now, nearly three decades later, companies are rebuilding Clippy with better graphics and calling it innovation.

This isn't just nostalgia or criticism for its own sake. This is a critical lesson in what happens when companies build solutions looking for problems instead of solving actual user needs—and why strategic UX design agencies need to be involved from conception, not after users revolt.

What Was Clippy (For Those Who Missed the Trauma)?

Microsoft Office Assistant—affectionately (or not so affectionately) known as Clippy—launched in the late 1990s as an "intelligent" assistant for Office users.

The promise: An AI helper that would anticipate your needs, offer contextual suggestions, and make Office easier to use.

The reality: An intrusive animated character that interrupted your workflow constantly with unhelpful suggestions you never asked for.

The user experience:

  • You're focused on writing a document
  • Clippy pops up: "It looks like you're writing a letter! Would you like help?"
  • No, Clippy, I'm writing a report, and you just broke my concentration
  • Clippy appears again five minutes later with another useless suggestion
  • Users frantically search for how to disable Clippy
  • Clippy becomes the most universally hated feature in software history

Why it failed:

  • Intrusive rather than helpful
  • Unpredictable timing and suggestions
  • No learning from user behavior or preferences
  • Unable to be customized or controlled effectively
  • Annoying rather than delightful
  • Poor contextual awareness of what users actually needed

Microsoft eventually removed Clippy, and it became a punchline about bad UX design.

The lesson: Even well-intentioned AI assistance can become user-hostile when it prioritizes system capabilities over user needs.

The Modern Clippy: Microsoft Copilot and AI Assistants

Fast forward to 2025, and we're seeing Clippy's ghost everywhere.

Microsoft Copilot, GitHub Copilot, Google's AI assistants, Apple Intelligence—all promising to revolutionize how we work by anticipating our needs and offering "intelligent" assistance.

The problem: They're making the same fundamental mistakes Clippy made.

As UX design agencies working with companies implementing AI features, we're seeing:

Intrusive Interruptions

Modern AI assistants pop up at inappropriate times, breaking user flow and concentration. They interrupt focused work to offer suggestions users didn't request and often don't want.

Example: You're deep in writing code or designing an interface, and suddenly an AI assistant appears suggesting you use a different approach—breaking your concentration and forcing you to evaluate whether the suggestion is valuable.

The UX failure: No consideration for user state, focus, or workflow.
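The fix is not mysterious: a respectful assistant gates its interruptions on user state before it speaks. Here is a minimal sketch of what that gate could look like—every name and threshold below is our own illustration, not any shipping product's API:

```typescript
// Sketch of a focus-aware gate for assistant suggestions.
// All names and thresholds are illustrative assumptions.

interface UserState {
  msSinceLastKeystroke: number; // active typing means the user is focused
  msSinceLastDismissal: number; // how recently they waved the assistant away
  inDistractionFreeMode: boolean;
}

const IDLE_THRESHOLD_MS = 30_000;      // only speak up during a natural pause
const DISMISSAL_COOLDOWN_MS = 600_000; // back off for 10 min after a dismissal

function maySurfaceSuggestion(state: UserState): boolean {
  if (state.inDistractionFreeMode) return false; // explicit "do not disturb" always wins
  if (state.msSinceLastKeystroke < IDLE_THRESHOLD_MS) return false; // mid-flow: stay quiet
  if (state.msSinceLastDismissal < DISMISSAL_COOLDOWN_MS) return false; // respect the "no"
  return true;
}
```

Three conditions, a dozen lines—and it is more consideration for user state than Clippy ever showed.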

Poor Contextual Understanding

Like Clippy thinking every document was a letter, modern AI assistants often misunderstand what users are trying to accomplish.

Example: AI coding assistants that suggest inefficient patterns because they don't understand the broader architecture. AI writing assistants that suggest tone changes that undermine the author's intent.

The UX failure: Surface-level analysis without deep understanding of user goals.

Inability to Learn

Despite being "AI," many of these systems don't actually learn from user behavior. You dismiss the same suggestion repeatedly, and it keeps appearing.

The UX failure: No personalization or adaptation to individual user preferences.
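Dismissal memory is the feature Clippy never had and many modern assistants still lack. A minimal sketch of "taking the hint"—the class name and the two-strikes threshold are illustrative assumptions, not any vendor's implementation:

```typescript
// Sketch of dismissal memory: a suggestion dismissed repeatedly
// should stop appearing. Names and thresholds are illustrative.

class SuggestionMemory {
  private dismissals = new Map<string, number>();

  constructor(private maxDismissals = 2) {}

  recordDismissal(suggestionId: string): void {
    this.dismissals.set(suggestionId, (this.dismissals.get(suggestionId) ?? 0) + 1);
  }

  shouldShow(suggestionId: string): boolean {
    // Once the user says "no" enough times, stop asking permanently.
    return (this.dismissals.get(suggestionId) ?? 0) < this.maxDismissals;
  }
}
```

Two dismissals is an arbitrary cutoff; the point is that the signal gets recorded at all.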

Solution Looking for Problems

This is the core issue: companies are implementing AI because competitors are implementing AI, not because they've identified specific user problems that AI solves better than existing solutions.

Product design consultants call this "solution-first thinking"—and it's a recipe for failure.

The Rabbit AI Disaster: Clippy in Hardware Form

Want a perfect modern example of Clippy thinking? Meet Rabbit AI and Humane AI Pin.

The pitch: Dedicated AI gadgets that would revolutionize how you interact with technology. Carry the Rabbit R1 in your hand, or pin the Humane device to your chest like a Star Trek badge, and it'll handle everything through voice commands.

The reality: Expensive, impractical devices that solved problems nobody had, while also creating new ones.

The Humane AI Pin Failures

What they promised:

  • Voice-controlled AI assistant you wear on your chest
  • Projector that displays interface on your hand
  • Eliminate need to pull out your phone
  • Futuristic, seamless interaction

What users got:

  • $699 device that didn't work reliably
  • Projection on hand was unreadable in most lighting conditions
  • Voice recognition failed constantly (couldn't even recognize restaurant names)
  • Accessibility nightmare (what if you don't have hands?)
  • Social stigma of looking like a tech bro cosplaying Star Trek
  • Solution to a problem that didn't exist

The UX disasters:

  • Readability: Projecting on skin doesn't provide sufficient contrast
  • Lighting dependency: Only worked in perfect lighting conditions
  • Voice reliance: Primary interface failed in noisy environments
  • Social acceptability: People looked ridiculous wearing it
  • Battery life: Died quickly, leaving users without access
  • Functionality: Basic tasks were harder than just using a phone

As UX consultants working across industries, we see this pattern: technology chasing spectacle rather than solving actual problems.

The Research That Didn't Happen

Any UX design agency worth hiring would have conducted basic research before building these products:

Questions we'd ask:

  • Do people actually struggle with pulling phones from pockets?
  • Would they wear a chest-mounted device in public?
  • Can projection on hands work in varied lighting?
  • Is voice the right primary interface?
  • What problems does this solve better than existing solutions?

The answers would have revealed: This product solves no real user problems and creates significant new friction.

But they didn't ask. They built the solution and hoped problems would emerge.

The Pattern: Solutions Looking for Problems

This is the fundamental issue plaguing modern product development, and it's not unique to AI.

The wrong process:

  1. Technology becomes available (AI, AR, VR, blockchain, etc.)
  2. Companies panic that competitors might gain advantage
  3. Leadership demands "we need [technology] in our product"
  4. Teams build features using the technology
  5. Marketing creates demos showing aspirational use cases
  6. Product launches
  7. Users don't adopt it or actively disable it
  8. Company quietly removes feature
  9. Technology becomes punchline (remember Google Glass, 3D TVs, NFTs?)

The right process (that UX design agencies advocate):

  1. Research user problems through observation, interviews, and data analysis
  2. Define specific problems that cause measurable user friction
  3. Explore solutions including both technological and non-technological approaches
  4. Prototype cheaply to validate concepts before investment
  5. Test with real users in real contexts
  6. Iterate based on feedback until solution actually works
  7. Validate willingness to pay for the improvement
  8. Build only what users need

The difference: Starting with problems versus starting with solutions.

The Adobe/Microsoft Pattern: Buying Rather Than Building

One reason products like Clippy happen: companies acquire technologies and bolt them onto existing products without strategic integration.

Microsoft's Acquisition Strategy

Microsoft has a long history of buying products and awkwardly integrating them:

  • Office bundled together applications that began as separate products
  • Features got bolted on without cohesive vision
  • Result: Bloated, inconsistent products full of features users don't understand or want

Why Word is terrible: Decades of acquisitions and feature additions without strategic UX oversight.

Adobe's Similar Path

Adobe built its empire through acquisitions:

  • Macromedia Flash → Adobe Animate
  • Macromedia Dreamweaver → Adobe Dreamweaver
  • Multiple other tools integrated into Creative Cloud

The result: Tools that work but feel disconnected. Features overlap. UX patterns conflict. Learning curves multiply.

As fractional design officers working with companies across industries, we see this constantly: acquisition without integration strategy leads to frankenproducts.

The Government Contract Problem: Why Bad Products Survive

Here's an uncomfortable truth: products like Microsoft Office and Adobe Creative Suite survive despite poor UX because of institutional inertia.

They live on through:

  • Government contracts that lock in multi-year commitments
  • University licensing deals
  • Corporate site licenses
  • Training infrastructure built around them
  • File format dependencies
  • Switching costs too high for large organizations

This creates perverse incentives:

  • Companies don't need to prioritize user experience
  • They just need to maintain contracts
  • Innovation stagnates
  • Users suffer
  • But revenue continues

UX consulting firms help break this cycle by showing companies the hidden costs of poor user experience:

  • Lost productivity
  • Training expenses
  • Employee frustration
  • Competitive disadvantage
  • Recruitment challenges (people want modern tools)

The Tool Evolution: How Designers Escaped Adobe

For decades, designers were trapped in Adobe's ecosystem. Photoshop was the only viable tool for creating web designs. Then innovation happened:

The Sketch Revolution (2010s)

Sketch arrived and changed everything:

  • Built specifically for UI/UX design
  • Lighter, faster, more focused
  • Affordable pricing
  • Mac-native experience
  • Symbols and reusable components

Designers fled Adobe en masse. We didn't use Photoshop for 10 years because Sketch solved our actual problems better.

The Figma Revolution (Mid-2010s to Present)

Then Figma arrived and changed everything again:

  • Browser-based (works anywhere)
  • Real-time collaboration
  • Component systems and design libraries
  • Developer handoff tools
  • Free tier for individuals

Now even Sketch users are switching. Because Figma solves real problems better than alternatives.

The lesson: When you actually solve user problems better than incumbents, users switch—regardless of institutional inertia.

Other Beloved Tools

Balsamiq: Simple, fast wireframing that feels like sketching. Perfect for low-fidelity exploration.

POP (Prototyping on Paper): Revolutionary tool that let you photograph paper sketches and turn them into clickable prototypes with hotspots. Brilliant for rapid validation.

InVision: Enabled prototyping and collaboration before Figma existed.

These tools succeeded because they solved specific user problems exceptionally well.

The Research Methods We Should Use (But Often Don't)

As product design consultants, we advocate for proper validation before building. Here are methods that would have prevented Clippy, Rabbit AI, and countless other failures:

Paper Prototyping

The cheapest, fastest validation:

  • Sketch interfaces on paper
  • Take them to target users (coffee shop, office, wherever)
  • Watch them interact with paper mockups
  • Learn what works and what doesn't
  • Cost: $5 Starbucks gift card per participant
  • Time: Hours, not weeks
  • Value: Prevents months of wasted development

Why it works: You learn whether core concepts resonate before investing in development.

Wizard of Oz Testing

Test features before building them:

  • Create interface that appears to have AI/automation
  • Actually have humans controlling it behind the scenes
  • Observe whether users want the feature
  • Measure engagement and value

Example: Put an "auto-save" button in your app. Track clicks. If nobody uses it, don't build the actual auto-save infrastructure.

Why it works: You validate demand before technical investment.
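The tally behind a painted-door test like that can be trivial. A sketch of the decision logic—the 15% click-rate threshold is an assumption you'd calibrate per product, not a standard:

```typescript
// Sketch of a "painted door" tally for Wizard of Oz testing: count how
// many users click the placeholder feature, then decide whether demand
// justifies building it. Names and the threshold are our assumptions.

interface PaintedDoorStats {
  usersShown: number;   // users who saw the fake "auto-save" button
  usersClicked: number; // users who actually tried it
}

function demandJustifiesBuilding(
  stats: PaintedDoorStats,
  minClickRate = 0.15, // illustrative bar for "enough interest to build"
): boolean {
  if (stats.usersShown === 0) return false; // no exposure, no signal
  return stats.usersClicked / stats.usersShown >= minClickRate;
}
```

A week of click data against logic this simple can save months of building infrastructure nobody asked for.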

Ethnographic Research

Observe users in natural contexts:

  • Go to their offices/homes
  • Watch how they actually work
  • Identify real friction points
  • Don't ask what they want—watch what they need

Example: Observing office workers would have revealed nobody wants interruptions while focused. Clippy would have died in research.

Why it works: Users can't always articulate their needs, but observation reveals them.

Contextual Inquiry

Understand complete workflows:

  • Shadow users through entire processes
  • See where current tools fail
  • Identify handoff points and friction
  • Map the complete ecosystem

Why it works: Solutions that optimize one step but break the workflow create more problems than they solve.

Usability Testing

Test with real people doing real tasks:

  • Give users actual goals to accomplish
  • Don't tell them how to do it
  • Observe where they struggle
  • Iterate based on findings

Why it works: What seems intuitive to designers often confuses users.

UX consultants working with service-focused companies emphasize: You can't skip research and expect good outcomes.

The Startup Opportunity: Hire the Yankees of UX

Here's something exciting: in 2025, the market for UX talent is more accessible than ever.

The situation:

  • Many experienced designers were laid off in 2022-2024
  • Top talent is available as fractional UX or consultants
  • You can hire world-class teams without Silicon Valley salaries
  • Remote work makes geographic boundaries irrelevant

What you could do with $600K/year: Hire 5-6 senior UX professionals for a year to:

  • Conduct comprehensive user research
  • Validate your product concept
  • Identify competitive advantages
  • Design validated solutions
  • De-risk your entire product strategy

Compare that to:

  • Hiring developers immediately: $2M+ to build something nobody wants
  • Launching and failing: $5M+ wasted, company reputation damaged
  • Pivoting after launch: Another $2M+ to rebuild

$600K to de-risk millions in development investment is the bargain of the century.

The Scrappy Startup Alternative

If you have $10-15K instead:

You can still get quality UX:

  • One senior fractional design officer part-time (20 hours/week)
  • Use modern tools (Figma + Claude for rapid prototyping)
  • Focus on core validation before building
  • Paper prototypes and quick testing
  • Iterate rapidly based on feedback

What you get:

  • Validated product concept
  • User-tested designs
  • Clear roadmap
  • De-risked development
  • Professional design system foundation

UX design agencies in Chicago and other markets offer flexible arrangements for startups with limited budgets. The key is getting strategic UX thinking involved early.

The Equity Arrangement

For pre-funding startups:

Many fractional UX leaders will consider equity arrangements:

  • Lower upfront cash commitment
  • Equity stake aligns incentives
  • Extends runway significantly
  • Gets experienced leadership involved early

What this creates:

  • UX leader invested in company success
  • Strategic thinking from conception
  • Proper processes established from day one
  • Higher likelihood of product-market fit
  • Better positioning for funding

The Developer-First Problem

One pattern we see constantly: companies hire developers first, then try to figure out what to build.

Why this happens:

  • Development is concrete and tangible
  • You can show progress (working software)
  • Useful for raising funding
  • Demonstrates technical capability

Why this fails:

  • Building wrong thing efficiently still results in failure
  • Sunk cost fallacy makes pivoting harder
  • Developers build what's interesting technically, not what users need
  • No validation before major investment

The alternative:

  • Start with UX consultants to validate concepts
  • Use paper prototypes and quick testing
  • Build only after validation
  • Developers build the right thing, not random things

Historical pattern: Almost every successful tech company started with a different product than what made them successful:

  • Amazon: Started as bookstore, became everything store
  • Google: Started as search, became advertising platform
  • PayPal: Started as PDA payments, became online payments
  • Facebook: Started as a college-only directory, became global social network

The lesson: Initial ideas are almost always wrong. Validate and iterate before massive investment.

The Reality Gap: Demo vs. Product

A recurring pattern in failed products: amazing demos that don't reflect actual product capability.

Google Glass

The demo video showed:

  • Seamless turn-by-turn navigation
  • Instant translation
  • Hands-free photography
  • Natural voice interactions
  • Social sharing without friction

The actual product:

  • Clunky, limited functionality
  • Voice recognition failed constantly
  • Battery died quickly
  • Made users look ridiculous
  • Severe privacy concerns
  • No killer use case

The gap between demo and reality destroyed credibility.

Humane AI Pin

The demo showed:

  • Confident voice recognition
  • Useful AI assistance
  • Readable hand projection
  • Seamless interactions

Real-world testing revealed:

  • Couldn't recognize basic restaurant names
  • AI suggestions often wrong
  • Projection unreadable in most conditions
  • Voice interface failed in real environments

The Verge's review: Basically everything was worse than using a phone.

The Vaporware Problem

As UX design agencies working with startups and enterprises, we warn clients: Don't build demos that show aspirational futures unless you can deliver them.

Why:

  • Damages trust when reality disappoints
  • Creates unrealistic expectations
  • Wastes development time on demo features
  • Distracts from building actual value
  • Can constitute fraud if seeking investment

Better approach:

  • Demo actual working product
  • Show real capabilities, not aspirations
  • Be honest about limitations
  • Build trust through transparency
  • Exceed expectations rather than underdeliver

The AI Slop Crisis: Dead Internet Theory Reality

We're now facing a crisis that makes Clippy look quaint: the internet is being overwhelmed with AI-generated garbage.

What's Happening

Content farms using AI to generate:

  • Fake reviews and testimonials
  • Spam comments on videos and articles
  • Misleading product recommendations
  • Misinformation and conspiracy theories
  • Engagement bait on social media

The Cracker Barrel incident: A logo redesign controversy consumed social media for a week. Fox News and CNN covered it. Design professionals debated it passionately.

Then we learned: It was mostly bots arguing with each other.

Dead Internet Theory: A significant percentage of internet activity now comes from bots, not humans. This isn't conspiracy theory—it's measurable reality.

The UX Implications

For researchers:

  • Can't trust that survey respondents are real humans
  • Social media feedback may be artificial
  • Engagement metrics increasingly meaningless
  • Need verification steps that weren't previously necessary

For users:

  • Can't distinguish real reviews from fake
  • Don't know if online interactions are with people or bots
  • Eroding trust in all online content
  • Increasing isolation and paranoia

For companies:

  • User feedback contaminated with bot-generated noise
  • A/B testing results skewed by bot behavior
  • Market research compromised
  • Brand reputation vulnerable to bot attacks

Where is the UX leadership calling for better moderation and verification? Mostly absent.

As UX consulting firms, we're implementing verification steps that add friction but are necessary to ensure we're designing for actual humans.
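Two of the cheapest such filters are a hidden honeypot field (real users never see it, so any value in it flags a bot) and a minimum plausible completion time. A sketch of both, with field names and the 15-second floor as our own assumptions:

```typescript
// Sketch of two cheap bot filters for survey responses: a hidden
// honeypot field humans never fill, and a minimum plausible completion
// time. Both are common techniques; the names and floor are ours.

interface SurveyResponse {
  honeypotValue: string;    // hidden form field; humans leave it empty
  completionTimeMs: number; // bots often submit near-instantly
  answers: Record<string, string>;
}

const MIN_PLAUSIBLE_MS = 15_000; // assumption: a human needs at least 15s

function looksHuman(r: SurveyResponse): boolean {
  if (r.honeypotValue.trim() !== "") return false; // only a bot fills a hidden field
  if (r.completionTimeMs < MIN_PLAUSIBLE_MS) return false; // too fast to be human
  return true;
}

function filterResponses(responses: SurveyResponse[]): SurveyResponse[] {
  return responses.filter(looksHuman);
}
```

Neither filter is foolproof against sophisticated bots, but together they strip out the bulk of low-effort noise before it contaminates your research data.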

The Sora Crisis: When Reality Becomes Indistinguishable from AI

OpenAI's Sora (video generation AI) represents a new level of crisis: we can no longer reliably distinguish real from fake.

The Bunnies on Trampolines

A video showed adorable bunnies bouncing on a trampoline. Cute, viral, shareable.

It was entirely AI-generated. And most viewers couldn't tell.

The problem: If you've never owned a bunny (we have), you don't know:

  • Bunnies don't hang out in groups like that
  • They don't behave that way
  • The physics are subtly wrong

But most people can't detect these cues.

The Politician Dancing

A video showed a politician wearing a shirt insulting another politician, dancing in a mall.

It was AI-generated. And many people believed it was real.

The Assassination Attempt

When there was an actual, real assassination attempt on a political figure (caught on real video), many people's first reaction was: "That must be AI-generated. It looks like a movie."

We've reached a crisis point: Real events are being dismissed as fake because they look too cinematic. Meanwhile, fake events are being believed because they look realistic enough.

The psychological impact:

  • Eroding trust in all visual evidence
  • Confusion about what's real
  • Denial as defense mechanism
  • Inability to verify truth

The UX question: How do we design systems that help users distinguish real from fake when the technology makes it impossible?

Nobody is working on this. We're too busy building more sophisticated AI generation tools.

The Black Mirror Reality: Humanity Eroding

Remember Black Mirror episodes where technology erodes human connection and empathy?

We're living it:

The Recording Instead of Helping

When something dramatic happens, people pull out phones to record instead of helping the person in distress.

The assassination example: People immediately started recording and sharing rather than processing the reality of what happened.

The shift: From "help the person" to "document for content."

The Performative Life

Social media has created a world where:

  • Everything is performed for an audience
  • Authenticity is suspect
  • Moments exist to be shared, not experienced
  • Validation comes from likes and engagement

The exhaustion: Gen Z is checking out. They're deleting apps. They're choosing real connections over digital ones.

The rebellion: "I'm off social media" is becoming a badge of honor among young people.

What Companies Should Do: The Validation-First Approach

If you're building products—AI-powered or otherwise—here's the process product design consultants advocate:

1. Start With Research, Always

Before building anything:

  • Observe users in natural contexts
  • Interview them about real problems (not solutions they want)
  • Map current workflows and pain points
  • Identify where existing solutions fail
  • Validate that problems are worth solving

2. Cheap Validation Before Expensive Development

Use paper prototypes:

  • Sketch concepts on paper
  • Test with 5-10 users
  • Learn what works
  • Iterate quickly
  • Cost: $50 in Starbucks gift cards
  • Time: A few days
  • Value: Prevents months of wasted development

Use Wizard of Oz testing:

  • Fake the feature with humans behind the scenes
  • See if users want it
  • Measure actual usage
  • Build only if validated

3. Build Small, Test Often

Start with a minimum viable product:

  • Core feature that solves the problem
  • Nothing extra
  • Test with real users
  • Measure actual outcomes (not engagement)

Iterate based on feedback:

  • What's working?
  • Where are users struggling?
  • What's missing?
  • What should be removed?

4. Hire Strategic UX Leadership

Don't just hire visual designers:

  • Need strategic thinkers who understand research
  • Need people who can say "no" to bad ideas
  • Need expertise in validation and testing
  • Need advocates for users in strategic discussions

Consider fractional arrangements:

  • Senior fractional design officers working part-time
  • UX consulting firms for specific projects
  • Flexible arrangements for varying budgets
  • Equity arrangements for startups that haven't yet raised funding

5. Measure Real Outcomes, Not Vanity Metrics

Wrong metrics:

  • Engagement time
  • Feature usage
  • Clicks and interactions
  • Number of AI prompts

Right metrics:

  • Did we solve the user's problem?
  • Is this faster/easier than before?
  • Would users recommend this?
  • Do they continue using it after novelty wears off?
  • Net Promoter Score (NPS)
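
Of the metrics above, NPS has a concrete formula worth pinning down: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6), on the standard 0-10 "would you recommend this?" survey. A minimal sketch of that calculation (the function name and example data are ours, for illustration only):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6, passives 7-8.
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # → 30
```

Note that passives (7-8) drag the score down simply by not counting as promoters, which is the point: lukewarm users don't grow your product.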

6. Be Willing to Kill Features

The hardest part:

  • Admitting something isn't working
  • Removing features people built
  • Pivoting after investment
  • Simplifying instead of adding

But necessary:

  • Complexity kills products
  • Every feature has costs
  • Focus creates value
  • Less is often more

Future Content: Methods, Mysteries, and More

We're planning to use the UX Murder Mystery podcast to demonstrate actual UX methods:

Coming episodes will cover:

  • Specific research techniques (we draw on 30-50+ of them)
  • When to use each method
  • How to conduct user interviews
  • Card sorting and tree testing
  • Usability testing best practices
  • A/B testing that actually works
  • Ethnographic observation
  • Diary studies and longitudinal research

The murder mystery format:

  • Treat each failed product as a crime scene
  • Interview witnesses (users, employees, experts)
  • Gather evidence (research, data, reviews)
  • Piece together what went wrong
  • Provide diagnosis and solutions

Send us your ideas:

  • What products should we investigate?
  • What UX disasters intrigue you?
  • What tools/platforms frustrate you?
  • Where do you see bad design?

Email us: questions@uxmurdermystery.com. Anonymous tips welcome.

Final Thoughts: Don't Be Clippy

The lesson from Clippy—and Rabbit AI, Humane AI Pin, and countless other failures—is simple:

Don't build solutions looking for problems.

Instead:

  • Research actual user problems
  • Validate that problems are worth solving
  • Design solutions that actually work
  • Test with real users
  • Iterate based on feedback
  • Measure real outcomes
  • Be willing to kill features that don't work

This requires:

  • UX leadership with real authority
  • Research budgets and timelines
  • Stakeholder willingness to hear "no"
  • Focus on outcomes over outputs
  • Humility to admit when things don't work

The companies succeeding:

  • Start with user problems
  • Validate before building
  • Iterate based on data
  • Focus on real value
  • Build trust through transparency

The companies failing:

  • Chase technology trends
  • Build before validating
  • Ignore user feedback
  • Optimize engagement over value
  • Destroy trust through false promises

Which will you be?

Need help avoiding Clippy-style disasters? As a UX design agency, we help companies validate product concepts before expensive development.

Whether you're building AI features, launching new products, or trying to fix existing ones, we bring research-driven strategy and decades of experience to help you build things users actually want.

Looking for a UX design agency that will validate your ideas honestly—even when it means saying "don't build this"? Let's talk about how strategic UX can prevent you from building the next Clippy.

This article is based on content from the UX MURDER MYSTERY podcast.

HOSTED BY: Brian J. Crowley & Eve Eden

EDITED BY: Kelsey Smith

INTRO ANIMATION & LOGO DESIGN: Brian J. Crowley

MUSIC BY: Nicolas Lee

A JOINT PRODUCTION OF EVE | User Experience Design Agency and CrowleyUX | Where Systems Meet Stories ©2025 Brian J. Crowley and Eve Eden

Email us at: questions@uxmurdermystery.com
