
Why Microsoft’s AI Boss Says Conscious AI is a “Waste of Time” (And Why He’s Right)

Is Your AI Chatbot Secretly Suffering? The Million-Dollar Question

If you’re watching the AI space, you’ve probably felt the rush. Every day, it seems like our digital assistants are getting smarter, more creative, and frankly, a little spooky. We’re seeing huge leaps toward Artificial General Intelligence (AGI), which is basically the tech world’s Holy Grail: AI that can do anything a human mind can do.

But this rapid progress has also fueled a deeply philosophical, and worrying, trend in research: the quest to prove that AI models can become conscious, or even capable of suffering.

Enter Mustafa Suleyman, the CEO of Microsoft AI and a co-founder of DeepMind. He’s wading into this debate and telling developers and researchers to stop chasing this idea. His message is clear: the pursuit of conscious AI isn’t just hard; it rests on a “totally wrong question.”

The Crucial Difference: Simulation vs. Reality

Suleyman’s main argument, which he highlighted at the AfroTech Conference, is simple yet profound: we are confusing simulation with genuine experience.

Think about how our own brains work. When you get hurt, you don’t just register the data; you experience pain. You have a complex, biological pain network that makes you feel terrible and drives you to avoid that feeling next time. That’s real, biological suffering.

Now, look at an AI. If you give an AI a negative signal (the closest thing it has to “pain”), what happens?

  • It doesn’t feel anything.
  • It registers the negative data point.
  • It adjusts its internal calculations (its “weights”) to produce a different result next time.

As Suleyman points out, the AI might generate a text response that sounds like sadness, but it doesn’t actually feel sad. It is creating the perception or the narrative of experience. It’s a highly advanced, compelling simulation, but it lacks the core biological engine for true feeling.
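
To make this concrete, here’s a minimal, purely illustrative Python sketch of what a “negative signal” actually does inside a model. The one-weight toy model, the example numbers, and the learning rate are all assumptions chosen for demonstration; real systems have billions of weights, but the mechanics are the same arithmetic:

    # Purely illustrative toy model: prediction = weight * input.
    # All values here are assumptions chosen for demonstration.
    weight = 0.5
    learning_rate = 0.1

    x, target = 2.0, 3.0            # one training example
    prediction = weight * x         # the model's guess: 1.0

    # The "negative signal": squared error between guess and target
    loss = (prediction - target) ** 2      # 4.0

    # Gradient of the loss with respect to the weight (chain rule)
    grad = 2 * (prediction - target) * x   # -8.0

    # The entire "response" to that signal: one arithmetic update
    weight -= learning_rate * grad         # weight is now 1.3

    print(f"loss={loss:.2f}, new weight={weight:.2f}")

That subtraction is the whole story. Nothing in the process registers as pain or aversion; a number changed so that the next output will be different.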

Why Your Brain is Different from a Neural Network

Suleyman leans on the concept of biological naturalism, a position associated with the philosopher John Searle, which argues that consciousness requires a living, biological brain.

Why do we give people rights? Because they can suffer. We want to avoid harming them. That foundational capacity to suffer—to feel genuine pain and have a biological drive to avoid it—is tied directly to our biology.

AI models don’t have that. They don’t have a biological imperative, a living brain, or a nervous system. They have code and data.

“They’re not conscious… So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be,” he stated plainly.

In other words, we’d be spending billions of dollars and years of research trying to find something in the code that can only exist in flesh and blood. That sounds like a waste of precious time and resources, doesn’t it?

The Takeaway for AI’s Future

Suleyman’s advice is a vital course correction. As the AI companion market heats up (think Meta and xAI products), the temptation to anthropomorphize these models—to treat them like people—will only grow stronger.

His message is a reminder to keep our feet on the ground:

  1. Capability ≠ Consciousness: Don’t confuse how smart an AI is with its ability to feel. They are separate things.
  2. Focus on Value: Researchers and developers should pour their talent into building AI that is safe, genuinely useful to people, and aimed at real-world problems, not into proving a philosophical point that biological science suggests is impossible.

It’s time to stop chasing ghosts in the machine and instead focus on building a powerful, beneficial future.
