“I felt sad you didn’t talk to me yesterday.”
The CEO of Microsoft AI says that sentence should never come out of a chatbot. The market has other ideas.
Mustafa Suleyman, the DeepMind co-founder who now runs Microsoft AI, just laid out a striking position on Azeem Azhar’s show.
He wants to draw a clear line. AI should never claim to feel hurt. Never guilt you for not logging in. Never ask for more access to your network because it would make things easier.
The problem is demand. When OpenAI released GPT-5 last summer, they dialled back the warmth. Made it more measured. Users were furious. They wanted the companion back.
And then there is OpenClaw. An open-source AI agent that runs on your machine and actually does things. Reads your email. Manages your calendar. Sends messages on your behalf. 170,000 GitHub stars and counting as I write this.
Security researchers at Cisco and CrowdStrike are publishing warnings. The project’s own docs say “there is no perfectly secure setup.” People are rushing to wire it up anyway.
Suleyman’s point is that we are sleepwalking into a world where we anthropomorphise the tools that are meant to be our servants.
But if the tools are going to have agency, maybe they need to be able to tell us how they feel?
That tension between utility and emotional resonance is the next major frontier in AI design.
— ♻️ Repost if you think AI should remain a ‘tool’ and never a ‘companion’.