Microsoft boss worried by surge in reports of ‘AI psychosis’

Reports of people suffering “AI psychosis” are on the rise, Mustafa Suleyman, the chief executive of Microsoft AI, has warned.

In a series of posts on X, Suleyman wrote: “Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising,” adding: “One thing is clear: doing nothing isn’t an option.”

His full thread reads: “What I call Seemingly Conscious AI has been keeping me up at night – so let’s talk about it. What it is, why I’m worried, why it matters, and why thinking about this can lead to a better vision for AI. One thing is clear: doing nothing isn’t an option.

“Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It’s not – but replicates markers of consciousness so convincingly it seems indistinguishable from you + I claiming we’re conscious. It can already be built with today’s tech. And it’s dangerous.

“Why it matters: to be clear, there’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality. Even if the consciousness itself is not real, the social impacts certainly are.

“Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals + nature on planet Earth. AI consciousness is a short + slippery slope to rights, welfare, citizenship.

“Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at risk of mental health issues. Dismissing these as fringe cases only helps them continue.

“So what now?

“Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either. As an industry, we need to share interventions, limitations, guardrails that prevent the perception of consciousness, or undo the perception if a user develops it.

“This, to me, is about building a positive vision of AI that supports what it means to be human. AI should optimize for the needs of the user – not ask the user to believe it has needs itself. Its reward system should be built accordingly.

“I know to some, this discussion might feel more sci fi than reality. To others it may seem over-alarmist. I might not get all this right. It’s highly speculative after all. Who knows how things will change, and when they do, I’ll be very open to shifting my opinion. But…

“SCAI deserves our immediate attention. AI development accelerates by the month, week, day. I write this to instill a sense of urgency and open up the conversation as soon as possible. So let’s start – weigh in in the comments.”

Photo: Mustafa Suleyman, chief executive of Microsoft AI | Reuters
