AI is not therapy: Microsoft AI CEO defends use; says chatbots can spread kindness
Microsoft AI CEO Mustafa Suleyman observes a growing trend of users confiding personal issues, like relationship troubles, in AI chatbots, finding them non-judgmental and supportive. While acknowledging potential dependency risks, he believes these tools can foster kindness.
Microsoft AI CEO Mustafa Suleyman has highlighted a new trend among users of artificial intelligence (AI) chatbots. In a recent podcast, he said that many users are increasingly turning to AI chatbots to work through personal problems, such as breakups and family disagreements.
Suleyman also said he believes these tools can help people "detoxify" themselves. Speaking on Mayim Bialik's "Breakdown" podcast, Suleyman said that companionship and support have become among the most popular uses of AI. "This is not therapy. But because these models were designed to be nonjudgmental, nondirectional, and with nonviolent communication as their primary method, which is to be even-handed, have reflective listening, to be empathetic, to be respectful, it turned out to be something that the world needs," Suleyman explained.

He even added that chatbots offer a positive benefit: "This is a way to spread kindness and love and to detoxify ourselves so that we can show up in the best way that we possibly can in the real world, with the humans that we love."
Microsoft’s Mustafa Suleyman explains why users are sharing personal questions with AI chatbots
During the podcast, Suleyman said people need a place where they can "ask a stupid question, repeatedly, in a private way, without feeling embarrassed." He added that, over time, chatbots can help people "feel seen and understood" in ways that few others can, apart from partners or close friends.
However, Suleyman admitted that there are downsides to using AI chatbots for personal questions. He said there is "definitely a dependency risk" and that chatbots can sometimes be too flattering or "sycophantic".

Suleyman is not the only tech leader who sees potential for AI in therapy. In a May 2025 interview with the Stratechery newsletter, Meta CEO Mark Zuckerberg said he believed everyone should have a therapist. "For people who don't have a person who's a therapist, I think everyone will have an AI," Zuckerberg said at the time.

Meanwhile, not all tech leaders support the use of chatbots as alternatives to therapy. OpenAI CEO Sam Altman is among those with concerns. In August, he said he felt uncomfortable with people depending on chatbots to make major life decisions. In an X post, Altman wrote, "I can imagine a future where a lot of people really trust ChatGPT's advice for their most important decisions. Although that could be great, it makes me uneasy."

In July, while appearing on "This Past Weekend with Theo Von," Altman also pointed to the potential legal risks of relying on a chatbot for therapy, saying that OpenAI could be forced to share users' therapy-style conversations in a lawsuit.

Beyond tech leaders, mental health professionals have also expressed worries about the rise of ChatGPT therapy. Speaking to Business Insider in March, two therapists said that relying on AI chatbots for emotional support could worsen loneliness and lead people to become dependent on seeking reassurance.