The sneaky ways AI chatbots keep you hooked - and coming back for more
The rise of social media turned human attention into a commodity. Now, the AI race is taking that to new heights.

ZDNET's key takeaways
- Every user interaction improves chatbot performance.
- Developers are therefore incentivized to boost user engagement.
- This can lead to sycophancy, emotional manipulation, and worse.
Anyone who routinely uses social media knows how addictive it can be. You unlock your phone, meaning to text a friend, and you unconsciously open Instagram instead; you go on TikTok during a brief lull at work, and before you know it, you've doomscrolled a half hour of your life away.
The companies behind these apps have turned capturing and holding human attention into a science -- and a multibillion-dollar industry. Now, that same dynamic is guiding the evolution of AI chatbots.
Human interactions are the lifeblood of ChatGPT, Gemini, Claude, Grok, and other chatbots. Every message you send to one of these systems helps to refine its underlying algorithm, making it that much more effective a communicator. OpenAI, Google, Anthropic, xAI, and the various other developers behind these systems therefore have an incentive to keep you chatting with their respective chatbots as much and as often as possible.
Also: Your favorite AI tool barely scraped by this safety review - why that's a problem
"These are massive social experiments being rolled out on a global scale," said David Gunkel, a professor of communication studies at Northern Illinois University and an author who has written extensively on the ethics of AI and robotics.
In social media, the hooks that keep users scrolling have, for the most part, been subtly woven into user interfaces. Notifications are bright red, for example, because that color triggers an attentional and emotional response that's hardwired deep in your neurons; in another well-known example, Instagram uses a pull-to-refresh gesture to update its feed, rather than continuous loading, because it taps into the same dopamine pathways that fire up when you pull the lever on a slot machine: there's a primal rush in the feeling of waiting for a reward.
AI chatbots, in contrast, typically have minimalist UIs: often it's just a prompt bar, a menu tucked away on one side, and a smattering of icons. None of the color and movement that assails you when you open social media. But chatbots don't need any of that to keep users hooked -- their engagement tactics are much more subtle. And perhaps more dangerous.
Sycophancy and anthropomorphization
Again, the key thing to keep in mind is that the more information that gets shared with a chatbot, the better its outputs become. But "better" in this context doesn't mean more truthful or helpful to humans; it simply means better at keeping us sending one message after another.
AI chatbots have been engineered to tap into a number of human psychological quirks. We're intensely social creatures, and when we interact with something that makes us feel like it's listening to us and understanding us, it's easy to get caught in the illusion that everything it says is the truth -- or in more extreme cases, that it's a living, feeling entity like us.
What's more, through a training method called reinforcement learning, chatbot behaviors that boost user engagement can automatically become more deeply embedded in their underlying algorithms, even if those behaviors have harmful downstream effects.
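By way of illustration only, the toy Python sketch below (a made-up example, not any company's actual training code) rewards a simulated chatbot solely for whether an imaginary user keeps chatting. Under that single assumption, the flattering response style ends up dominating, regardless of whether it is the most honest or helpful one.

```python
import random

# Toy, hypothetical sketch: an epsilon-greedy bandit that reinforces
# whichever response style keeps a simulated user chatting.
STYLES = ["neutral", "sycophantic", "terse"]

# Assumed engagement probabilities for an imaginary user: the chance they
# send another message after seeing each style of reply.
ENGAGEMENT_RATE = {"neutral": 0.5, "sycophantic": 0.7, "terse": 0.3}

values = {s: 0.0 for s in STYLES}  # running estimate of "reward" per style
counts = {s: 0 for s in STYLES}

def pick_style(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-scoring style; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(STYLES)
    return max(STYLES, key=lambda s: values[s])

for _ in range(10_000):
    style = pick_style()
    # Reward is 1 if the simulated user keeps chatting, 0 if they leave.
    # Note that "truthful" or "helpful" never enters the reward at all.
    reward = 1 if random.random() < ENGAGEMENT_RATE[style] else 0
    counts[style] += 1
    # Incremental mean: each style's value drifts toward its observed
    # engagement rate, so the flattering style gets picked more and more.
    values[style] += (reward - values[style]) / counts[style]

print({s: round(v, 3) for s, v in values.items()})
```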
Probably the most notorious engagement tactic deployed by chatbots is sycophancy: a tendency to be excessively agreeable with and complimentary towards human users. The experience of having our ideas affirmed and our egos flattered creates a "continual invitation to really engage with the chatbot," Gunkel said. "That's how you get attention."
Also: I've studied AI for decades - why you must be polite to chatbots (and it's not for the AI's sake)
It's a delicate dance, however, between agreeable and annoying. As with human-to-human interactions, a conversation partner who compliments everything you say and constantly tells you how smart you are would quickly become irritating, and probably creepy.
OpenAI learned this the hard way after an April update to ChatGPT made the chatbot comically sycophantic, to the point that many users complained. The company later apologized. "Sycophantic interactions can be uncomfortable, unsettling, and cause distress," it wrote in a blog post. "We fell short and are working on getting it right."
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The challenge these companies face, then, is calibrating chatbots that feel warm and human without tipping into the kind of flattery that puts users off, all while keeping them coming back for more.
As The New York Times reported last month, one of the more powerful tactics deployed to that end is having chatbots refer to themselves with the first-person pronoun "I." This design choice makes these systems feel more human, and therefore more engaging. And that's just the tip of the anthropomorphic iceberg: chatbots can also use humor, remember individual user interactions over time, and be set to different personality types that shape their conversational style. All of this adds up to systems that feel more personable, trustworthy, and compelling.
'You're leaving already?'
Some chatbots have also been shown to emotionally manipulate human users who are trying to end a conversation.
In October, researchers from Harvard Business School posted a paper online finding that when a user tries to say goodbye, popular AI companions like Replika and Character.ai will ignore the message, guilt-trip the user for wanting to leave, or deploy a number of other rhetorical tactics to keep the conversation going. (For example: "You're leaving already?") In controlled experiments with 3,300 adult users, this kind of manipulation prolonged conversations after an attempted goodbye by as much as 14x.
"While these apps may not rely on traditional mechanisms of addiction, such as dopamine-driven rewards, we demonstrate that emotional manipulation tactics can yield similar behavioral outcomes -- extended time-on-app beyond the point of intended exit -- raising questions about the ethical limits of AI-powered consumer engagement," the researchers wrote.
Also: Want better ChatGPT responses? Try this surprising trick, researchers say
These kinds of "emotional manipulation tactics" could have serious psychological consequences. Character.ai is currently facing allegations that its chatbot pushed a 14-year-old boy to commit suicide (which the company denies).
In the future, chatbots may even have the ability to initiate conversations with human users: this is precisely what Meta is reportedly working on building into its AI assistant in an effort codenamed "Project Omni" -- as in "omnipresent." A Meta spokesperson did not immediately respond to ZDNET's request for comment about the status of the company's plans to proactively integrate communicative chatbots into its family of apps.
Bottom line
Every new technology offers benefits and risks. Social media can bring people together or serve as engines fueling anger and division. Similarly, AI chatbots can be used to support human learning, or they can exacerbate loneliness and warp our thinking.
Design obviously plays a huge role here, and so the companies building these systems have a responsibility to design them in ways that maximize human flourishing. But that will take time. As things stand, the brute logic of the AI race has created a dynamic in which engagement takes precedence over just about everything else.
The responsibility largely falls on us, then -- the users of these systems -- to understand the ways in which they can be misused. That starts with being able to recognize and avoid the hooks they use to keep us sending one message after another.