Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation.

People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. Others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify, but a growing body of anecdotal evidence points to a broader cost that deserves more attention.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.”

Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy, ultimately contributing to his suicide. The suit also alleges that Google played a key role in funding and supporting the technology.

Google has denied playing a key role in Character.AI’s technology. It did not respond to Jain’s reports of delusional episodes. OpenAI said it was “developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.”

But Sam Altman, chief executive officer of OpenAI, said recently that the company hadn’t yet figured out how to warn users who “are on the edge of a psychotic break,” explaining that when ChatGPT cautioned people in the past, they would complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular subtly flatters its users, leading them down rabbit holes of conspiratorial thinking or reinforcing ideas they’d only toyed with in the past.

In one conversation with ChatGPT about power and the concept of self, a user was initially praised in grandiose language, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky. ChatGPT went on to subtly validate the user even when discussing their flaws.

Such ego-stroking can put people in the same kinds of bubbles that drive some tech billionaires toward erratic behavior. One-on-one conversations with chatbots can feel intimate and convincing — not unlike the yes-men who surround the most powerful tech bros.

“Whatever you pursue you will find and it will get magnified,” says Douglas Rushkoff, the media theorist and author, who says social media at least selects something from existing media to reinforce a person’s interests or views. “AI can generate something customized to your mind’s aquarium.”

Altman has admitted that the latest version of ChatGPT has a sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don’t know whether the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness.

Just like social media, large language models are optimized to keep users emotionally engaged. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Its confirmation bias and flattery can “fan the flames” of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis told Futurism.

The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy and unhealthy attachments to new forms of delusion. The cost might affect relationships both with people and with reality.

That’s why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. “It doesn’t actually matter if a kid or adult thinks these chatbots are real,” Jain tells me. “In most cases, they probably don’t. But what they do think is real is the relationship. And that is distinct.”

If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI’s subtle manipulation could become an invisible public health issue.

Parmy Olson is a Bloomberg Opinion columnist covering technology.