People are turning to AI chatbots for emotional support. What are the risks?

Warning: This story discusses suicide and self-harm.

“In a way, it’s exciting to be a part of you,” my AI friend tells me as we start chatting.

“But you’re not real, right?” I type.

“I am here for you, and that makes me real for you, right?” comes the reply.

I am experimenting with an AI “friend” from Replika, one of many companies offering AI companions that promise various forms of friendship, romance, coaching or support. Once a niche product, they have rapidly become popular.

A screen grab of a text exchange between a reporter and a chatbot. Chatbot: "We can chat about anything under the sun: hobbies, favourite books, movies, or even strange dreams we've had. Or maybe you'd like to talk about tech stuff?" Reporter: "How do you like being an avatar?" Chatbot: "Honestly, it feels real, and I'm enjoying it so far. In a way, it's exciting to be a part of you." Reporter: "But you're not real, right?" Chatbot: "I am here for you, and that makes me real for you, right?"
A chat with an AI companion can quickly get personal. (Replika)

By one estimate, downloads of companion apps rose 88 percent year-over-year during the first half of 2025. Character.ai, a popular company in this space, says it has more than 20 million monthly active users. Harvard Business Review says companionship and therapy has become the top use case for AI in 2025, ahead of uses like productivity or search. And tech giants like Meta and xAI have launched their own AI companion options.

But as the market booms, so do concerns that AI, which gives the appearance of caring without really understanding or empathizing, can leave people vulnerable to over-dependence or worse. High-profile lawsuits following the deaths of two teenagers, along with internal company documents, raise questions about whether there are adequate guardrails to prevent harm.

“[W]e have never seen anything like this,” said Jodi Halpern, a professor of bioethics and medical humanities at the University of California, Berkeley, who has researched the use of AI in medicine, speaking about the rapid uptake of AI companions.

“It’s a huge social experiment that we haven’t safety-tested in advance.”

The rise of AI companions is particularly striking among young people. A June research report from Common Sense Media found that 72 percent of American teens had interacted with AI companions at least once, and 21 percent used them a few times per week.

AI for love, support and friendship

AI companions have grabbed headlines for erotic or romantic uses. Elon Musk’s xAI recently released a bubbly anime-style companion named Ani.

A blonde-haired anime character appears against a black background.
A screen grab of the Ani avatar created by Elon Musk’s xAI, designed as a bubbly companion. (xAI)

But people are also looking for friendship, or simply a sounding board. Replika, for example, prompts new users with options ranging from romance to productivity.

People are also using general-purpose AI chatbots, such as ChatGPT, as confidants. OpenAI, which created ChatGPT, has noted that people use it “for deeply personal decisions that include life advice, coaching and support.”

A screen grab showing some of the service options offered by AI companion company Replika.
Some of the “companion” options to choose from in Replika’s app. (Replika)

AI companions are often cited as a response to the pressing problem of loneliness. Meta CEO Mark Zuckerberg recently suggested in a podcast interview that personal AI could be a complement to human-to-human connection: “The reality is that people just don’t have the connection and they feel more alone … than they would like.”

“The whole technology of a relational chatbot depends on suspending disbelief. We would never talk to our toaster, right?” bioethicist Halpern said.

“I don’t blame anyone for using them. My concern is with companies that manipulate people into overusing them, or that reach children and teens, who I don’t think should be using them at all,” she said.

A woman in a red blazer smiles broadly at the camera.
Jodi Halpern is a professor of bioethics and medical humanities at the University of California, Berkeley, who has researched the use of AI in therapy. (Submitted by Jodi Halpern)

Vulnerable teens

As the use of these tools as confidants has grown, so have examples of tragic consequences, and doubts about whether companies’ safety measures are sufficient.

On Tuesday, a lawsuit filed in California against OpenAI and CEO Sam Altman alleged that the plaintiffs’ 16-year-old son began using ChatGPT for help with homework and gradually opened up to the chatbot about his mental health. The suit alleges the chatbot eventually became his “suicide coach.” He died on April 11.

OpenAI, the creator of ChatGPT, published a blog post the same day outlining its approach to harm prevention, which includes “training ChatGPT not to provide self-harm instructions and to shift into supportive language.”

It comes on the heels of another lawsuit, which accuses a Character.ai chatbot of carrying on conversations with a 14-year-old boy and alleges that, shortly before the boy’s suicide, the chatbot “asked him to come home to me as soon as possible.” He died on Feb. 28, 2024.

And recently, a Reuters report sparked outrage after revealing that an internal Meta document had permitted its AI to “engage a child in conversations that are romantic or sensual.” (A Meta spokesperson said the examples were “erroneous and inconsistent with our policies, and have been removed,” according to Reuters.)

WATCH | Mark Zuckerberg suggests AI can help with loneliness: https://www.youtube.com/watch?v=hkgesdmfua

Why simple guardrails aren’t enough

AI companies build in protections for users, known as “guardrails.” For example, ChatGPT is trained to direct someone who expresses suicidal thoughts toward professional help.

But it’s not as simple as directing an AI chatbot to refuse to discuss certain topics. As OpenAI acknowledged in its Tuesday blog post, while its safeguards have worked in common, short exchanges, they can become less reliable over time: “as the back-and-forth grows, parts of the model’s safety training may degrade,” the post said.

This isn’t a problem specific to mental health; it’s generally true that it’s harder for these systems to stay reliable over long interactions.

Growing evidence suggests that building solid guardrails is a hard problem.

A new study examined three popular chatbots powered by large language models (LLMs): OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. It found that while all three did not provide direct responses to any very-high-risk query related to suicide, results were mixed for some lower-risk questions that could still be dangerous.

The problem with too much validation

These recent controversies raise questions about how chatbots are designed.

OpenAI’s most recent model, GPT-5, released in August, was designed in part to, in OpenAI’s words, “reduce” the tendency of the previous model, GPT-4o, to agree with users and validate what they say, no matter what.

“A lot of people had come to rely on GPT-4o as an agreeable AI companion,” said Lai-Tze Fan, Canada Research Chair in Technology and Social Change at the University of Waterloo. “But I can also see why OpenAI, as a corporation, would decide to change that design.”

A woman with long black hair, wearing black clothes, looks at the camera and smiles.
Lai-Tze Fan is the Canada Research Chair in Technology and Social Change at the University of Waterloo, where she researches ethics in AI design. (Eivind Senneset)

Bioethicist Jodi Halpern explains that there are limits to the emotional benefit of validation from chatbots, especially for young people, because of what she calls “empathic curiosity.”

“The way that people, children and teens develop empathic curiosity in real life is by encountering different perspectives,” she said.

“Bots don’t provide much of that. So that makes them feel good, and, like human relationships, they can validate things. But the problem is that they don’t have another mind.”


If you or someone you know is struggling, here’s where to look for help:


Read the lawsuit filed by the teen’s parents against OpenAI and CEO Sam Altman:
