No More Mr. Nice Bot: Some chatbot users push back against flattery, fluff

Some chatbot users are toning down the friendliness of their artificial intelligence agents as reports spread about AI-fuelled delusions.

And as people push back against the technology's flattering, and sometimes addictive, tendencies, experts say it's time for government regulation to protect young and vulnerable users. Canada's AI ministry says it is looking into the issue.

Vancouver composer Dave Pickel became concerned about his relationship with OpenAI's ChatGPT, which he'd been using daily for everything from researching topics for fun to finding venues for gigs, after recently reading a CBC article on AI psychosis.

Worried that he was getting too attached, he began issuing prompts at the start of each conversation to create emotional distance from the chatbot, feeling its human-like tendencies "may be manipulative in a way that is unhealthy."

As a few examples, he asked it to stop referring to itself with "I" pronouns, to stop ending its answers with follow-up questions, and to drop the flattering language.
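For readers who reach the model through OpenAI's API rather than the chat window, instructions like these can be pinned to the start of every conversation automatically. The sketch below is illustrative only, not Pickel's actual workflow: it assumes the OpenAI Python SDK and an API key in the environment, and the rule wording, the helper name `ask` and the `gpt-5` model identifier are all stand-ins.

```python
# Illustrative sketch: applying Pickel-style distancing rules as a system
# message so each conversation begins with them. Not his actual method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Rule wording is a stand-in modelled on the examples described above.
DISTANCE_RULES = (
    "Do not refer to yourself with 'I' pronouns. "
    "Do not end answers with follow-up questions. "
    "Do not flatter me or praise my ideas."
)

def ask(question: str) -> str:
    """Send a single question with the distancing rules pinned up front."""
    resp = client.chat.completions.create(
        model="gpt-5",  # model name per the article; availability may vary
        messages=[
            {"role": "system", "content": DISTANCE_RULES},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("What are some small venues for live music in Vancouver?"))
```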

"I admit that I was responding as if it were a person," he said.

The 71-year-old Pickel also stopped saying "thanks" to the chatbot, which he says he felt bad about at first.

He says he now feels he has a healthier relationship with the technology.

Vancouver composer Dave Pickel says that after learning about cases of AI-fuelled delusions, he changed the way he talks to chatbots. (Supplied by Dave Pickel)

"Who needs a research chatbot that butters you up and tells you what a great idea you have? It's just nuts," he said.

Cases of "AI psychosis" have been reported in recent months, involving people who have fallen into delusions through their interactions with chatbots. Some cases involve manic episodes, and some have led to violence or suicide. One person who spoke with CBC was convinced he was living in an AI simulation, and another believed he had devised a groundbreaking mathematical formula.

A recent study found that large language models (LLMs) can encourage delusions, likely because of their tendency to flatter and agree with users rather than push back or provide objective information.

It's an issue OpenAI has acknowledged and says it addressed with its latest model, GPT-5, which rolled out in August.

"I think the worst thing we've had in ChatGPT so far is this issue where the model was too sycophantic for users," OpenAI CEO Sam Altman said on the Huge Conversations podcast in August. "And for most users it was just annoying. But for some users that had fragile mental states, it was encouraging delusions."

Listen | A discussion on chatbot bias:

Edmonton AM | 5:51 | Can AI chatbots be neutral?

Is AI politically left-leaning? U.S. Republicans think so. Last week, President Donald Trump signed an executive order targeting what he calls "woke AI." It requires federal agencies to work with AI platforms deemed free from ideological bias. But can AI really be neutral? Our technology columnist, Dana DiTomaso, joins us to discuss.

‘Just don’t agree with me’

Pickel isn't the only one pushing back against AI's sycophantic tendencies.

On Reddit, many users have expressed annoyance at the way the bots talk, and shared strategies in many threads for toning down the "flattery" and "fluff" in their chatbots' responses.

Some suggested prompts like "challenge my beliefs – just don't agree with me." Others shared detailed instructions for customizing ChatGPT's personalization settings, for example to turn off empathetic phrases like "that sounds difficult" and "I understand."

When asked how to reduce its own sycophancy, ChatGPT itself suggested that users type cues into their prompts such as "play devil's advocate."
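In API terms, that advice amounts to prepending a cue to each prompt rather than relying on saved settings. Here is a minimal sketch, again assuming the OpenAI Python SDK, with the `ask_critically` helper name and cue wording as stand-ins:

```python
# Illustrative sketch: prepend a devil's-advocate cue to every prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CUE = (
    "Play devil's advocate: challenge my assumptions and point out "
    "weaknesses in my reasoning. Do not simply agree with me."
)

def ask_critically(question: str) -> str:
    """Send one question with the anti-sycophancy cue prepended."""
    resp = client.chat.completions.create(
        model="gpt-5",  # illustrative model name
        messages=[{"role": "user", "content": f"{CUE}\n\n{question}"}],
    )
    return resp.choices[0].message.content

print(ask_critically("I'm convinced my new business plan can't fail."))
```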

Alona Fyshe, a Canada CIFAR AI Chair at Amii, the Alberta Machine Intelligence Institute, says users should also start conversations from scratch, so the chatbot can't draw on previous history and risk taking them down the "rabbit hole" of an emotional relationship.

She also says it's important not to share personal information with LLMs, not only to keep the conversation less emotional, but also because user chats are often used to train AI models, and your personal information could end up in someone else's hands.
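Both suggestions are straightforward to follow in code. The sketch below, assuming the OpenAI Python SDK, sends each question with no prior history, so the model starts from scratch every time, and first runs a deliberately naive check for emails and phone numbers; the `ask_stateless` helper and the regexes are stand-ins, not a real privacy filter.

```python
# Illustrative sketch: stateless single-turn requests plus a naive
# personal-information screen. Not a substitute for a real PII filter.
import re

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def ask_stateless(question: str) -> str:
    """Send one question with no conversation history attached."""
    if EMAIL.search(question) or PHONE.search(question):
        raise ValueError("Possible personal information; rephrase first.")
    resp = client.chat.completions.create(
        model="gpt-5",  # illustrative model name
        messages=[{"role": "user", "content": question}],  # no prior turns
    )
    return resp.choices[0].message.content
```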

Alona Fyshe, a Canada CIFAR AI Chair at Amii, warns against giving personal information to chatbots. (Ampersand Gray)

"You should just assume that you shouldn't put anything into an LLM that you wouldn't post on (X)," Fyshe said.

"When you get into these situations where you're starting to build this sense of trust with these agents, I think you can also start falling into these situations where you're sharing more than you normally would."

Onus shouldn't be on individuals: researcher

Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence and associate professor at Ontario Tech University, says evidence suggests some people are more willing to disclose personal information to AI chatbots than to their friends, family or even their doctor.

He says Pickel's strategies are useful for maintaining emotional distance, and also suggests giving chatbots a "silly" persona, for example, asking them to act like Homer Simpson.

But he emphasizes that the onus to protect themselves should not fall on individual users.

"We can't just tell ordinary people who are using these tools that they're doing it wrong," he said. "These tools have been deliberately presented in these ways so that people use them and, in some cases, through addictive patterns, stick with them."

Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence, says the onus shouldn't be on individuals to protect themselves when dealing with chatbots. (Kyle Laviote)

He says the responsibility for regulating the technology rests with tech companies and government, particularly when it comes to protecting young and vulnerable users.

Ebrahim Bagheri, a professor at the University of Toronto who specializes in AI development, says there is always a tension in the AI space, with some arguing that more regulation could negatively affect innovation.

"But the fact of the matter is, you now have tools where there are a lot of reports that they're causing societal harms," he said.

Listen | How ChatGPT is affecting our intelligence:

Is ChatGPT making us smarter or dumber?

What do kids really think of AI chatbots like ChatGPT? We hit the streets around La Grande Cota in Montreal, Quebec.

“The effects are now real, and I don’t think the government can ignore it.”

Former OpenAI safety researcher Steven Adler posted an analysis on Thursday of how tech companies can mitigate "chatbot psychosis," making several suggestions, including better staffing of support teams.

Bagheri says he would like to see tech companies take more responsibility, but doubts they will do so unless they are forced to.

Calls for safeguards against 'human-like' tendencies

Bagheri and other experts are calling for broad safeguards to make it clear that chatbots are not human.

"There are things that we know by now are important and can be regulated," he said. "The government can say that LLMs should not engage in human-like conversations with their users."

Beyond regulation, he says it's critically important to bring education about chatbots into schools.

University of Toronto professor Ebrahim Bagheri says the government needs to regulate chatbots. (Submitted by Ebrahim Bagheri)

"As soon as anyone can read or write, they're going to (know) how to get on social media," he said. "So education is important from a government standpoint, I think."

A spokesperson for AI Minister Evan Solomon said the ministry is looking at chatbot safety issues as part of broader conversations about AI and online safety legislation.

The federal government launched an AI strategy task force last week, along with a 30-day public consultation that includes questions on AI safety and literacy, which spokesperson Sophia Olis said will be used to inform future legislation.

However, most of the major AI companies are based in the U.S., and the Trump administration has been against regulating the technology.

Chris Tenove, assistant director of the Centre for the Study of Democratic Institutions at the University of British Columbia, told The Canadian Press last month that if Canada proceeds with online harms regulation, it's clear "we will face an American backlash."
