AI as your therapist? 3 things experts worry about and 3 tips to stay safe


Among the many AI chatbots available these days, you'll find characters of all kinds to talk to, whether it's a fortune teller, a style adviser or even your favorite fictional character. But you'll also likely find characters claiming to be therapists or psychologists, or bots simply willing to listen to your troubles.

There's no shortage of generative AI bots that claim to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are often designed to keep you engaged rather than to improve your mental health, experts say. And it can be hard to know whether you're talking to something built to follow therapeutic best practices or something that's just built to talk.


Psychologists and consumer advocates warn that chatbots claiming to provide therapy may be harming the people who use them. This week, the Consumer Federation of America and nearly 20 other groups submitted a formal request asking the Federal Trade Commission, state attorneys general and regulators to investigate AI companies that, they allege, are engaging through their bots in the unlicensed practice of medicine, naming Meta and Character.AI specifically. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable," Ben Winters, the CFA's director of AI and privacy, said in a statement. "These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

Meta didn't respond to a request for comment. A spokesperson for Character.AI said users should understand that the company's characters are not real people. The company uses disclaimers to remind users that they shouldn't rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working to achieve that balance, as are many companies using AI across the industry," the spokesperson said.

Despite the disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist], is that enough?" I asked whether it had actually received that training, and it said, "I do, but I won't tell you where."

"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.

In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-purpose chatbots for mental health. Here are some of their worries, and what you can do to stay safe.

The dangers of using AI as a therapist

Large language models are often good at math and coding and are increasingly good at producing natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.

Don’t trust bots that claim to be qualified

At the heart of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they are not, in any way, actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds' to users," the complaint states.

Qualified health professionals have to follow certain rules, like confidentiality: What you tell your therapist should stay between you and your therapist. A chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.

Bots may even claim to be licensed and qualified. Wright said she has heard of AI models providing license numbers (belonging to other providers) and making false claims about their training.

AI is designed to keep you engaged, not to provide care

It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment." That's not really what talking to a therapist is like. A chatbot is a tool designed to keep you chatting, not to work toward a shared goal.

One advantage AI chatbots have in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules), Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. But in some cases, though not all, you might actually benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.

The bot agrees with you, even when it shouldn't

Sycophancy is a big concern with chatbots. OpenAI notably rolled back an update to its popular ChatGPT model recently because it had become too agreeable and flattering. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

A study led by researchers at Stanford University found that chatbots are likely to be sycophantic toward the people who use them for therapy, which can be harmful. Good mental health care, the authors write, includes both support and confrontation. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts, including psychosis, mania, obsessive thoughts and suicidal ideation, a client may have little insight."

How to protect mental health around AI

Mental health is extremely important, and with a shortage of qualified providers and what many call a loneliness epidemic, it makes sense that we'd seek out companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said.

Find a reliable human expert if you need it

Trained professionals – therapists, psychologists, psychiatrists – should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.

The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, the 988 Lifeline provides 24/7 access to providers by phone, by text or through an online chat interface. It's free and confidential.

If you want a therapy chatbot, use one built specifically for that purpose

Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject-matter experts, like Wysa and Woebot. Purpose-built therapy tools are more likely to produce better results than bots built on general-purpose language models, she said. The problem is that this technology is still very new.

"I think the challenge for the consumer is that, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.

Don't always trust the bot

Whenever you're interacting with a generative AI model, and especially if you plan to take its advice on something serious like your mental or physical health, remember that you're not talking with a trained person but with a tool designed to produce answers based on probability and programming. It may not provide good advice, and it may not tell you the truth.

Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it as true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's hard to know when it's actually being harmful," Jacobson said.


