Even OpenAI's CEO says you should be careful what you share with ChatGPT
Maybe don't spill your deepest, darkest secrets to a chatbot. You don't have to take my word for it. Take it from the guy behind the most popular generative AI models on the market.
Sam Altman, CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on this past weekend's episode of his podcast. Altman suggested that conversations with AI should have protections similar to those you have with your doctor or lawyer. At one point, Von said one of the reasons he was hesitant to use some AI tools was that he didn't know who would have access to his personal information.
“I think that makes sense,” Altman said.
More and more AI users are treating chatbots like therapists, doctors or lawyers, and that's creating serious privacy problems for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are surprisingly murky. There are, of course, other issues with using AI as a therapist or confidant, such as how bots can give terrible advice or reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of 11 things you should never do with ChatGPT, and why.)
Altman is clearly aware of the problem, and he seems at least a bit bothered by it. "People use it, especially young people, use it as a therapist, a life coach. I have these relationship issues, what should I do?" he said. "Right now, you have legal privilege when you talk to a therapist, a lawyer or a doctor about those issues."
During one part of the conversation, the question came up of whether there should be more rules or regulations around AI. Rules that would curb AI companies and the technology's development have found little favor in Washington these days. President Donald Trump's AI Action Plan, released this week, expressed a desire for the technology to face less regulation, not more. But rules designed to protect the people who use AI might find favor.
Read more: AI Essentials: 29 ways to make gen AI work for you, according to our experts
Altman seemed most concerned about the lack of legal protections that would keep companies like his from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during its lawsuit with The New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.)
"If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very messed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."
Be careful what you tell AI about yourself
For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets.
William Agnew, a researcher at Carnegie Mellon University who was part of a team that recently evaluated how chatbots handle therapy-like questions, told me that privacy is a paramount issue when confiding in AI tools. The uncertainty around how these models work, and whether your conversations can truly be kept from showing up in other people's chats, is reason enough to hesitate.
"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said.
If ChatGPT or another tool regurgitates information from your therapy session or from the medical questions you asked, it could surface when your insurance company, or anyone else with an interest in your personal life, asks the same tool about you.
"People should really think about these things more and realize that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."