AI customer service chatbot made up a company policy, causing confusion


On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them the behavior was expected under a new policy. But no such policy existed, and Sam was a bot. The AI model had made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of an AI confabulation (also called a “hallucination”) causing potential business damage. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but false information. Instead of acknowledging uncertainty, AI models often prioritize producing plausible, confident responses, even when that means manufacturing information from scratch.

For businesses that deploy these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, canceled subscriptions.

How did it unfold?

The incident began when a Reddit user named BrokenToasterOven noticed that Cursor sessions were unexpectedly terminated while swapping between a desktop, a laptop, and a remote development box.

“Logging into Cursor on one machine immediately invalidates the session on any other machine,” BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”

Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam. The response sounded definitive and official, and the user had no reason to suspect that Sam was not human.

Following the initial Reddit post, other users took the reply as official confirmation of an actual policy change, one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for devs,” one user wrote.

Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as the reason. “I literally just cancelled my sub,” wrote the original Reddit poster, adding that their workplace was now “purging it completely.” Others chimed in: “Yeah, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.

“Hey! We have no such policy,” a Cursor representative wrote in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”

AI confabulation as a business risk

Cursor’s stumble recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother died and was mistakenly told by the airline’s AI agent that he could apply for a bereavement rate retroactively. When Air Canada later denied his refund request, the company argued that “the chatbot is a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense and found the company liable for the information provided by its AI tools.

Rather than disputing liability as Air Canada did, Cursor acknowledged the error and took steps to make amends. Cursor co-founder Michael Truell later apologized on Hacker News for the confusion over the nonexistent policy, explaining that the user had been refunded and that the issue stemmed from a backend change intended to improve session security, which unintentionally invalidated sessions for some users.

“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”

Still, the incident raised lingering questions about disclosure, since many of the users who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.

Cursor fixed the underlying technical bug, but the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

“There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”

This story originally appeared on Ars Technica.
