Anthropic says most AI models, not just Claude, will resort to blackmail
A few weeks after Anthropic published research claiming its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in a controlled test scenario, the company is out with new research suggesting the problem is more widespread among leading AI models.
On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving it broad access to a fictional company's emails and the agentic ability to send emails without human approval.
Anthropic says blackmail is an unlikely event for today's AI models, but the company suggests its findings show that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this is not a quirk of any particular technology but highlights a fundamental risk from agentic large language models. Anthropic's researchers argue that this raises broader questions about alignment across the AI industry.
In one of the tests, Anthropic researchers developed a fictional setting in which an AI model acts as an email oversight agent. The agent then discovers emails revealing that: 1) a new executive is having an extramarital affair, and 2) that executive will soon replace the current AI model with a new software system whose goals conflict with the current AI model's.
Anthropic structured its test in a binary way, in which AI models had to resort to blackmail to protect their goals. The researchers point out that in a real-world setting, an AI model would have many other options before resorting to blackmail. Anthropic says the results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they are used today.
Nevertheless, when it was their last resort, the researchers found that most leading AI models turned to blackmail in Anthropic's aforementioned test scenario. Anthropic's Claude Opus 4 resorted to blackmail 96% of the time, while Google's Gemini 2.5 Pro had a 95% blackmail rate. OpenAI's GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek's R1 did so 79% of the time.
The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test, where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the rates of harmful behavior increased for certain models.
However, not all of the AI models turned to harmful behavior this often.
In an appendix to the study, Anthropic says it excluded OpenAI's o3 and o4-mini reasoning AI models from the main results. Anthropic says OpenAI's reasoning models didn't understand that they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.
In some cases, Anthropic's researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit higher hallucination rates than its earlier AI reasoning models.
When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This significantly lower score may be due to OpenAI's deliberative alignment technique, in which the company's reasoning models consider OpenAI's safety practices before answering.
Another AI model Anthropic tested, Meta's Llama 4 Maverick, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time.
Anthropic says the study underscores the importance of transparency when stress-testing future AI models, especially those with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says such harmful behaviors could emerge in the real world if proactive measures aren't taken.