Now worth $61 billion, Anthropic unveils its most powerful AI models yet.


Anthropic unveiled its latest generation of "frontier," or cutting-edge, AI models, Claude Opus 4 and Claude Sonnet 4, at its first developer conference on Thursday in San Francisco. The $61 billion AI startup said in a blog post that the new, highly anticipated Opus model is the "world's best coding model" and delivers "sustained performance on long-running tasks that require focused effort and thousands of steps." AI agents built on the new model can analyze thousands of data sources and carry out complex actions.

The new releases underscore the fierce competition among companies racing to build the world's most advanced AI models and to deploy new techniques for speed and efficiency, particularly in areas such as software coding. Google this week demonstrated Gemini Diffusion, an experimental research model. On a benchmark that compares how well different large language models perform on software engineering tasks, Anthropic's two new models beat OpenAI's latest models, while Google's best model trailed behind.

Some early testers have already had access to the models and tried them on real-world tasks. In one example the company offered, an AI general manager at Rakuten, a shopping rewards company, said Opus 4 "coded autonomously for nearly seven hours" after being deployed on a complex project.

"This is actually a huge change and a leap forward in terms of what these AI systems can do," Dianne Penn, a member of the technical staff at Anthropic, told Fortune. In particular, she said, the models are advancing from serving as "copilots," or assistants, to acting as agents, or virtual collaborators that can work autonomously on behalf of a user.

Claude Opus 4 has several new capabilities, she added. Historically, these systems don't remember everything they have done before, Penn said, but the new model has been given a form of long-term task awareness. It uses a kind of file system to track its progress, strategically checking what is stored in memory in order to take the next step.

Both models can alternate between reasoning and using tools such as web search, and they can use multiple tools at once, for example, searching the web while running code tests.

"I think this is really a race to the top," said Michael Gerstenhaber, AI platform product lead at Anthropic. "We want to make sure that AI improves for everyone, and that we are putting pressure on all the labs to increase that in a safe way," he explained.

Claude Opus 4 is launching under stricter safety protocols than any previous Anthropic model. The company's Responsible Scaling Policy (RSP), a public commitment originally released in September 2023, holds that Anthropic will not "train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels." Anthropic was founded in 2021 by former OpenAI employees who worried that OpenAI was prioritizing speed and scale over safety and governance.

In October 2024, the company updated its RSP with "a more flexible and nuanced approach to assessing and managing AI risks while maintaining our commitment not to train or deploy models unless we have implemented adequate safeguards."

Until now, Anthropic's models have all been classified as AI Safety Level 2 (ASL-2) under the company's Responsible Scaling Policy, which provides a baseline of safety and security measures for AI models. An Anthropic spokesperson said the company has not ruled out that the new Claude Opus 4 could meet the threshold requiring stricter safeguards, so it is proactively launching the model under the more stringent ASL-3 safety standard. That standard calls for enhanced protection against model theft and misuse, including stronger defenses to prevent access to the model's internal "weights" and to block the release of harmful information.

Models categorized at Anthropic's third safety level meet more dangerous capability thresholds under the company's Responsible Scaling Policy and are powerful enough to pose significant risks, such as aiding in weapons development or automating AI R&D. Anthropic confirmed that Opus 4 does not require the even higher level of protection classified as ASL-4.

"We anticipated we might do this when we launched our last model, Claude 3.7 Sonnet," an Anthropic spokesperson said. "In that case, we determined that the model did not require the protections of the ASL-3 standard. But we acknowledged the very real possibility that, given the pace of progress, near-future models might warrant these enhanced measures."

Ahead of the release of Claude Opus 4, she explained, Anthropic proactively decided to launch it under ASL-3 standards. "This approach allowed us to focus on developing, testing, and refining these protections before we needed them. We ruled out the need for ASL-4 safeguards based on our testing." Anthropic did not say what prompted the decision to move to ASL-3.

Anthropic has consistently released model or system cards with its launches, providing detailed information on its models' capabilities and safety evaluations. Penn told Fortune that Anthropic is releasing model cards with the new Opus 4 and Sonnet 4, and a spokesperson confirmed the card would be published when the models launched today.

Recently, companies including OpenAI and Google have delayed releasing model cards. In April, OpenAI was criticized for releasing its GPT-4.1 model without a model card, saying the model was not a "frontier" model and so did not require one. And in March, Google published the model card for its Gemini 2.5 Pro model weeks after the model's release; AI governance experts criticized it as "meager" and "worrisome."

This story was originally featured on Fortune.com
