OpenAI warns that its new ChatGPT Agent could aid dangerous bioweapon development
OpenAI’s latest product promises to automatically gather data, build spreadsheets, book travel, and spin up slide decks on a user’s behalf. ChatGPT Agent, a new agentic AI tool that can take actions for users, is the first OpenAI product classified as having “high” capability for biorisk.
That means the model could provide meaningful assistance to “novice” actors, enabling them to create known biological or chemical threats. According to OpenAI’s Preparedness Framework, the real-world consequence is that biological or chemical terror attacks by non-state actors could become more likely and more frequent.
“Some might think biorisk is not real, and that models only provide information that could be found through search. That may have been true in 2024, but it is definitely not true today. Based on our evaluations and those of outside experts, the risk is very real,” Boaz Barak, a member of OpenAI’s technical staff, wrote in a social media post.
“We cannot be sure this model could enable a novice to cause severe biological harm, but I believe it would have been deeply irresponsible to release it without the comprehensive mitigations we have put in place,” he added.
OpenAI said classifying the model as high-risk for bio-misuse is a “precautionary” step, one that triggered additional safeguards for the tool.
Keren Gu, a safety researcher at OpenAI, said that while the company had no definitive evidence that the model could meaningfully guide a novice to causing severe biological harm, it activated the safeguards anyway. Those safeguards include refusing prompts that could help someone build a bioweapon, systems that flag potentially unsafe requests for expert review, strict rules that block dangerous content, faster incident response, and robust monitoring for signs of misuse.
One of the key challenges in mitigating biorisk is that the same capabilities could also unlock life-saving medical breakthroughs, one of the great promises of advanced AI models.
The company has grown increasingly concerned about the potential misuse of its models in biological weapons development. In a blog post last month, OpenAI announced it was stepping up safety testing to reduce the risk that its models could be used to help create biological weapons. The AI lab warned that without these precautions, models could soon enable “novice uplift,” helping people with little scientific background create dangerous weapons.
“Unlike nuclear or radiological threats, obtaining materials is not a barrier to creating bio threats, so security depends heavily on the scarcity of knowledge and lab skills,” Barak said. “Based on our assessments and those of external experts, an unmitigated ChatGPT Agent could narrow that knowledge gap and offer advice closer to that of a subject-matter expert.”
ChatGPT Agent
OpenAI’s new ChatGPT feature is an attempt to capitalize on one of the most talked-about, and riskiest, areas of AI development: agents.
The new feature works like a personal assistant that can handle tasks such as booking restaurant reservations, shopping online, and compiling lists of job candidates. Unlike previous versions, the tool uses a virtual computer to actively control web browsers, interact with files, and navigate apps such as spreadsheets and slide decks.
The company has folded the team behind Deep Research, a tool built to conduct multi-step online research on complex tasks, into a single group developing the new tool.
AI labs, including Google and Anthropic, are currently racing to build agents that can independently manage complex digital tasks. Big tech companies view AI agents as a commercial opportunity, as businesses increasingly move to embed AI in their workflows and automate specific tasks.
OpenAI acknowledges that greater autonomy introduces greater risk, and it emphasizes user control to mitigate that risk. For example, the agent asks for permission before taking consequential actions, and users can pause, redirect, or stop it at any time.