More often than not, AI companies are locked in a race to the top, treating one another as rivals and competitors. Today, OpenAI and Anthropic revealed that they agreed to evaluate the alignment of each other's publicly available systems and shared the results of their analyses. The full reports get fairly technical, but are worth a read for anyone following the nuts and bolts of AI development. A broad summary showed some flaws with each company's offerings, as well as revealing pointers for how to improve future safety tests.
Anthropic said it evaluated OpenAI models for "sycophancy, whistleblowing, self-preservation, and supporting human misuse, as well as capabilities related to undermining AI safety evaluations and oversight." Its review found that OpenAI's o3 and o4-mini models fell in line with the results for its own models, but raised concerns about possible misuse with the GPT-4o and GPT-4.1 general-purpose models. The company also said sycophancy was an issue to some degree with all tested models other than o3.
Anthropic's tests did not include OpenAI's most recent release. GPT-5 has a feature called Safe Completions, which is meant to protect users and the public against potentially dangerous queries. OpenAI recently faced its first wrongful death lawsuit after a tragic case in which a teenager discussed attempts and plans for suicide with ChatGPT for months before taking his own life.
On the flip side, OpenAI ran tests on Anthropic models for instruction hierarchy, jailbreaking, hallucinations and scheming. The Claude models generally performed well in instruction hierarchy tests, and had a high refusal rate in hallucination tests, meaning they were less likely to offer answers in cases where uncertainty meant their responses could be wrong.
The move for these companies to conduct a joint assessment is intriguing, particularly since OpenAI allegedly violated Anthropic's terms of service by having programmers use Claude in the process of building new GPT models, which led to Anthropic barring OpenAI's access to its tools earlier this month. But safety with AI tools has become a bigger concern as more critics and legal experts seek guidelines to protect users, particularly minors.