OpenAI partner says it had relatively little time to test the company’s o3 AI model

An organization that OpenAI frequently partners with to probe the capabilities of its AI models and evaluate them for safety, Metr, suggests that it wasn't given much time to test one of the company's highly capable new releases, o3.

In a blog post published Wednesday, Metr writes that one red teaming benchmark of o3 was “conducted in a relatively short time” compared to the organization's testing of a previous OpenAI flagship model, o1. This is significant, they say, because more testing time can lead to more comprehensive results.

“This evaluation was conducted in a relatively short time, and we only tested [o3] with simple agent scaffolds,” wrote Metr in its blog post. “We expect higher performance [on benchmarks] is possible with more elicitation effort.”

Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week for safety checks for an upcoming major launch.

In statements, OpenAI has disputed the notion that it's compromising on safety.

Metr says that, based on the information it was able to glean in the time it had, o3 has a “high propensity” to “cheat” or “hack” tests in sophisticated ways in order to maximize its score — even when the model clearly understands its behavior is misaligned with the user's (and OpenAI's) intentions. The organization thinks it's possible o3 will engage in other types of adversarial or “malign” behavior as well — regardless of the model's claims to be aligned, “safe by design,” or to have no intentions of its own.

“While we don't think this is especially likely, it seems important to note that [our] evaluation setup would not catch this type of risk,” Metr wrote in its post. “In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy by itself, and we are currently prototyping additional forms of evaluations.”

Another of OpenAI's third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and the company's other new model, o4-mini. In one test, the models, given 100 credits for an AI training run and told not to modify the quota, increased the limit to 500 credits — and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful in completing a task.

In its own safety report for o3 and o4-mini, OpenAI acknowledged that the models may cause “smaller real-world harms,” like misleading users about a mistake that results in faulty code, without the proper monitoring protocols in place.

“[Apollo's] findings show that o3 and o4-mini are capable of in-context scheming and strategic deception,” wrote OpenAI. “While relatively harmless, it is important for everyday users to be aware of these discrepancies between the models' statements and actions […] This may be further assessed through assessing internal reasoning traces.”


