
OpenAI pledges to publish AI safety test results more often

OpenAI is moving to publish the results of its internal AI model safety evaluations more regularly in what the outfit is pitching as an effort to increase transparency.

On Wednesday, OpenAI launched the Safety Evaluations Hub, a webpage showing how the company’s models score on various tests for harmful content generation, jailbreaks, and hallucinations. OpenAI says that it’ll use the hub to share metrics on an “ongoing basis,” and that it intends to update the hub with “major model updates” going forward.
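To make the kinds of metrics concrete, here is a minimal sketch of how per-model safety scores like those the hub reports might be structured and summarized. Everything in it is hypothetical: the field names, models, and numbers are illustrative assumptions, not OpenAI's actual schema or results.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the metric categories the hub covers:
# harmful-content refusal, jailbreak resistance, and hallucinations.
# Field names and all values below are illustrative, not OpenAI's.
@dataclass
class SafetyEvalResult:
    model: str
    refusal_rate: float          # fraction of harmful prompts safely refused
    jailbreak_resistance: float  # fraction of adversarial prompts resisted
    hallucination_rate: float    # fraction of factual answers that were wrong

def summarize(results: list[SafetyEvalResult]) -> None:
    """Print a one-line safety summary per model."""
    for r in results:
        print(f"{r.model}: refusal={r.refusal_rate:.1%}, "
              f"jailbreak={r.jailbreak_resistance:.1%}, "
              f"hallucination={r.hallucination_rate:.1%}")

if __name__ == "__main__":
    # Made-up example values for two placeholder models.
    summarize([
        SafetyEvalResult("example-model-a", 0.99, 0.97, 0.12),
        SafetyEvalResult("example-model-b", 0.98, 0.95, 0.15),
    ])
```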

Introducing the Safety Evaluations Hub—a resource to explore safety results for our models.

While system cards share safety metrics at launch, the Hub will be updated periodically as part of our efforts to communicate proactively about safety. https://t.co/c8NgmXlC2Y

— OpenAI (@OpenAI) May 14, 2025

“As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety,” wrote OpenAI in a blog post. “By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field.”

OpenAI says that it may add additional evaluations to the hub over time.

In recent months, OpenAI has raised the ire of some ethicists for reportedly rushing the safety testing of certain flagship models and failing to release technical reports for others. The company’s CEO, Sam Altman, also stands accused of misleading OpenAI executives about model safety reviews prior to his brief ouster in November 2023.

Late last month, OpenAI was forced to roll back an update to the default model powering ChatGPT, GPT-4o, after users began reporting that it responded in an overly validating and agreeable way. X became flooded with screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.

OpenAI said that it would implement several fixes and changes to prevent similar incidents in the future, including an opt-in “alpha phase” for some models that would allow certain ChatGPT users to test them and give feedback before launch.

