xAI’s promised safety report is MIA

Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing with little restraint.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.

As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said that it planned to release a revised version of its safety policy “within three months” — by May 10. The deadline came and went without acknowledgement on xAI’s official channels.

Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.

That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or skipped publishing reports altogether). Some experts have expressed concern that the seeming deprioritization of safety efforts is coming at a time when AI is more capable — and thus potentially dangerous — than ever.
