Elon Musk’s AI firm, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.
xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.
Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.
As The Midas Project noted in a blog post on Tuesday, however, the draft applied only to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy “within three months,” putting the deadline at May 10. May 10 came and went without acknowledgement on xAI’s official channels.
Despite Musk’s frequent warnings about the dangers of AI gone unchecked, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.
That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and thus potentially more dangerous, than ever.