The head of OpenAI’s hardware group, Caitlin Kalinowski, resigned over the weekend. In posts on social media, Kalinowski said she was concerned about OpenAI’s recent agreement with the Department of Defense.
The agreement would make OpenAI’s generative AI systems available within secure Defense Department computing systems.
“I resigned from OpenAI. I care deeply about the robotics team and the work we built together. This wasn’t an easy call,” Kalinowski wrote on X. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they received. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.”
“To be clear, my issue is that the announcement was rushed without the guardrails defined,” she wrote in a follow-up post. “It’s a governance issue first and foremost. These are too important for deals or announcements to be rushed.”
Kalinowski joined OpenAI in November 2024 to lead the company’s renewed hardware and robotics group. While at the company, Kalinowski helped build its robotics team as it scaled. Before working at OpenAI, Kalinowski led the Meta team building augmented reality glasses.
Before joining OpenAI, Kalinowski worked at Meta, Oculus VR, Apple, and more. | Source: Meta
OpenAI’s defense agreement
Kalinowski’s resignation comes as AI’s role in military operations becomes an increasingly heated topic. In recent months, the Pentagon has pressured major AI companies to allow their models to be used for “all lawful purposes.” Anthropic, another AI developer, has pushed back on this stance.
Anthropic hoped to ink a deal with the U.S. Department of Defense (DoD) and negotiate guidelines to prevent its AI technology from being used for mass domestic surveillance or fully autonomous weapons. After the discussions fell through, the Pentagon designated Anthropic a supply-chain risk. Anthropic said it plans to fight the designation in court.
OpenAI announced its deal with the DoD at the end of February. The news came just days after Anthropic ended its own discussions with the DoD. As part of the agreement, OpenAI is allowing its technology to be used in classified environments. Sam Altman, OpenAI’s CEO, claimed that the company’s contract includes protections preventing the DoD from using its technology for surveillance or autonomous weapons.
The post OpenAI robotics head resigns over Pentagon deal appeared first on The Robot Report.