In a recent blog post, OpenAI unveiled its new ‘Safety and Security Committee’, set to be chaired by CEO Sam Altman alongside three other board members. The move comes on the heels of significant executive departures, including that of prominent safety researcher Jan Leike.
Leike resigned, voicing concerns over the company’s insufficient investment in AI safety. He criticised OpenAI for prioritising product development over safety culture and processes, indicating that his disagreements with leadership had ‘reached a breaking point.’
In response to growing internal and external scrutiny, OpenAI has enlisted several technical and policy experts to address safety concerns. Notable appointments include Aleksander Madry as head of preparedness, Lilian Weng as head of safety systems, John Schulman as head of alignment science, Matt Knight as head of security, and Jakub Pachocki as chief scientist.
The committee’s immediate priority is to evaluate and enhance OpenAI’s safety and security processes over the next 90 days. Following this assessment, it will present recommendations to the OpenAI Board and publicly share updates on the actions taken.
To further bolster its efforts, OpenAI plans to engage additional safety, security and technical experts, including former cybersecurity officials. The initiative aims to address the concerns of its workforce and reaffirm the company’s commitment to AI safety in a highly competitive market for AI talent.
Earlier this month, OpenAI disbanded its ‘superalignment’ team, which was dedicated to aligning the company’s products with human needs and priorities. OpenAI claimed that reallocating the team’s members across the organisation would make them more effective. The decision was nonetheless met with discontent from team members and coincided with the departure of another key executive, co-founder Ilya Sutskever.