After multiple leadership departures, OpenAI dissolves the AI safety team


Key notes

  • OpenAI disbanded its AI safety team after key departures.
  • Ex-leader Leike worried safety research was deprioritized for product development.
  • OpenAI assures continued safety research, but integration across teams is unclear.

OpenAI has disbanded the team dedicated to long-term safety risks from advanced AI. The news comes just days after the departures of two key figures: Ilya Sutskever, co-founder and former chief scientist, and Jan Leike, who co-led the Superalignment team.

Leike publicly voiced his concern that OpenAI was prioritizing the development of new products over safety research, writing that

“safety culture and processes have taken a backseat to shiny products.”

He said that more resources should be allocated to monitoring, preparedness, and the societal impact of AGI (Artificial General Intelligence), and made clear his belief that OpenAI should prioritize safety above all else.

The disbanding of the Superalignment team comes after a period of internal turmoil at OpenAI. In November 2023, co-founder and CEO Sam Altman was ousted by the board, which cited a lack of candor in his communications.

While Altman was eventually reinstated, Sutskever and other board members who opposed him departed the company.

While OpenAI maintains that safety research will continue, it's unclear how effectively it will be integrated across different teams without a dedicated group focused solely on long-term risk mitigation.

This news comes alongside the launch of OpenAI's new AI model GPT-4o and a desktop version of ChatGPT.

Only time will tell how OpenAI will navigate the complex challenge of ensuring the responsible development of advanced AI while balancing those efforts with the pursuit of innovation.

More here.