ChatGPT creator warns: AI models getting dangerous; OpenAI needs safety boss
OpenAI's Sam Altman seeks a high-stakes 'Head of Preparedness' to address AI's growing security risks and mental health impacts. The company faces lawsuits over alleged harm caused by ChatGPT, while its safety teams have been disbanded. This new role aims to balance rapid AI advancement with crucial safeguards, a challenging task given the technology's unprecedented capabilities and potential dangers.
The man who unleashed ChatGPT on the world is now sounding the alarm about his own creation. OpenAI CEO Sam Altman is hunting for a Head of Preparedness to tackle what he calls "real challenges" emerging from AI systems that have grown so sophisticated they're finding critical security vulnerabilities and impacting users' mental health.
The $555,000 role comes as the company that revolutionised generative AI in late 2022 grapples with lawsuits, safety team departures, and the uncomfortable reality that its technology may be causing genuine harm.
"Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges," Altman wrote on X. He didn't sugarcoat the position's demands, warning applicants: "This will be a stressful job and you'll jump into the deep end pretty much immediately."
The new hire will lead OpenAI's preparedness framework, evaluating frontier AI capabilities and coordinating safeguards across cybersecurity, biosecurity, and the prospect of AI systems that can improve themselves—challenges that Altman admits have "little precedent." The company specifically needs help figuring out "how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm."
Safety teams keep vanishing as products multiply
The job opening arrives amid a disturbing pattern at OpenAI: safety teams appear, then disappear. The company's Superalignment team, launched in 2023 with the mission of preventing AI systems "much smarter than us" from going rogue, lasted less than a year before being disbanded in May 2024. Co-leader Jan Leike quit with a blistering exit statement, declaring that "safety culture and processes have taken a backseat to shiny products."