‘This Will Be a Stressful Job’: OpenAI Is Hiring for a Position That Sounds Horrifying
Being OpenAI's "head of preparedness" sounds like a hellish way to make a living.
“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” OpenAI CEO Sam Altman wrote on X in his announcement of the “head of preparedness” job at OpenAI on Saturday.
In exchange for $555,000 per year, according to OpenAI’s job ad, the head of preparedness is supposed to “expand, strengthen, and guide” the existing preparedness program within OpenAI’s safety systems department. This side of OpenAI builds the safeguards that, in theory, make OpenAI’s models “behave as intended in real-world settings.”
But hey, wait a minute, are they saying OpenAI’s models behave as intended in real-world settings now? In 2025, ChatGPT continued to hallucinate in legal filings, attracted hundreds of FTC complaints, including complaints that it was triggering mental health crises in users, and evidently turned pictures of clothed women into bikini deepfakes. OpenAI had to revoke Sora’s ability to make videos of figures like Martin Luther King, Jr. because users were abusing the privilege to make revered historical figures say basically anything.
When cases related to problems with OpenAI products reach the courts—as with the wrongful death suit filed by the family of Adam Raine, who, it is alleged, received advice and encouragement from ChatGPT that led to his death—there’s a legal argument to be made that users were abusing OpenAI’s products. In November, a filing from OpenAI’s lawyers cited rule violations as a potential cause of Raine’s death.
Whether you buy the abuse argument or not, it’s clearly a big part of the way OpenAI makes sense of what its products are doing in society. Altman acknowledges in his X post about the head of preparedness job that the company’s models can impact people’s mental health and can find security vulnerabilities. We are, he says, “entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits.”
After all, if the goal were purely to not ever cause any harm, the quickest way to make sure of that would be to just remove ChatGPT and Sora from the market altogether.
The head of preparedness at OpenAI, then, is someone who will thread this needle and “[o]wn OpenAI’s preparedness strategy end-to-end,” figuring out how to evaluate the models for unwanted abilities and designing ways to mitigate them. The ad says this person will have to “evolve the preparedness framework as new risks, capabilities, or external expectations emerge.” This can only mean figuring out new ways OpenAI products might be able to harm people or society, and coming up with the rubric for allowing the products to exist, while demonstrating, presumably, that the risks have been dulled enough that OpenAI isn’t legally liable for the seemingly inevitable future “downsides.”