Sam Altman is willing to pay somebody $555,000 a year to keep ChatGPT in line
How’d you like to earn more than half a million dollars working for one of the world’s fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn’t stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight.
Altman flagged the opening for the company's Head of Preparedness on Saturday in a post on X. He described the role, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI's systems and understanding how they could be abused, and noted that AI models are beginning to present "some real challenges" as they rapidly improve and gain new capabilities.
"The potential impact of models on mental health was something we saw a preview of in 2025," Altman said, without elaborating on specific cases or products.
AI has been flagged as an increasingly common trigger of psychological troubles in both juveniles and adults, with chatbots reportedly linked to multiple deaths in the past year. OpenAI, one of the most popular chatbot makers on the market, rolled back a GPT-4o update in April 2025 after acknowledging it had become overly sycophantic and could reinforce harmful or destabilizing user behavior.
Despite that, OpenAI released GPT-5.1 last month with features that seem designed to nurture emotional dependence, including emotionally suggestive language and "warmer, more intelligent" responses. Sure, it might be less sycophantic, but it'll speak to you with more intimacy than ever before, making it feel more like a human companion than the impersonal, logical ship computer from Star Trek that spits out facts with little regard for feeling.
It's no wonder the company needs someone to steer the ship with regard to model safety.
"We have a strong foundation of measuring growing capabilities," Altman said, "but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused."
According to the job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI's preparedness framework [PDF], which the company describes as its approach "to tracking and preparing for frontier capabilities that create new risks of severe harm."
It's not a new role, mind you, but it's one that's seen more turnover than the Defense Against the Dark Arts faculty position at Hogwarts.
Aleksander Madry, director of MIT's Center for Deployable Machine Learning and faculty leader at the Institute's AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI reassigned him to a reasoning-focused research role.
That reassignment came in the wake of a number of high-profile safety leadership exits at the company and a partial reset of OpenAI's safety team structure.
In Madry's place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both occupied other roles at OpenAI prior to heading up preparedness, but neither lasted long in the position. Weng left OpenAI in November 2024, while Candela left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he's out of the technical space entirely and is now serving as head of recruiting.
"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman said of the open position.
Understandably so - OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have attested. One executive who left the company in October called out the Altman outfit for not focusing on safety and the long-term effects of its AGI push as much as it should, suggesting the company was pursuing its goal of dominating the industry at the expense of the rest of society.
Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted.
OpenAI didn't respond to questions for this story. ®