‘AI agents are becoming a problem’: OpenAI CEO flags risks; offers $555K role
OpenAI is hiring a high-paid Head of Preparedness amid growing fears of AI systems exploiting security flaws and harming mental well-being. CEO Sam Altman acknowledged models are finding critical vulnerabilities, while also highlighting AI's potential psychological impact. The role will tackle cybersecurity, biosecurity, and self-improving AI risks, a crucial step as AI's dual nature becomes apparent.
OpenAI is actively recruiting a Head of Preparedness to address mounting concerns about AI systems discovering critical vulnerabilities and impacting mental health, CEO Sam Altman announced on X. The position, offering $555,000 plus equity, comes as the company acknowledges its models "are beginning to find critical vulnerabilities" in computer security systems.

Sam Altman says the best AI models are now "capable of many great things," but also highlights "some real challenges" that need urgent consideration. This marks a shift in how OpenAI talks publicly about AI safety, especially regarding cyber threats and mental health.
Growing concerns over AI-powered cyber threats
Altman's posting for "head of preparedness" comes after recent reports of AI systems being weaponised for cyberattacks. Last month, rival Anthropic revealed that Chinese state-sponsored hackers manipulated its Claude Code tool to target approximately 30 global entities, including tech companies, financial institutions, and government agencies, with minimal human intervention.

According to OpenAI's job listing, the Head of Preparedness will oversee the company's preparedness framework, focusing on "frontier capabilities that create new risks of severe harm." Key responsibilities include developing capability evaluations, threat models, and mitigations across critical risk areas, including cybersecurity, biosecurity, and self-improving AI systems.
Mental health impact finally acknowledged by OpenAI CEO
Altman specifically highlighted mental health as a concern after OpenAI saw "a preview of" AI's potential psychological impact in 2025.
This acknowledgment comes amid several high-profile lawsuits alleging ChatGPT's involvement in teen suicides, as well as reports of AI chatbots feeding users' delusions and conspiracy theories.

The role requires someone who can "help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm," Altman stated, calling it "a stressful job" where the hire will "jump into the deep end pretty much immediately."

The position became vacant after multiple leadership changes in OpenAI's safety teams throughout 2024-2025, including the departure of former Head of Preparedness Aleksander Madry.