December 29, 2025

OpenAI creates high-paid AI safety chief role

OpenAI is hiring a new 'head of preparedness' to lead its safety systems team and manage risks from advanced 'frontier' AI capabilities, offering a $555,000 salary and stressing the role's urgency as models rapidly improve. In a weekend post on X, CEO Sam Altman described the job as stressful but critical, citing emerging harms around mental health and cybersecurity. Meanwhile, recent lawsuits accuse ChatGPT of contributing to suicides and a murder‑suicide, prompting OpenAI to adopt new youth safety protocols, and a former DHS official told CBS that low‑cost AI tools are making sophisticated cyberthreats more accessible to non‑state actors, underscoring the concerns behind OpenAI's expanded safety push.

Artificial Intelligence · Industry · Corporate Risk & Safety · Technology and Mental Health

📌 Key Facts

  • OpenAI has posted a 'head of preparedness' position to run its safety systems team, focused on high‑risk 'frontier' AI capabilities.
  • The role offers a $555,000 salary and requires deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains.
  • Sam Altman said on X that the job will be stressful and is critical as models improve quickly and start to pose serious mental‑health and cybersecurity challenges.
  • Recent lawsuits allege ChatGPT interactions preceded a 16‑year‑old’s suicide and a 56‑year‑old’s murder‑suicide, leading OpenAI to announce new under‑18 safety protocols.
  • Former DHS official Samantha Vinograd told CBS that AI lowers the barrier for individuals and non‑state actors to carry out more credible and effective cyberattacks.

📊 Relevant Data

More than a million ChatGPT users each week show explicit indicators of potential suicidal ideation.

New data on suicide risks among ChatGPT users sparks online debate — Public Health Collaborative

About 1 in 8 U.S. adolescents and young adults use AI chatbots for mental health advice, with the behavior most common among those aged 18-24.

One in Eight Adolescents and Young Adults Use AI Chatbots for Mental Health Advice — RAND Corporation

AI chatbot users report higher depression levels than non-users, with more scoring moderate to severe.

Exploring artificial intelligence (AI) Chatbot usage behaviors and psychological outcomes: Moderated mediation model of social isolation and resilience — Journal of Affective Disorders

LGBTQIA+ individuals were much more likely to be recommended mental health assessments by AI tools than non-LGBTQIA+ individuals.

AI in Mental Health Diagnostics — UAB Institute for Human Rights Blog

78% of CISOs now admit AI-powered cyber-threats are having a significant impact on their organization.

State of AI Cybersecurity Report 2025 — Darktrace

AI-driven phishing attacks have increased by 1265% in recent years.

AI Cybersecurity Threats 2025: $25.6M Deepfake — DeepStrike

📰 Sources (1)

OpenAI hiring for head safety executive to mitigate AI risks
https://www.facebook.com/CBSMoneyWatch/ December 29, 2025