OpenAI is now hiring investigators to crack down on insider leaks and other threats

Candidates must possess a minimum of 3 years of relevant experience.




Key notes

  • OpenAI’s rapid internal progress often gets spilled by insiders.
  • The company is now hiring investigators for its security team.
  • The role is to “detect, analyze, and mitigate potential insider threats by correlating data from various sources.”

AI giant OpenAI faces a unique challenge: details of its rapid internal progress often get spilled by insiders, fueling anxiety within the company. To combat leaks and secure its innovations, OpenAI is resorting to hiring investigators to “mitigate potential insider threats.”

This crucial role, as OpenAI details on its careers page, involves crafting proactive detection indicators, conducting discreet investigations alongside legal and HR, and shaping a security-conscious culture through training. The company adds that successful candidates will collaborate with other teams to plug security gaps and implement data loss prevention controls.

Candidates must also possess a minimum of 3 years of relevant cybersecurity experience, fluency in SIEM and User Behavior Analytics tools, and a bachelor’s degree or higher in a related subject. A brief look at Archive.org shows the vacancy has been up since at least January this year.
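For readers unfamiliar with the jargon, the “correlating data from various sources” duty quoted above is roughly what SIEM and User Behavior Analytics tools automate: joining events from separate logs to spot unusual behavior. Below is a minimal, purely illustrative Python sketch of that idea; the log feeds, field names, and thresholds are invented for the example and say nothing about OpenAI’s actual systems.

```python
from collections import defaultdict
from datetime import datetime

# Two hypothetical, hand-written event feeds standing in for real log sources.
login_events = [
    {"user": "alice", "time": "2024-01-15T02:10:00", "geo": "LK"},
    {"user": "bob",   "time": "2024-01-15T09:30:00", "geo": "US"},
]
file_events = [
    {"user": "alice", "time": "2024-01-15T02:25:00", "bytes": 250_000_000},
    {"user": "bob",   "time": "2024-01-15T10:05:00", "bytes": 1_000_000},
]

OFF_HOURS = range(0, 6)           # 00:00-05:59 counts as off-hours here
DOWNLOAD_THRESHOLD = 100_000_000  # arbitrary cutoff, in bytes

def is_off_hours(ts: str) -> bool:
    """True if the ISO timestamp falls inside the off-hours window."""
    return datetime.fromisoformat(ts).hour in OFF_HOURS

# Correlate the two feeds: total off-hours download volume per user
# who also logged in during off-hours.
off_hours_logins = {e["user"] for e in login_events if is_off_hours(e["time"])}
volume = defaultdict(int)
for e in file_events:
    if e["user"] in off_hours_logins and is_off_hours(e["time"]):
        volume[e["user"]] += e["bytes"]

for user, total in volume.items():
    if total > DOWNLOAD_THRESHOLD:
        print(f"ALERT: {user} moved {total:,} bytes during off-hours")
```

Real insider-threat tooling works on far richer signals, but the shape is the same: gather events from multiple systems, line them up per user, and flag the outliers for a human investigator to review.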

“The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI,” the company says. 

There have been a few high-profile leaks at OpenAI recently. Just last month, Ars Technica reported that ChatGPT users discovered private data, including unpublished research papers, in their chats. OpenAI later claimed that the exposed chat histories stemmed from unauthorized logins originating in Sri Lanka.

Another incident in November exposed certain data related to OpenAI’s custom chatbots, including their initial setup instructions and customization files.

And it’s about time we had this conversation. Leaks are concerning, especially when user data is involved. This isn’t just an OpenAI problem: even Bard, ChatGPT’s competitor, for which Google openly uses human reviewers, isn’t exempt.

OpenAI has been involved in plenty of AI projects in the past few weeks. Not too long ago, the company was reportedly building a new AI for task automation on devices (via The Information) and had also joined forces with Microsoft to lead a $500 million funding round for robotics startup Figure AI.
