Now That OpenAI's Superalignment Team Has Been Disbanded, Who's Preventing AI From Going Rogue? We spoke to an AI expert who says safety and innovation are not separate things that must be balanced; they go hand in hand.

By Sherin Shibu | Edited by Melissa Malamut

Key Takeaways

  • Former OpenAI research lead Jan Leike and chief scientist Ilya Sutskever resigned last week.
  • Leike said he resigned because he felt safety had taken a backseat to new products at OpenAI.
  • One AI expert tells BIZ Experiences that safety and innovation are not separate things that need to be balanced; they should go hand in hand.

How do we prevent AI from going rogue?

OpenAI, the $80 billion AI company behind ChatGPT, just dissolved the team tackling that question — after the two executives in charge of the effort left the company.

The AI safety controversy comes less than a week after OpenAI announced a new AI model, GPT-4o, with more functionality — and a voice eerily similar to Scarlett Johansson's. The company paused the rollout of that particular voice on Monday.

Related: Scarlett Johansson 'Shocked' That OpenAI Used a Voice 'So Eerily Similar' to Hers After Already Telling the Company 'No'

Sahil Agarwal, who holds a Yale PhD in applied mathematics and co-founded and runs Enkrypt AI, a startup focused on making AI less of a risky bet for businesses, told BIZ Experiences that innovation and safety are not separate things that need to be balanced; rather, they go hand in hand as a company grows.

"You're not stopping innovation from happening when you're trying to make these systems more safe and secure for society," Agarwal said.

OpenAI Exec Raises Safety Concerns

Last week, former OpenAI chief scientist and co-founder Ilya Sutskever and former research lead Jan Leike both resigned from the AI giant. The two led the superalignment team, which was tasked with ensuring that AI remains under human control even as its capabilities grow.

Related: OpenAI Chief Scientist, Cofounder Ilya Sutskever Resigns

In his parting statement, Sutskever said he was "confident" that OpenAI would build "safe and beneficial" AI under CEO Sam Altman's leadership, while Leike said he left because he felt OpenAI did not prioritize AI safety.

"Over the past few months my team has been sailing against the wind," Leike wrote. "Building smarter-than-human machines is an inherently dangerous endeavor."

Leike also said that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI and called for the ChatGPT-maker to put safety first.

OpenAI dissolved the superalignment team that Leike and Sutskever led, the company confirmed to Wired on Friday.

Altman and OpenAI president and co-founder Greg Brockman released a statement in response to Leike on Saturday, saying that OpenAI has raised awareness of AI's risks so the world can prepare for them and that the company has been deploying its systems safely.

How Do We Prevent AI from Going Rogue?

Agarwal said that as OpenAI tries to make ChatGPT more human-like, the danger is not necessarily a super-intelligent being.

"Even systems like ChatGPT, they are not implicitly reasoning by any means," Agarwal told BIZ Experiences. "So I don't view the risk as from a super-intelligent artificial being perspective."

The problem, he explained, is that as AI becomes more powerful and multifaceted, the possibility of implicit bias and toxic content increases, and the AI becomes riskier to implement. As OpenAI adds more ways to interact with ChatGPT, from images to video, it has to think about safety from more angles.

Related: OpenAI Launches New AI Chatbot, GPT-4o

Agarwal's company released a safety leaderboard earlier this month that ranks the safety and security of AI models from Google, Anthropic, Cohere, OpenAI, and more.

The company found that the new GPT-4o model potentially contains more bias and can possibly produce more toxic content than its predecessor.

"What ChatGPT did is it made AI real for everyone," Agarwal said.
