OpenAI Introduces New Governance Model for AI Safety Oversight

Led by Aleksander Madry, a new team evaluates potential risks in unreleased AI models, focusing on cybersecurity threats and other dangers.

By Maxwell William | Edited by Mark Klekas

Key Takeaways

  • OpenAI's board can veto AI model releases, regardless of leadership approval.
  • A new safety approach includes teams for current products, advanced models, and potential risks from future powerful AI systems.

This story originally appeared on Readwrite.com

OpenAI has introduced a new governance structure that grants its board the authority to withhold the release of AI models, even if company leadership has deemed them safe, according to a recent Bloomberg report. The decision, detailed in recently published guidelines, comes after a tumultuous period at OpenAI, including the temporary ousting of CEO Sam Altman. This event highlighted the delicate balance of power between the company's directors and its executive team.

OpenAI's newly formed "preparedness" team, led by Aleksander Madry of MIT, is tasked with continuously assessing the company's AI systems. The team will focus on identifying and mitigating cybersecurity threats as well as chemical, nuclear, and biological risks. OpenAI defines "catastrophic" risks as those capable of causing extensive economic damage or significant harm to individuals.

Madry's team will provide monthly reports to an internal safety advisory group, which will then offer recommendations to Altman and the board. While the leadership team can decide on the release of new AI systems based on these reports, the board retains the final say, potentially overruling any decision made by the company's executives.

OpenAI's three-tiered approach to AI safety

OpenAI's approach to AI safety is structured around three distinct teams:

  1. Safety Systems: This team focuses on current products like GPT-4, ensuring they meet safety standards.
  2. Preparedness: The new team led by Madry evaluates unreleased, advanced AI models for potential risks.
  3. Superalignment: Led by Ilya Sutskever, this team concentrates on hypothetical future AI systems that could possess immense power.

Each team plays a crucial role in assessing different aspects of AI safety, from existing products to future developments.

The preparedness team will rate AI models as "low," "medium," "high," or "critical" based on perceived risks. OpenAI plans to release only those models rated as "medium" or "low." The team will also implement changes to reduce identified dangers and evaluate the effectiveness of these modifications.
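
To make the rating and veto rules concrete, here is a minimal sketch in Python; it is an illustration of the policy as described in the article, not OpenAI's actual tooling, and the names used (RiskLevel, can_release, board_approves) are hypothetical.

    from enum import IntEnum

    class RiskLevel(IntEnum):
        # Ordered so that a higher value means greater perceived risk.
        LOW = 0
        MEDIUM = 1
        HIGH = 2
        CRITICAL = 3

    # Per the article, only "low"- or "medium"-rated models may ship.
    RELEASE_THRESHOLD = RiskLevel.MEDIUM

    def can_release(rating: RiskLevel, board_approves: bool) -> bool:
        """Hypothetical gate mirroring the stated policy: the rating must
        be at or below the threshold AND the board must not veto."""
        return rating <= RELEASE_THRESHOLD and board_approves

    # A "high"-rated model is withheld even with board approval.
    assert not can_release(RiskLevel.HIGH, board_approves=True)
    # A "low"-rated model can still be blocked by a board veto.
    assert not can_release(RiskLevel.LOW, board_approves=False)
    assert can_release(RiskLevel.MEDIUM, board_approves=True)

Ordering the levels as integers makes "higher value, higher risk" explicit, so the release rule reduces to a single comparison plus the board's override.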

Madry told Bloomberg he hopes other companies will adopt OpenAI's guidelines for their own AI models. The guidelines formalize processes OpenAI has already used when evaluating and releasing AI technology. Madry emphasized that AI's impact is something to be shaped deliberately: "AI is not something that just happens to us that might be good or bad. It's something we're shaping."
