
What The Ex-OpenAI Safety Employees Are Worried About



William Saunders is a former member of OpenAI’s Superalignment team. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The two come on to discuss what’s troubling ex-OpenAI safety team members.

Listen to the full episode on Big Technology Podcast

Spotify:  https://spoti.fi/32aZGZx
Apple:  https://apple.co/3AebxCK
Etc. https://pod.link/1522960417/

We discuss whether Saunders’ former team saw something secret and damning inside OpenAI, or whether it was a general cultural issue. Then, we talk about the ‘Right to Warn,’ a proposed policy that would give AI insiders the right to share concerning developments with third parties without fear of reprisal. Tune in for a revealing look into the eye of a storm brewing in the AI community.

Sound Bites

“I thought that this would mean that they would prioritize putting safety first. But over time, it started to really feel like the decisions being made by leadership were more like the White Star Line building the Titanic.”
“I’m more afraid that like GPT-5 or GPT-6 or GPT-7 might be the Titanic in this analogy.”
“The disturbing part of this was that the only response was reprimanding the person who raised these concerns.”
“You need to have external review as well.”
“Would you imagine it’s enough just to have something like the SEC as a way to complain about this?”
“Ideally, you can just like call up somebody at the government and say like, hey, I think this might be going like a little bit wrong. What do you think about it?”

Chapters:

00:00 Introduction and Overview
01:03 Concerns about OpenAI’s Trajectory and Prioritization of Product Development
09:59 Government Oversight and the Role of Regulatory Agencies in AI
11:36 The Challenges of Whistleblowing in the AI Industry
29:09 The Importance of External Review and Oversight
30:02 Whistleblowing and the Role of the SEC
31:57 The Need for Legislation
39:12 Accountability, Transparency, and a Culture of Safety
49:38 Assessing the Potential Risks and Timeline
52:50 Taking Proactive Measures to Ensure AI Safety
