Five prominent US senators have sent a letter to OpenAI CEO Sam Altman, seeking clarity on the company’s safety and employment practices.
The letter – signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr. – comes in response to recent reports questioning OpenAI’s commitment to its stated goals of safe and responsible AI development.
The senators emphasise the importance of AI safety for national economic competitiveness and geopolitical standing. They note OpenAI’s partnerships with the US government and national security agencies to develop cybersecurity tools, underscoring the critical nature of secure AI systems.
“National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable,” the letter states.
The lawmakers have requested detailed information on several key areas by 13 August 2024. These include:
- OpenAI’s commitment to dedicating 20% of its computing resources to AI safety research.
- The company’s stance on non-disparagement agreements for current and former employees.
- Procedures for employees to raise cybersecurity and safety concerns.
- Security protocols to prevent theft of AI models, research, or intellectual property.
- OpenAI’s adherence to its own Supplier Code of Conduct regarding non-retaliation policies and whistleblower channels.
- Plans for independent expert testing and assessment of OpenAI’s systems pre-release.
- Commitment to making future foundation models available to US Government agencies for pre-deployment testing.
- Post-release monitoring practices and learnings from deployed models.
- Plans for public release of retrospective impact assessments on deployed models.
- Documentation on meeting voluntary safety and security commitments to the Biden-Harris administration.
The senators’ inquiry touches on recent controversies surrounding OpenAI, including reports of internal disputes over safety practices and alleged cybersecurity breaches. They specifically ask whether OpenAI will “commit to removing any other provisions from employment agreements that could be used to penalise employees who publicly raise concerns about company practices.”
This congressional scrutiny comes at a time of increasing debate over AI regulation and safety measures. The letter references the voluntary commitments made by leading AI companies to the White House last year, framing them as “an important step towards building this trust” in AI safety and security.
The political backdrop could also shift: Kamala Harris may become the next US president following the election later this year. Speaking at the AI Safety Summit in the UK last year, Harris said: “Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI-enabled myths and disinformation.”
Chelsea Alves, a consultant with UNMiss, commented: “Kamala Harris’ approach to AI and big tech regulation is both timely and critical as she steps into the presidential race. Her policies could set new standards for how we navigate the complexities of modern technology and individual privacy.”
OpenAI’s response to the senators’ inquiries could have significant implications for the future of AI governance and for the relationship between tech companies and government oversight bodies.