The company was caught removing mentions of a ban on “military and warfare” from its website last week.
AI Warfare
After being caught quietly removing mentions of a ban on “military and warfare” from its usage policies page, OpenAI is working with the Pentagon.
As Bloomberg reports, the company confirmed that it’s working with the US Defense Department on open-source cybersecurity software and is also looking into ways to prevent veteran suicide with its tech.
OpenAI VP of global affairs Anna Makanju revealed the company’s reversal in its willingness to work with the military during a Bloomberg talk at the World Economic Forum in Davos, Switzerland, this week.
“Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” she said, as quoted by Bloomberg.
However, OpenAI is keeping in place a ban on its tech being used to develop weapons or harm people.
Weapons Ban
The news comes after The Intercept noticed that OpenAI quietly yanked mentions of a ban on “military and warfare” from its “usage policies” last week. According to the revised webpage, the changes are meant to make the policies “clearer” and “more readable.”
“Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,’ is disallowed,” OpenAI spokesperson Niko Felix told The Intercept.
It’s an especially notable change considering OpenAI’s largest investor, Microsoft, already has several high-profile contracts with the US military in place.
OpenAI has also revealed it’s collaborating with the US Defense Advanced Research Projects Agency (DARPA).
“We are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on,” an OpenAI spokesperson told The Register.
“It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies,” they added. “So the goal with our policy update is to provide clarity and the ability to have these discussions.”
Critics of the change warn that OpenAI’s revised policies could still technically allow for problematic uses of its AI technologies in war zones.
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, managing director of the AI Now Institute, told The Intercept. “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”
More on OpenAI: Sam Altman Says Human-Tier AI Is Coming Soon