Peters says that the creators of the images—and any people who appear in them—have consented to having their art used in the AI model. Getty is also offering creatives a Spotify-style compensation model for the use of their work.
The fact that creatives will be compensated in this way is good news, says Jia Wang, an assistant professor at Durham University in the UK, who specializes in AI and intellectual-property law. But it might be tricky to work out which source images contributed to a given AI-generated image, and therefore who should be compensated for what, she adds.
Getty’s model is only trained on the firm’s creative content, so it does not include imagery of real people or places that could be manipulated into deepfake imagery.
“The service doesn’t know who the pope is and it doesn’t know what Balenciaga is, and they can’t combine the two. It doesn’t know what the Pentagon is, and [that] you’re not gonna be able to blow it up,” says Peters, referring to recent viral images created by generative AI models.
As an example, Peters types in a prompt for the president of the United States, and the AI model generates images of men and women of various ethnicities, wearing suits and standing in front of the American flag.
Tech companies claim that AI models are so complex they can’t be built without copyrighted content, and they point out that artists can opt out of having their work used, but Peters calls those arguments “bullshit.”
“I think there are some really sincere people that are actually being thoughtful about this,” he says. “But I also think there’s some hooligans that just want to go for that gold rush.”