
Yepic fail: This startup promised not to make deepfakes without consent, but did anyway


U.K.-based startup Yepic AI claims to use “deepfakes for good” and promises to “never reenact someone without their consent.” But the company did exactly what it claimed it never would.

In an unsolicited email pitch to a TechCrunch reporter, a representative for Yepic AI shared two “deepfaked” videos of the reporter, who had not given consent to having their likeness reproduced. Yepic AI said in the pitch email that it “used a publicly available photo” of the reporter to produce two deepfaked videos of them speaking in different languages.

The reporter requested that Yepic AI delete the deepfaked videos it created without permission.

Deepfakes are photos, videos or audio created by generative AI systems that are designed to look or sound like an individual. Deepfakes are not new, but the proliferation of generative AI systems allows almost anyone to make convincing deepfaked content of anyone else with relative ease, including without their knowledge or consent.

On a webpage titled “Ethics,” Yepic AI said: “Deepfakes and satirical impersonations for political and other purposed [sic] are prohibited.” The company also said in an August blog post: “We refuse to produce custom avatars of people without their express permission.”

It’s not known whether the company has generated deepfakes of anyone else without permission; the company declined to say.

When reached for comment, Yepic AI chief executive Aaron Jones told TechCrunch that the company is updating its ethics policy to “accommodate exceptions for AI-generated images that are created for artistic and expressive purposes.”

In explaining how the incident happened, Jones said: “Neither I nor the Yepic team were directly involved in the creation of the videos in question. Our PR team have confirmed that the video was created specifically for the journalist to generate awareness of the incredible technology Yepic has created.”

Jones said the videos, and the image used to create them, were deleted.

Predictably, deepfakes have tricked unsuspecting victims into falling for scams, evading moderation systems to coax away their crypto or personal information. In one case, fraudsters used AI to spoof the voice of a company’s chief executive in order to trick staff into making a fraudulent transaction worth hundreds of thousands of euros. And before deepfakes became popular with fraudsters, people used them to create nonconsensual pornography victimizing women: realistic-looking sex videos built from the likenesses of women who had never consented to appear in them.
