
Are Those TV Doctors Real? The Deepfake Scam Explained


TV Doctor Art Concept Illustration

Popular UK TV doctors are being digitally impersonated through deepfake technology to falsely endorse health products on social media, as reported by The BMJ. This misleading use of AI involves cloning digital likenesses onto other bodies, making the fake endorsements seem genuine. Credit: SciTechDaily.com

The BMJ reports that deepfake technology is being used to create fraudulent endorsements by famous UK TV doctors for health products on social media, complicating efforts to identify and eliminate these misleading videos.

Some of the UK’s most recognizable TV doctors are increasingly being “deepfaked” in videos to sell scam products across social media, finds an investigation published by The BMJ.

Trusted names including Hilary Jones, Michael Mosley, and Rangan Chatterjee are being used to promote products claiming to fix high blood pressure and diabetes, and to sell hemp gummies, explains journalist Chris Stokel-Walker.

Deepfaking is the use of artificial intelligence (AI) to map a digital likeness of a real-life human being onto a video of a body that isn’t theirs. Reliable evidence on how convincing it is can be hard to come by, but one recent study suggests that up to half of all people shown deepfakes talking about scientific subjects cannot distinguish them from authentic videos.

The Economics of Deepfake Misinformation

John Cormack, a retired doctor based in Essex, worked with The BMJ to try to capture a sense of the scale of so-called deepfaked doctors across social media.

“The bottom line is, it’s much cheaper to spend your cash on making videos than it is on doing research and coming up with new products and getting them to market in the conventional way,” he says.

The slew of questionable content on social media co-opting the likenesses of popular doctors and celebrities is an inevitable consequence of the AI revolution we’re currently living through, says Henry Ajder, an expert on deepfake technology. “The rapid democratization of accessible AI tools for voice cloning and avatar generation has transformed the fraud and impersonation landscape.”

Combating Deepfake Exploitation

“There’s been a significant increase in this kind of activity,” says Jones, who employs a social media specialist to trawl the web for deepfake videos that misrepresent his views and tries to take them down. “Even if you do, they just pop up the next day under a different name.”

A spokesperson for Meta, the company that owns both Facebook and Instagram, on which many of the videos found by Cormack were hosted, told The BMJ: “We will be investigating the examples highlighted by the British Medical Journal. We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement. We encourage anyone who sees content that might violate our policies to report it so we can investigate and take action.”

Challenges and Solutions in Identifying Deepfakes

Deepfakes work by preying on people’s emotions, writes Stokel-Walker, and when it comes to medical products, that emotional connection with the person telling you about a supposed wonder drug matters all the more.

Someone you don’t know trying to sell you on the virtues of a particular treatment may raise suspicions. But if they’re someone you’ve seen before on social media, television or radio, you’re more likely to believe what they’re saying.

Spotting deepfakes can be tricky too, says Ajder, as the technology has improved. “It’s difficult to quantify how effective this new form of deepfake fraud is, but the growing volume of videos now circulating would suggest bad actors are having some success.”

For those whose likenesses are being co-opted, there’s seemingly very little they can do about it, but Stokel-Walker offers some tips on what to do if you find a deepfake. For instance, take a careful look at the content to make sure your suspicions are well founded, then leave a comment questioning its veracity. Use the platform’s built-in reporting tools to voice your concerns, and finally report the person or account that shared the post.

Reference: “Deepfakes and doctors: How people are being fooled by social media scams” by Chris Stokel-Walker, 17 July 2024, The BMJ.
DOI: 10.1136/bmj.q1319


