
Netflix True Crime Producer Responds to AI Allegations in “What Jennifer Did” Documentary


Over the weekend, Futurism discovered some very strange pictures in the new Netflix true crime documentary "What Jennifer Did": images with garbled hands and other details that are hallmarks of AI-generated or AI-manipulated imagery.

The film tells the story of Jennifer Pan, a woman convicted of orchestrating a 2010 murder-for-hire plot against her parents in Canada. She is now serving a life sentence in prison.

The images, which appear around the 28-minute mark of the documentary, include morphed fingers, bizarre and misshapen facial features, and garbled background objects.

Questions abound. Did the film’s producers use existing archival images of Pan to generate new ones? Or were AI tools used to edit an existing image? Or do the images look like AI, but actually have another explanation?

Now the documentary's executive producer, Jeremy Grimaldi, has weighed in via an interview with the Toronto Star, but his remarks are hard to parse and make no direct mention of AI.

“Any filmmaker will use different tools, like Photoshop, in films,” he said.

“The photos of Jennifer are real photos of her,” he added. “The foreground is exactly her. The background has been anonymized to protect the source.”

Grimaldi's comments are extremely vague on a core point: exactly which tools the team used to "anonymize" the images, and whether any of them involved AI. And when he says the foreground is "exactly her," does that include her mangled fingers and teeth?

It's hard to know what to make of Grimaldi's remarks, and the Star doesn't seem to have pushed hard with follow-up questions. (Futurism has reached out to both Grimaldi and Netflix, but neither has responded.)

Regardless of intent, the apparent use of AI-manipulated images in a true crime documentary has stirred a heated debate, with viewers and fellow documentarians accusing Netflix of distorting the historical record by failing to disclose the use of AI, which they say could set a dangerous precedent.

“I don’t want to think how else they could use AI images in true crime documentaries or ANY type of documentary — it’s insane,” one user on the TrueCrimeDiscussion subreddit wrote. “They should definitely disclose when an image is AI- a watermark, caption etc!”

“They shouldn’t use AI at all when it comes to stuff like this,” another user remarked.

Others slammed Netflix for airing “cash grab” documentaries.

“Netflix has a long history of airing true crime docs with dubious standards of journalistic ethics,” one redditor wrote.

“Exploitative true crime sucks, generative AI sucks, everything about this sucks,” another exasperated user added.

We’ve already seen our fair share of AI-generated content being used in films and TV. A recent episode of HBO’s “True Detective,” for instance, featured bizarre, AI-generated posters in the background of a shot.

But unlike set dressing in a fictional story, the archival images of Pan are being presented as the real thing, given the absence of any kind of disclosure.

That could set a dangerous precedent for the use of AI in documentaries. As 404 Media reports, right around the time we published our story on Sunday, filmmakers were gathering to discuss guidelines for using generative AI safely and responsibly.

“One of the things we’ve realized is once a piece of media exists, even if it is disclosed [that it’s AI generated], it can then be lifted out of any documentary, make its way onto the internet and into other films, and then it’s forever part of the historic record,” documentarian and Archival Producers Alliance co-founder Rachel Antell told 404 Media.

“If it’s being represented as this is a picture of this person, then that’s what’s going into the historic record,” she added. “And it’s very hard to pull that back.”

More on the story: Netflix Uses Seemingly AI-Manipulated Images in True Crime Doc
