That’s not good.
Dream Within a Dream
The people building the next iteration of AI technology are growing concerned with how lifelike the next generation of generative content has already become.
In an interview with Axios, an unnamed "leading AI architect" said that in private tests, experts can no longer distinguish AI-generated imagery from the real thing, a milestone nobody expected to arrive this soon.
According to the report, AI insiders expect this kind of technology to be available for anyone to use or purchase in 2024, even as social media companies weaken their disinformation policies and slash the departments that work to enforce them.
This kind of anonymous sourcing should, of course, be taken with a grain of salt. Whoever gave Axios that tidbit may well have a vested interest in marketing scary-yet-tempting generative AI tech, or they might just be another AI industry booster who’s gotten high on their own supply.
But with an almost certainly contentious presidential election coming up and the latest Israel-Hamas conflict already acting as a battleground for AI disinformation, there certainly is cause for legitimate concern.
Regulatory Blues
We've known for a while now that AI image generators are rapidly becoming sophisticated enough to fool casual viewers, and experts have spent much of 2023 ringing alarm bells about how much more unsettling that effect is likely to become.
Indeed, President Joe Biden apparently got really freaked out by the prospect of killer AI when watching the new “Mission: Impossible” — a sequence of events that just happened to occur before the White House issued an expansive-yet-vague executive order about AI.
“If he hadn’t already been concerned about what could go wrong with AI before that movie,” deputy White House chief of staff Bruce Reed told PBS last week, “he saw plenty more to worry about.”
While we're admittedly very far from the kind of evil godlike algorithms depicted in the new "Mission: Impossible," the Biden White House and Congress have suggested that in the short term, watermarks on AI-generated videos can help us tell fake from real. According to experts, however, watermarking doesn't really work: it's easy to fake, and just as easy to strip or break.
We may well look back on 2023 as the year that AI began taking over pretty much everything. And if the warnings of Axios' unnamed source are to be believed, our current concerns may soon feel quaint.
More on AI: Unemployed Man Uses AI to Apply for 5,000 Jobs, Gets 20 Interviews