
AI isn’t and won’t soon be evil or even smart, but it’s also irreversibly pervasive


Artificial intelligence – or rather, the variety based on large language models we’re currently enthralled with – is already in the autumn of its hype cycle, but unlike crypto, it won’t just disappear into the murky, undignified corners of the internet once its ‘trend’ status fades. Instead, it’s settling into a place where its use is already commonplace, even for purposes for which it’s frankly ill-suited. Doomerism would have you believe that AI will get so smart it’ll enslave or sunset humanity, but the reality is that it’s much more threatening as an omnipresent layer of error and hallucination that seeps into our shared intellectual groundwater.

The doomerism vs. e/acc debate continues apace, with all the grounded, fact-based arguments on either side that you can expect from the famously down-to-earth Silicon Valley elites. Key context for any of these figures of influence is to remember that they spend their entire careers lauding or decrying the extreme success or failure of whatever tech they’re betting on or against – only to have said technology usually fizzle well short of either the perfect or the catastrophic state. Witness everything, always, forever, but if you’re looking for specifics, self-driving is a very handy recent example, as are VR and the metaverse.

Utopian vs. dystopian debates in tech always do what they’re actually intended to do, which is distract from real conversations about the real, current-day impact of technology as it’s actually deployed and used. AI has undoubtedly had a massive impact, particularly since the introduction of ChatGPT just over a year ago, but that impact isn’t about whether we’ve unwittingly sown the seeds for a virtual deity; it’s about how ChatGPT proved far more popular, more viral and more sticky than its creators ever thought possible – even while its capabilities matched their relatively humble expectations.

Use of generative AI, according to most recent studies, is fairly prevalent and growing, especially among younger users. The leading uses aren’t novelty or fun, per a recent Salesforce study of use over the past year; instead, the technology is overwhelmingly being used to automate work-based tasks and communications. With a few rare exceptions, like when it’s used to prepare legal arguments, the consequences of some light AI hallucination in generating these communications and corporate drudgery are insignificant, but it’s also undoubtedly producing a digital stratum of easy-to-miss factual errors and minor inaccuracies.

That’s not to say people are particularly good at disseminating information free of factual error; rather the opposite, actually, as we’ve seen via the rise of the misinformation economy on social networks, particularly in the years leading up to and including the Trump presidency. Even leaving aside malicious agendas and intentional acts, error is just a baked-in part of human belief and communication, and as such has always pervaded shared knowledge pools.

The difference is that LLM-based AI models introduce error casually, constantly, and without self-reflection, and that they do so with a sheen of authoritative confidence that users are susceptible to because of many years of relatively stable, factual and reliable Google search results (admittedly, ‘relatively’ is doing a lot of work here). Early on, search results and crowdsourced online pools of information were treated with a healthy dose of critical skepticism, but years or even decades of fairly reliable info delivered by Google search, Wikipedia and the like have short-circuited our distrust of things that come back when we type a query into a text box on the internet.

I think the results of having ChatGPT and its ilk produce a massive volume of questionably accurate content for menial everyday communication will be subtle, but they’re worth investigating and potentially mitigating, too. The first step would be examining why people feel they can entrust so much of this stuff to AI in its current state to begin with; with any widespread task automation, the primary focus of inquiry should probably be the task, not the automation. Either way, the real, impactful changes that AI brings are already here, and while they don’t look anything like Skynet, they’re more worthy of study than possibilities that rely on techno-optimistic dreams coming true.


New Study Unveils Hidden Vulnerabilities in AI

Distributional wants to develop software to reduce AI risk
