“Shall I say thou art a man, that hast all the symptoms of a beast? How shall I know thee to be a man? By thy shape? That affrights me more, when I see a beast in likeness of a man.”
— Robert Burton, The Anatomy of Melancholy
I propose that software be prohibited from engaging in pseudanthropy, the impersonation of humans. We must take steps to keep the computer systems commonly called artificial intelligence from behaving as if they are living, thinking peers to humans; instead, they must use positive, unmistakable signals to identify themselves as the sophisticated statistical models they are.
If we don’t, these systems will systematically deceive billions in the service of hidden and mercenary interests; and, aesthetically speaking, we should act because it is unbecoming of intelligent life to suffer imitation by machines.
As numerous scholars have observed even before the documentation of the “Eliza effect” in the ’60s, humanity is dangerously overeager to recognize itself in replica: A veneer of natural language is all it takes to convince most people that they are talking with another person.
But what began as an intriguing novelty, a sort of psycholinguistic pareidolia, has escalated to purposeful deception. The advent of large language models has produced engines that can generate plausible and grammatical answers to any question. Obviously these can be put to good use, but mechanically reproduced natural language that is superficially indistinguishable from human discourse also presents serious risks. (Likewise generative media and algorithmic decision-making.)
These systems are already being presented as or mistaken for humans, if not yet at great scale — but that danger continually grows nearer and clearer. The organizations that possess the resources to create these models are not just incidentally but purposefully designing them to imitate human interactions, with the intention of deploying them widely upon tasks currently performed by humans. Simply put, the intent is for AI systems to be convincing enough that people assume they are human and will not be told otherwise.
Just as few people bother to verify the accuracy of an outdated article or deliberately crafted disinformation, few will inquire as to the humanity of their interlocutor in any commonplace exchange. These companies are counting on it and intend to abuse the practice. The widespread misconception that these AI systems are like real people, with thoughts, feelings and a general stake in existence — important things, none of which they possess — is inevitable if we do not take action to forestall it.
This is not about a fear of artificial general intelligence, or lost jobs, or any other immediate concern, though it is in a sense existential. To paraphrase Thoreau, it is about preventing ourselves from becoming the tools of our tools.
I contend that it is an abuse and dilution of anthropic qualities, and a harmful imposture upon humanity at large, for software to fraudulently present itself as a person by superficial mimicry of uniquely human attributes. Therefore, I propose that we outlaw all such pseudanthropic behaviors and require clear signals that a given agent, interaction, decision, or piece of media is the product of a computer system.
Some possible such signals are discussed below. They may come across as fanciful, even absurd, but let us admit: We live in absurd, fanciful times. This year’s serious conundrums are last year’s science fiction — sometimes not even as far back as that.
Of course, I’m under no illusions that anyone will adhere to these voluntarily, and even if they were by some miracle required to, that would not stop malicious actors from ignoring those requirements. But that is the nature of all rules: They are not laws of physics, impossible to contravene, but a means to guide and identify the well-meaning in an ordered society, and provide a structure for censuring violators.
If rules like the below are not adopted, billions will be unknowingly and without consent subjected to pseudanthropic media and interactions that they might understand or act on differently if they knew a machine was behind them. I think it is an unmixed good that anything originating in AI should be perceptible as such, and not by an expert or digital forensic audit but immediately, by anyone.
At the very least, consider it a thought experiment. It should be a part of the conversation around regulation and ethics in AI that these systems could and ought to both declare themselves clearly and forbear from deception — and that we would probably all be better off if they did. Here are a few ideas on how this might be accomplished.
1. AI must rhyme
This sounds outlandish and facetious, and certainly it’s the least likely rule of all to be adopted. But little else would as neatly solve as many problems emerging from generated language.
One of the most common venues for AI impersonation today is in text-based interactions and media. But the problem is not actually that AI can produce human-like text; rather, it is that humans try to pass off that text as being their own, or having issued from a human in some way or another, be it spam, legal opinions, social studies essays, or anything else.
There’s a lot of research being performed on how to identify AI-generated text in the wild, but so far it has met with little success and the promise of an endless arms race. There is a simple solution to this: All text generated by a language model should have a distinctive characteristic that anyone can recognize yet leaves meaning intact.
For example, all text produced by an AI could rhyme.
Rhyming is possible in most languages, equally obvious in text and speech, and is accessible across all levels of ability, learning and literacy. It is also fairly hard for humans to imitate, while being more or less trivial for machines. Few would bother to publish a paper or submit their homework in an ABABCC dactylic hexameter. But a language model will do so happily and instantly if asked or required to.
We need not be picky about the meter, and of course some of these rhymes will necessarily be slant, contrived or clumsy — but as long as it comes in rhyming form, I think it will suffice. The goal is not to beautify, but to make it clear to anyone who sees or hears a given piece of text that it has come straight from an AI.
Today’s systems seem to have a literary bent, as demonstrated by ChatGPT:
A better rhyming corpus would improve clarity and tone things down a bit. But it gets the gist across, and if it cited its sources, the user could consult them.
This doesn’t eliminate hallucinations, but it does alert anyone reading that they should be on watch for them. Of course it could be rewritten, but that is no trivial task either. And there is little risk of humans imitating AI with their own doggerel (though it may prompt some to improve their craft).
Again, there is no need to universally and perfectly change all generated text, but to create a reliable, unmistakable signal that the text you are reading or hearing is generated. There will always be unrestricted models, just as there will always be counterfeits and black markets. You can never be completely sure that a piece of text is not generated, just as you cannot prove a negative. Bad actors will always find a way around the rules. But that does not remove the benefit of having a universal and affirmative signal that some text is generated.
If your travel recommendations come in iambics, you can be pretty sure that no human bothered to try to fool you by composing those lines. If your customer service agent caps your travel plans with a satisfying alexandrine, you know it is not a person helping you. If your therapist talks you through a crisis in couplets, it doesn’t have a mind or emotions with which to sympathize or advise. Same for a blog post from the CEO, a complaint to the school board, or a hotline for eating disorders.
In any of these cases, might you act differently if you knew you were speaking to a computer rather than a person? Perhaps, perhaps not. The customer service or travel plans might be just as good as a human’s, and faster to boot. A non-human “therapist” could be a desirable service. Many interactions with AI are harmless, useful, even preferable to an equivalent one with a person. But people should know to begin with, and be reminded frequently, especially in circumstances of a more personal or important nature, that the “person” talking to them is not a person at all. The choice of how to interpret these interactions is up to the user, but it must be a choice.
If there is a solution as practical but less whimsical than rhyme, I welcome it.
2. AI may not present a face or identity
There’s no reason for an AI model to have a human face, or indeed any aspect of human individuality, except as an attempt to capture unearned sympathy or trust. AI systems are software, not organisms, and should present and be perceived as such. Where they must interact with the real world, there are other ways to express attention and intention than pseudanthropic face simulation. I leave the invention of these to the fecund imaginations of UX designers.
AI also has no national origin, personality, agency or identity — but its diction emulates that of humans who do. So, while it is perfectly reasonable for a model to say that it has been trained on Spanish sources, or is fluent in Spanish, it cannot claim to be Spanish. Likewise, even if all its training data was attributed to female humans, that does not impart femininity upon it any more than a gallery of works by female painters is itself female.
Consequently, as AI systems have no gender and belong to no culture, they should not be referred to by human pronouns like he or she, but rather as objects or systems: like any app or piece of software, “it” and “they” will suffice.
(It may even be worth extending this rule to when such a system, being in fact without a self, inevitably uses the first person. We may wish to have these systems use the third person instead, such as “ChatGPT” rather than “I” or “me.” But admittedly this may be more trouble than it is worth. Some of these issues are discussed in a fascinating paper published recently in Nature.)
An AI ought not claim to be a fictitious person, such as a name invented for the purposes of authorship of an article or book. Names like these serve wholly to identify the human behind something; using them is therefore pseudanthropic and deceptive. If an AI model generated a significant proportion of the content, the model should be credited. As for the names of the models themselves (an inescapable necessity; many machines have names, after all), a convention might be useful, such as single names beginning and ending with the same letter or phoneme — Amira, Othello, and the like.
This also applies to instances of specific impersonation, like the already common practice of training a system to replicate the vocal and verbal patterns and knowledge of an actual, living person. David Attenborough, the renowned naturalist and narrator, has been a particular target of this as one of the world’s most recognizable voices. However entertaining the result, it has the effect of counterfeiting and devaluing his imprimatur, and the reputation he has carefully cultivated and defined over a lifetime.
Navigating consent and ethics here is very difficult and must evolve alongside the technology and culture. But I suspect that even the most permissive and optimistic today will find cause for worry over the next few years as not just world-famous personalities but politicians, colleagues and loved ones are re-created against their will and for malicious purposes.
3. AI cannot “feel” or “think”
Using the language of emotion or self-awareness despite possessing neither makes no sense. Software can’t be sorry, or afraid, or worried, or happy. Those words are only used because that is what the statistical model predicts a human would say, and their usage does not reflect any kind of internal state or drive. These false and misleading expressions have no value or even meaning, but serve, like a face, only to lure a human interlocutor into believing that the interface represents, or is, a person.
As such, AI systems may not claim to “feel,” or express affection, sympathy, or frustration toward the user or any subject. The system feels nothing and has only chosen a plausible series of words based on similar sequences in its training data. But despite the ubiquity of rote dyads like “I love you/I love you too” in literature, naive users will take an identical exchange with a language model at face value rather than as the foregone outcome of an autocomplete engine.
Nor is the language of thought, consciousness, and analysis appropriate for a machine learning model. Humans use phrases like “I think” to express dynamic internal processes unique to sentient beings (though whether humans are the only ones is another matter).
Language models and AI in general are deterministic by nature: complex calculators that produce one output for each input. This mechanistic behavior can be obscured by salting prompts with random numbers or otherwise including some output-varying function, but that must not be mistaken for cogitation of any real kind. They no more “think” a response is correct than a calculator “thinks” 8 x 8 is 64. The language model’s math is more complicated — that is all.
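The determinism described above can be demonstrated in miniature. This is a toy sketch, not any real model: a pure function mapping a prompt (plus an optional random salt) to one of a fixed set of canned replies, showing that identical inputs always yield identical outputs and that apparent spontaneity is just a varied input.

```python
import random
import zlib

# Canned replies standing in for a model's output space.
REPLIES = ["It depends.", "Certainly.", "Unlikely.", "Ask again later."]

def toy_model(prompt: str, salt: int = 0) -> str:
    # A deterministic function of its input: same prompt + salt, same output.
    # A real language model is the same in principle, with far more arithmetic.
    seed = zlib.crc32(prompt.encode()) + salt  # stable across runs, unlike hash()
    return REPLIES[seed % len(REPLIES)]

# Identical inputs always produce identical outputs...
assert toy_model("will it rain?") == toy_model("will it rain?")
# ...and any "spontaneity" comes from salting the input, not from thought.
print(toy_model("will it rain?", salt=random.randrange(1000)))
```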
As such, the systems must not mimic the language of internal deliberation, or that of forming and having an opinion. In the latter case, language models simply reflect a statistical representation of opinions present in their training data, which is a matter of recall, not position. (If matters of ethics or the like are programmed into a model by its creators, it can and should of course say so.)
NB: Obviously the above two prohibitions directly undermine the popular use case of language models trained and prompted to emulate certain categories of person, from fictitious characters to therapists to caring partners. That phenomenon wants years of study, but it may be well to say here that the loneliness and isolation experienced by so many these days deserves a better solution than a stochastic parrot puppeteered by surveillance capitalism. The need for connection is real and valid, but AI is a void that cannot fill it.
4. AI-derived figures, decisions and answers must be marked⸫
AI models are increasingly used as intermediate functions in software, interservice workflows, even other AI models. This is useful, and a panoply of subject- and task-specific agents will likely be the go-to solution for a lot of powerful applications in the medium term. But it also multiplies the depth of inexplicability already present whenever a model produces an answer, a number, or binary decision.
It is likely that, in the near term, the models we use will only grow more complex and less transparent, while results relying on them appear more commonly in contexts where previously a person’s estimate or a spreadsheet’s calculation would have been.
It may well be that the AI-derived figure is more reliable, or inclusive of a variety of data points that improve outcomes. Whether and how to employ these models and data is a matter for experts in their fields. What matters is clearly signaling that an algorithm or model was employed for whatever purpose.
If a person applies for a loan and the loan officer makes a yes or no decision themselves, but the amount they are willing to loan and the terms of that loan are influenced by an AI model, that must be indicated visibly in any context those numbers or conditions are present. I suggest appending an existing and easily recognizable symbol that is not widely used otherwise, such as the signe-de-renvoi ⸫, which historically indicated removed (or dubious) matter.
This symbol should be linked to documentation for the models or methods used, or at the very least naming them so they can be looked up by the user. The idea is not to provide a comprehensive technical breakdown, which most people wouldn’t be able to understand, but to express that specific non-human, decision-making systems were employed. It’s little more than an extension of the widely used citation or footnote system, but AI-derived figures or claims should have a dedicated mark rather than a generic one.
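As a sketch of how such a mark might travel with a figure, here is a minimal, hypothetical wrapper (the names `DerivedFigure` and `loan-risk-v3` and the URL are invented for illustration): any model-derived number renders with the ⸫ mark attached and carries a pointer to documentation for the model that produced it.

```python
from dataclasses import dataclass

AI_MARK = "⸫"  # signe-de-renvoi: flags a figure as AI-derived

@dataclass
class DerivedFigure:
    value: float
    model: str          # name of the model that produced the value
    doc_url: str = ""   # where the mark should link for model documentation

    def render(self) -> str:
        # Append the mark so the figure is flagged wherever it appears.
        return f"{self.value:,.2f}{AI_MARK}"

# A loan rate influenced by a (hypothetical) risk model:
rate = DerivedFigure(7.25, model="loan-risk-v3",
                     doc_url="https://example.com/models/loan-risk-v3")
print(rate.render())  # 7.25⸫
```

In a real interface, the rendered mark would be a hyperlink to `doc_url`, fulfilling the documentation requirement above.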
There is research being done on making statements by language models reducible to a series of assertions that can be individually checked. Unfortunately, it has the side effect of multiplying the computational cost of the model. Explainable AI is a very active research area, and so this guidance is as likely as the rest to evolve.
5. AI must not make life or death decisions
Only a human is capable of weighing the considerations of a decision that may cost another human their life. After defining a category of decisions that qualify as “life or death” (or some other term connoting the correct gravity), AI must be precluded from making those decisions, or attempting to influence them beyond providing information and quantitative analysis (marked, per rule 4 above).
Of course it may still provide information, even crucial information, to the people who do actually make such decisions. For instance, an AI model may help a radiologist find the correct outline of a tumor, and it can provide statistical likelihoods of different treatments being effective. But the decision on how or whether to treat the patient is left to the humans concerned (as is the attendant liability).
Incidentally, this also prohibits lethal machine warfare such as bomb drones or autonomous turrets. They may track, identify, categorize, etc., but a human finger must always pull the trigger.
If presented with an apparently unavoidable life or death decision, the AI system must stop or safely disable itself instead. This corollary is necessary in the case of autonomous vehicles.
The best way to short-circuit the insoluble “trolley problem” of deciding whether to kill (say) a kid or a grandma when the brakes go out is for the AI agent to destroy itself instead, as safely as possible, at whatever cost to itself or indeed its occupants (perhaps the only allowable exception to the life or death rule).
It’s not that hard — there are a million ways for a car to hit a lamppost, or a freeway divider, or a tree. The point is to obviate the morality of the question and turn it into a simple matter of always having a realistic self-destruction plan ready. If a computer system acting as an agent in the physical world isn’t prepared to destroy itself or at the very least take itself out of the equation safely, the car (or drone, or robot) should not operate at all.
Similarly, any AI model that positively determines that its current line of operation could lead to serious harm or loss of life must halt, explain why it has halted, and await human intervention. No doubt this will produce a fractal frontier of edge cases, but better that than leaving it to the self-interested ethics boards of a hundred private companies.
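The halt-and-explain behavior can be sketched as a guard around every action, under the assumption that a system can produce its own harm estimate. All names here are hypothetical, not any real framework's API:

```python
class HaltForHuman(Exception):
    """Raised when an action could lead to serious harm; a human must decide."""

def guarded(action_name: str, estimated_harm: float, threshold: float = 0.0) -> str:
    # estimated_harm: the system's own probability that the action causes
    # serious harm or loss of life. Any positive determination halts the
    # system, which must record why and await human intervention.
    if estimated_harm > threshold:
        raise HaltForHuman(
            f"halted before '{action_name}': estimated harm "
            f"{estimated_harm:.0%} exceeds threshold; awaiting human review"
        )
    return action_name  # safe to proceed

try:
    guarded("dispense medication", estimated_harm=0.03)
except HaltForHuman as e:
    print(e)  # the explanation the rule requires
```

The edge cases live entirely in how `estimated_harm` is computed and what counts as a positive determination; the point of the rule is that crossing the line triggers a halt rather than an in-house ethical judgment.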
6. AI imagery must have a corner clipped
As with text, image generation models produce content that is superficially indistinguishable from human output.
This will only become more problematic, as the quality of the imagery improves and access broadens. Therefore it should be required that all AI-generated imagery have a distinctive and easily identified quality. I suggest clipping a corner off, as you see above.
This doesn’t solve every problem, as of course the image could simply be cropped to exclude it. But again, malicious actors will always be able to circumvent these measures — we should first focus on ensuring that non-malicious generated imagery like stock images and illustrations can be identified by anyone in any context.
Metadata gets stripped; watermarks are lost to artifacting; file formats change. A simple but prominent and durable visual feature is the best option right now. Something unmistakable yet otherwise uncommon, like a corner clipped off at 45 degrees, one-fourth of the way up or down one side. This is visible and clear whether or not the image is also tagged “generated” in context, saved as a PNG or JPG, or subject to any other transient quality. And unlike many watermarks, it can’t easily be blurred out; the content would have to be regenerated to remove it.
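The geometry is simple enough to sketch. Assuming the clip spans one-fourth of the image's shorter side (one reading of the proposal, not a fixed spec), the outline below could serve as a polygon mask in any imaging library:

```python
def clipped_corner_polygon(width: int, height: int) -> list[tuple[int, int]]:
    """Outline of an image with its bottom-right corner cut off at 45 degrees,
    spanning one-fourth of the shorter side. Coordinates are (x, y) with the
    origin at the top-left, as is conventional for raster images."""
    c = min(width, height) // 4  # size of the clipped corner
    return [
        (0, 0),
        (width, 0),
        (width, height - c),  # the cut begins here...
        (width - c, height),  # ...and ends here: a 45-degree diagonal
        (0, height),
    ]

print(clipped_corner_polygon(400, 400))
```

Filling this polygon as an alpha mask (e.g. with Pillow's `ImageDraw.polygon`) would blank the corner of any generated image before it is saved or displayed.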
There is still a role for metadata and things like digital chain of custody, perhaps even steganography, but a clearly visible signal is helpful.
Of course this exposes people to a new risk: that of trusting that only images with clipped corners are generated. The problem we are already facing is that all images are suspect, and we must rely entirely on subtler visual clues; there is no simple, positive signal that an image is generated. Clipping is just such a signal, and it will help delimit an increasingly commonplace practice.
Appendix
Won’t people just circumvent rules like these with non-limited models?
Yes, and I pirate TV shows sometimes. I jaywalk sometimes. But generally, I adhere to the rules and laws we have established as a society. If someone wants to use a non-rhyming language model in the privacy of their own home for reasons of their own, no one can or should stop them. But if they want to make something widely available, their practice now takes place in a collective context with rules put in place for everyone’s safety and comfort. Pseudanthropic content thus passes from a personal matter to a societal one, and from personal rules to societal ones. Different countries may have different AI rules as well, just as they have different rules on patents, taxes and marriage.
Why the neologism? Can’t we just say “anthropomorphize”?
Pseudanthropy is to counterfeit humanity; anthropomorphosis is to transform into humanity. The latter is something humans do, a projection of one’s own humanity onto something that lacks it. We anthropomorphize everything from toys to pets to cars to tools, but the difference is none of those things purposefully emulate anthropic qualities in order to cultivate the impression that they are human. The habit of anthropomorphizing is an accessory to pseudanthropy, but they are not the same thing.
And why propose it in this rather overblown, self-serious way?
Well, that’s just how I write!
How could rules like these be enforced?
Ideally, a federal AI commission should be founded to create the rules, with input from stakeholders like academics, civil rights advocates, and industry groups. My broad-strokes suggestions here are not actionable or enforceable, but a rigorous set of definitions, capabilities, restrictions and disclosures would provide the kind of guarantee we expect from things like food labels, drug claims, and privacy policies.
If people can’t tell the difference, does it really matter?
Yes, or at least I believe so. To me it is clear that superficial mimicry of human attributes is dangerous and must be limited. Others may feel differently, but I strongly suspect that over the next few years it will become much clearer that there is real harm being done by AI models pretending to be people. It is literally dehumanizing.
What if these models really are sentient?
I take it as axiomatic that they aren’t. This sort of question may eventually achieve plausibility, but right now the idea that these models are self-aware is totally unsupported.
If you force AIs to declare themselves, won’t that make it harder to detect them when they don’t?
There is a risk that by making AI-generated content more obvious, we will not develop our ability to tell it apart naturally. But again, the next few years will likely push the technology forward to the point where even experts can’t tell the difference in most contexts. It is not reasonable to expect ordinary people to perform this already difficult process. Ultimately it will become a crucial cultural and media literacy skill to recognize generated content, but it will have to be developed in the context of those tools, as we can’t do it beforehand. Until and unless we train ourselves as a culture to differentiate the original from the generated, it will do a lot of good to use signals like these.
Won’t rules like this impede innovation and progress?
Nothing about these rules limits what these models can do, only how they do it publicly. A prohibition on making mortal decisions doesn’t mean a model can’t save lives, only that we as a society should choose not to trust them implicitly to do so independent of human input. Same for the language — these do not stop a model from finding or providing any information, or performing any helpful function, only from doing so in the guise of a human.
You know this isn’t going to work, right?
But it was worth a shot.