AI’s weird and uncanny mistakes reveal the gaps in how I perceive intelligence
I’m used to seeing human-like intellectual capabilities together as a bundle, what I consider human intelligence. To feel the presence of some part of intelligence without the rest is weird.
ART∩CODE is now on Substack. If you’re not expecting this email, please see my note at the end.
Do you remember four years ago, when we first saw those AI-generated photos of people’s faces and were told “this person does not exist”?
I remember the disorientation of that moment. It seemed incredible that an AI had acquired such a deep knowledge of the complexities of the human face, along with the capability to render it with photographic realism. Confronted only with the flawless images, it was easy to jump to that conclusion.
Those images were created with an AI model called StyleGAN. It’s the same family of model I’ve been using in a lot of my work, including the video I shared last month, “Cosmic Insignificance Therapy”.
These days, I'm more familiar with its mistakes.
StyleGAN’s mistakes aren’t simple factual errors like colouring the pupils green or drawing the eyes too big. It seems to completely lose its grasp of what physical reality looks like. Its mistakes fall into the uncanny valley, that disturbing gulf between the cute and the lifelike where zombies and ghosts belong.
The weirdness of deconstructed intelligence
the weird is a particular kind of perturbation. It involves a sensation of wrongness: a weird entity or object is so strange that it makes us feel that it should not exist, or at least it should not exist here. Yet if the entity or object is here, then the categories which we have up until now used to make sense of the world cannot be valid.
— Mark Fisher, The Weird and the Eerie
I think StyleGAN’s uncanny mistakes can be disturbing because they violate my gut assumptions about how intelligence manifests in this world. I’m used to seeing human-like intellectual capabilities together as a bundle, what I consider human intelligence. If a human can draw photorealistic faces, I might assume they have mastered many other intellectual abilities, like a deep sensitivity to human physiology and how it exists in physical reality.
But the sight of that woman’s face slowly degenerating through deformity into smudges reveals a thinking process lacking these abilities. To feel the presence of some part of intelligence without the rest is weird. It disrupts my assumptions of what I can expect from reality, shaking me into a world where something approximating human-like intelligence can arise from unrecognisable ingredients. It’s scary in the same way as a zombie, which has enough agency to animate a human corpse into violence but lacks the capacity for compassion, reason or pain that might stop it.
The danger of seeing intelligence as a spectrum
StyleGAN takes one piece of the intelligence bundle, isolates it and amplifies it. Those initial uncanny images forcefully unbundled my conception of intelligence. This feels like an important experience.
StyleGAN is already over four years old. But I’ve spoken to a few people who I think might be experiencing a similar uncanny reckoning with ChatGPT: an initial overwhelm at its impressive abilities, followed by disappointment on discovering its limitations. In some cases this is met with a sense of vindictive relief. Maybe this AI is not so intelligent after all. Thank God for that.
There are indeed good reasons to talk down the “intelligence” of AI. For example, last month the Romanian Prime Minister Nicolae Ciuca unveiled an “AI advisor” called Ion, which will eventually inform the government of the thoughts of the population so it can make better decisions. The Guardian reported the story with zero critical analysis, after a press launch featuring Ciuca speaking to a mirror that was pretending to be an AI.
To give Ion a human name and call it an AI implies some kind of intellectual authority. It leans into the intuition that intelligence is a singular trait, and so one intellectual capability implies all the others. But Ion is, at best, a software package for statistical analysis, and possibly little more than a PR stunt. And call me a cynic, but in my experience statistics tend to be used by those in power to justify their decisions rather than inform them. I can’t imagine a more poignant image of a government AI advisor than the PM speaking into a mirror.

But let’s take care. Even if ChatGPT is little more than turbocharged statistical analysis of the web, to call it “unintelligent” is as problematic as calling it “intelligent”. It can likely do things - intellectual things - beyond what we can even dream up right now. It may have just a slice of the intellectual capabilities of a human, but it scales to a capacity that gives qualitatively different results.
For example, ChatGPT can code, test its code to see if it works, modify that code and then iteratively build up a piece of software much like a human coder. But unlike human coders, thousands of instances of ChatGPT can be spawned to work in parallel for a relatively tiny cost. Someone can (and so probably will) give it an internet connection, a list of their enemies and ask it to discover new hacking techniques to dig up dirt on them all.
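As a rough sketch of what that fan-out can look like in practice, here’s a minimal example using the OpenAI Python library as it exists at the time of writing; the tasks, worker count and model name are illustrative assumptions, not a real workload:

```python
# A sketch of spawning many ChatGPT "instances" in parallel.
# The tasks, worker count and model name are illustrative.
from concurrent.futures import ThreadPoolExecutor

import openai

openai.api_key = "sk-..."  # your API key here

def ask_chatgpt(task: str) -> str:
    """Send one task to the API and return the model's reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": task}],
    )
    return response["choices"][0]["message"]["content"]

# A thousand tasks, farmed out fifty at a time. The API keeps no state
# between requests, so each call behaves like an independent "instance".
tasks = [f"Suggest a way to test module {i} of a codebase." for i in range(1000)]
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(ask_chatgpt, tasks))
```

Scaling from fifty workers to five thousand is a parameter change and a bigger API bill, not a hiring process. That is what makes it qualitatively different from human labour.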
It’s difficult to anticipate how far it would get. Ten years ago, many were expecting self-driving cars to be the dominant mode of transport by now. Sometimes, you need far more parts of the bundle we call intelligence than it seems at first. This may also be true of ChatGPT coding, but its limited capacity for original critical analysis tells us much less about what else it can or cannot do than the same limitation would in a human.
Intelligence is far more complex than a bundle of intellectual capabilities, but I’m finding the bundle a more useful analogy than the ascending ladder of abilities with humans at the top. Some capabilities that were previously present only in humans are now in machines. Others are coming into existence that we’ve never seen before.
Tim
Montreal, 21 April 2023
This is the ART∩CODE newsletter from Tim Murray-Browne.
I’ve just moved us from Mailchimp to Substack. I think Substack emails are less likely to end up in Spam, so if you’re not expecting this, then maybe I’ve been emailing your spam folder for a few years. If you don’t want to receive it, there should be an unsubscribe link below. If you’d like to know how you joined this list, email me and I’ll let you know.
"Ten years ago, many were expecting self-driving cars to be the dominant mode of transport by now."
Did they though?
I'm pretty sure once we got a few years into the new millennium, most people gave up on thinking about the future because it was becoming painfully obvious that nothing had really changed that much in the real world.
We're now into the 4th decade of the 90s... all our "advancements" only exist inside a glowing rectangle.
Also: AI has been a thing since the 50s. The version the public is allowed to access has likely been around for decades. I'm partially joking when I say this, but also kinda not: there's likely a good reason why when you ask ChatGPT to be "creative", it comes up with stuff that sounds like it's the 50s and 60s.
Lovely article - I am enjoying these.
In addition to the sensation of weird, I also feel a sense of hollowness to AI generated images. The technique can be very impressive, but the expression, for me, is not. I am continually surprised by our willingness to divorce intelligence from emotion and experience. I think this is something common in programming circles, where intelligence is seen as synonymous with logic, and the measurable ways of using logic. I also think it is subtly, or perhaps not so subtly, dehumanising.
Noam Chomsky - as contrarian as ever - had quite withering words for the idea that GPT is in any sense intelligent. I would be curious what you think of this. Quoted from the NYT article "The False Promise of ChatGPT".
"Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
"In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity."