There truly is no such thing as Artificial Intelligence. But, but ChatGPT, you cry! But self-driving cars! But The Algorithm! None of these are even close to intelligent. But how do we know?
When I was teaching, I used to have fantastic conversations with my year 11 class, at the start of our Artificial Intelligence (AI) unit, on what it means to be intelligent. If I let them, those conversations could have gone on indefinitely, because, actually, intelligence is surprisingly tricky to define. Is it the ability to learn? To recognise oneself in a mirror? To use tools? To adapt to change? To be creative? Maybe all of the above?
It turns out that there is no definitive answer. No simple “if it does this, it’s definitely intelligent” criterion. Some basics are generally agreed upon, though. Intelligence requires, among other things, the ability to learn, to adapt to new situations, and to be creative.
In a thought experiment, Alan Turing famously proposed what is now called The Turing Test, in which a person has a conversation by typing into a terminal, not knowing whether they are talking to a computer or another person. If they are conversing with a computer program and can’t tell that it’s not a human, then that program is said to have passed the Turing Test. The easiest way of passing the Turing Test, incidentally, is not to make your program smarter, but to slow down its typing and add typos.
This has often been held up as the true test of Artificial Intelligence, but is plausible human conversation actually sufficient to be considered intelligent? ChatGPT might, in some cases, pass the Turing Test, but it has been described by researcher Emily Bender as a Stochastic Parrot – meaning it parrots back a calculated set of word patterns and forms that it has seen before, rather than creating meaning of its own. And, indeed, it doesn’t take an expert very long to hit ChatGPT’s boundaries and have it produce responses that are clearly not intelligent.
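To get a feel for what “parroting back word patterns” means, here is a deliberately crude sketch of the idea: a toy text generator that records which word followed which in some training text, then produces new text by sampling from those observed patterns. (This is a far simpler mechanism than the neural network behind ChatGPT – the training text and function names are mine, purely for illustration – but the underlying principle is the same: the next word is chosen because it is statistically plausible, not because the program means anything by it.)

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn which word follows which in a training
# text, then generate by sampling from those observed patterns.
training_text = "the cat sat on the mat and the cat ate the fish"

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)  # record every observed next word

def parrot(start, length=8):
    """Generate plausible-looking text with no understanding at all."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor in training
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))
```

Every word pair this produces appeared somewhere in the training text, so the output looks locally fluent – and that local fluency, scaled up enormously, is what makes ChatGPT sound so convincing.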
ChatGPT can do one thing – answer questions. In truth, it does it remarkably well, or, rather, remarkably plausibly. It might even seem intelligent at times, providing realistic sounding answers to almost anything you can think of, while not actually having more than a passing acquaintance with truth.
For example, when asked to write my bio, it said, among other things, that I completed my Ph.D. in Computer Science at the University of Queensland, where my research focused on using machine learning to identify patterns in large datasets. The only part of this that is true is that I completed my PhD. It was at Monash, not UQ, and my research focused on introductory programming education. ChatGPT is designed to sound plausible. Not to be accurate. Not to be intelligent. Just plausible. (Not unlike some politicians).
There are many systems which claim to use AI. From detection of certain cancers to predictive policing, all of the systems that claim to be “AI” are, at best, very good at doing a single, specific task. (Some, like predictive policing, are actually dreadfully bad at their single, specific task, with disastrous outcomes.)
Even the systems that, say, detect lung cancer on CT scans cannot also be used to detect pancreatic cancer. They are trained on very specific datasets, for extremely constrained and limited purposes, and cannot extend and adapt their results to circumstances even marginally different. This is not intelligence.
This is also one of the reasons self-driving cars are so tricky. Here, again, we have systems that are frequently mislabelled “intelligent”, but they are extremely brittle: they work very well in situations they have seen before, but they are easily broken by weirdly simple, unexpected things – like a few well-placed stickers on a stop sign that would never confuse a human driver.
There are fabrics that fool facial recognition systems, and computer surveillance systems that can be defeated using a cardboard box, which is very Wallace and Gromit, but not very intelligent. A Go-playing AI, KataGo, was recently defeated in 14 out of 15 games by another AI that simply played very bad moves. The system was trained on well-played games, not bad ones.
Thinking back to the questions my year 11s used to ask gives us a clue as to why AI is so hard. Intelligence is complex, and multifaceted. It’s (relatively) simple to create a program to solve one specific type of problem. Harder, but still doable, to write a program to simulate some specific aspects of intelligence, one at a time. But creating a program that has all of the different aspects of intelligence in one package turns out to be, as yet, an entirely unsolved problem.
A system that most of us would think of as real AI – something that can, more or less, think like us – is known in Computer Science as Artificial General Intelligence, and it is nowhere on the horizon. The term Artificial Intelligence is used instead to apply to anything produced using techniques designed in the quest for real AI. It’s not intelligent. It just does some stuff that AI researchers came up with, and that might look a bit smart. In dim light. From the right angle. If you squint.
Most of these systems use some kind of what we call “machine learning”. “Learning” is, again, something of a misnomer. They’re not really doing what we think of as learning, which should involve understanding. They’re just getting progressively better, with feedback, at one very specific task.
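That “progressively better, with feedback” loop can be shown in a few lines. The sketch below – with made-up numbers, purely for illustration – adjusts a single number until the program is good at one tiny task: multiplying by three. Each round, it makes a prediction, measures how wrong it was, and nudges its number to reduce the error. No understanding is involved anywhere.

```python
# Minimal sketch of machine "learning": fit a single weight w so that
# the prediction w * x matches the hidden rule y = 3 * x.
# The program gets measurably better at this one task, and nothing else.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

w = 0.0    # initial (wrong) guess
lr = 0.01  # learning rate: how big each correction step is

for _ in range(200):             # repeat: predict, measure error, adjust
    for x, y in data:
        error = (w * x) - y      # the feedback signal
        w -= lr * error * x      # nudge w in the direction that shrinks the error

print(round(w, 3))  # converges towards 3.0
```

After 200 passes over the data, `w` lands very close to 3 – but ask this “learner” to add, or to multiply by four, and it is helpless. That narrowness is exactly the point.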
The trouble with machine learning systems is that we don’t always know what they have learned. Sometimes they have not learned the things we intended them to. For example, an AI trained to detect covid in CT scans of patients’ lungs turned out to be detecting which patients were lying down (because they were likely to be sicker than patients who could be scanned while standing), rather than which patients had covid.
Even voice recognition AI struggles at times. I’m currently staying in a highly automated house, with friends, and this morning my “Hey Google, turn the A/C off” was greeted with “Sure, I’ll tell you a joke…” And while it’s true that many people will tell you that my accent is a little unusual, Google has had plenty of time to get used to it, and I don’t really see “turn the A/C off” as sounding like “tell me a joke” – even in my accent.
It’s a shame, really, that the term AI has morphed into referring to systems that are really quite horribly dumb. And even if we don’t have to worry about AI becoming sentient and taking over the world any time soon, there are plenty of dangers in the cavalier way we use AI and machine learning. We tend to trust them too easily, and fail to evaluate them critically. That’s why it’s so important that kids learn about technology, including how to be rationally sceptical of it. It’s a big focus of my work at the Australian Data Science Education Institute, and an important goal for our education system, so that the whole of society can have a voice in how we use AI, and where it takes us.
We can say with certainty that there is no such thing as Artificial Intelligence. At some point in the future there might be, but despite all of the hype, it’s not imminent, and it certainly doesn’t exist yet. We don’t even have any good evidence that it’s possible to create true AI, though, equally, there’s no reason to believe that it isn’t. Our brains are physical things, just biological computers, really. A mass of electrical connections bathed in a sea of hormones. We should be able to puzzle them out and mimic them in some way. Truthfully, though, we are a surprisingly long way from being able to do that in any kind of meaningful fashion.
Meanwhile, next time someone around you is panicking about computers becoming more intelligent than us, you can set them straight!