Every day there is a flurry of new articles about Artificial Intelligence, particularly chatbots. These articles talk about how transformative chatbots will be, how much they are already helping people. One article on the ABC news website today said “the health and social benefits are clear,” which seems a touch delusional.
The same article also describes the downsides of AI as obvious, “whether it’s replacing workers, stifling creativity and critical thinking, or wiping out humanity altogether.” This is missing a very significant downside indeed.
The overblown hysteria about the dangers of AI becoming sentient and killing us all, which is deliberately and manipulatively being whipped into a frenzy, too often obscures the actual risks. Google’s CEO, Sundar Pichai, talks about chatbots displaying “emergent behaviour”, as if the very existence of AIs that are plausible conversationalists is proof that we are on the cusp of creating dangerously intelligent beings.
And yet plausible is literally all they are.
As I have noted in the past, these systems are not intelligent. They do not think. They do not understand language. They literally choose a statistically likely next word, using the vast amounts of text they have cheerfully stolen from the internet as their source.
They create, if you can call it creation, a plausible sentence from all the sentences they have seen before, but it is as meaningful, and as accurate, as a bunch of garbage stuffed into a blender and poured out at random. Sure, it’s then filtered for maximum statistical likelihood. But it is never filtered for truth, relevance, or harm. You can’t use chatbots to fact check things – they don’t know what’s true.
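To make the mechanism concrete, here is a toy sketch of “choose a statistically likely next word”: a bigram model built from a tiny made-up corpus. Real chatbots use enormous neural networks rather than simple word counts, and the corpus here is invented for illustration, but the core move is the same, and note that nothing in this process ever checks whether the output is true.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus, standing in for the vast text a chatbot trains on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a "plausible" sentence: every word is statistically likely given
# the previous one, but truth and meaning are never consulted.
word, sentence = "the", ["the"]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Every adjacent pair in the output genuinely occurred in the training text, which is exactly why the result looks plausible while guaranteeing nothing about accuracy.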
There is no viable path from this statistical threshing machine to an intelligent system. You cannot refine statistical plausibility into independent thought. You can only refine it into increased plausibility.
Chatbots such as ChatGPT and Bard are an evolutionary dead end. They are not “on the cusp of a breakthrough”. They are fabulous marketing tools. Wonderfully sparkly smoke and mirror machines. There might even be some useful things they can do (though I am sceptical). But they will not evolve into intelligence. They can’t. They are not on a path to intelligence. (This is not to say there are not other AI systems using different approaches that might be closer, but I’ve seen no evidence, yet, that anything better exists.)
Worse than that, they are already actively causing harm by increasing the amount of misinformation in the world on a truly horrifying scale. They are not designed to output truth. They are designed to output shiny baubles of text. Things that look true. Things we’d like to be true. Things we can easily believe. We certainly don’t need more of that.
It worries me how much of the commentary misses this crucial point. We talk about threats, but seem to forget this one. Even technical folks who should know better ask chatbots questions that are answered with complete fabrications, and fail to check the results.
I asked Google’s Bard to tell me what restaurants in my suburb serve gluten free food. It raved enthusiastically about the gluten free burgers at one particular restaurant, and even provided a link. That link, as it turns out, went to a completely different restaurant that does not serve burgers, let alone gluten free ones. The first restaurant it mentioned doesn’t serve gluten free food at all. Lucky I didn’t take it seriously, eh? But I wanted to!
I think this is the real danger of AI – that we want to believe. Chatbots are so plausible they draw us in, even when we should know better. They give confident and completely wrong answers, in a way that we are all too happy to accept. They seem to have generated a veneer of respectability and credibility which is wholly undeserved.
There simply is no way you can take these fake news generators and make them intelligent. It’s not a step on the road to intelligent machines. At best it’s a cul-de-sac. At worst we are sleepwalking in completely the wrong direction. It’s time to wake up.