In the Terry Pratchett novel Feet of Clay, Sam Vimes has a device called a Disorganiser that reflects Pratchett’s mixed feelings about technology. The device is powered by an imp who is desperate to be useful, so it doesn’t get thrown off the nearest rooftop (Sam has been rather short-tempered with previous devices).
After one particularly frustrating encounter, the imp frantically announces a new skill:
‘I can recognize handwriting,’ said the imp proudly. ‘I’m quite advanced.’
Vimes pulled out his notebook and held it up. ‘Like this?’ he said.
The imp squinted for a moment. ‘Yep,’ it said. ‘That’s handwriting, sure enough. Curly bits, spiky bits, all joined together. Yep. Handwriting. I’d recognize it anywhere.’
‘Aren’t you supposed to tell me what it says?’
The imp looked wary. ‘Says?’ it said. ‘It’s supposed to make noises?’
Feet of Clay, Terry Pratchett
For all we call them “Artificial Intelligence”, chatbots recognise language the way the imp recognises handwriting. They recognise plausible-looking sentences. They can even put them together: a statistical process of selecting a likely word to fit the preceding ones, rather like a Lego tower, where each brick clicks snugly onto the one before. A chatbot can click the words into place and make a complete sentence, but it has no more understanding of what it is building than a piano understands a concerto. Chatbots do not reason. They do not interpret. They do not think. At all.
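To see just how mechanical the brick-clicking is, here is a toy sketch (my own illustration in Python, not the code of any real chatbot) of next-word selection: count which words follow which in some text, then repeatedly pick a likely successor. Real systems use neural networks trained on vast corpora, but the core move is the same: likelihood, not meaning.

```python
import random
from collections import defaultdict

# A toy "language model": pure word statistics, zero understanding.
text = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which.
successors = defaultdict(list)
for current, following in zip(text, text[1:]):
    successors[current].append(following)

def babble(word, length=8):
    """Click words together one by one, with no idea what any of them mean."""
    out = [word]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # a plausible next brick, nothing more
    return " ".join(out)

print(babble("the"))  # e.g. "the cat sat on the mat the cat ate"
```

The output is grammatical-looking word salad, which is rather the point: nothing in that code knows what a cat is.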
Chatbots are not capable of understanding, or analysis, or even of differentiating between truth and fiction. All they can do is fit together something that looks like language. Which makes it desperately worrying when teachers use them for marking, lawyers use them to come up with arguments for court, or anyone uses them for anything requiring analysis or accuracy.
Even if you give them a rubric, or a set of rules, they are not actually capable of applying it in any kind of rational, critical way. They will say they can. They will produce “results”. But the results are no more meaningful than if you were to pour the contents of the internet into a blender, blend them vigorously, and then pour out a glass of results. They merely look more meaningful. As my friend Lily Ryan puts it, chatbot statements are fact-shaped. But they are not facts. In this case they are result-shaped, but they are not results.
AIs are surprisingly good at image description, when carefully vetted by a human. They can piece together other people’s work into patchworks they call new art. But they can’t figure out whether the person they have just drawn has too many fingers on one hand, too many limbs, or bizarrely smeared facial features.
Google’s AI recommending the use of glue to keep toppings from slipping off pizza, or the consumption of one small rock per day for health reasons, or Meta AI telling me that Rice Bubbles are gluten free: these are not minor anomalies. They are the inevitable outcome of the Large Language Model technology that powers the systems we are currently so excited about. Microsoft, Google, Facebook, OpenAI, and all the other companies grasping for the flood of AI investment are not creating intelligence. Chatbots are not thinking machines. They won’t even be the ancestors of thinking machines.
They are barely more useful than the toys you get at conferences with the company logo etched on them, but they use immense amounts of power and water to create their shiny output. They’re just water-guzzling, climate-wrecking marketing toys.

