ADSEI – Teaching Kids to Change the World

People Friendly: Teachers’ Guide to AI – Defining Artificial Intelligence

Defining Artificial Intelligence

“AI is a strange science. It tries to define what it studies, while studying what it defines.” – Andy Kitchen

Linda used to teach Computer Science to year 11 students, and that year-long class always started with an introduction to Artificial Intelligence. Step 1 was: define intelligence. The class typically expected this to be easy, but the debates and discussions went for hours, and they could never really pin down what intelligence is – which was, of course, the point. We suspect that’s because intelligence is, at heart, indefinable. Intelligence, like ethics, is not one thing. It’s pluralistic. It can be expressed and measured in a multitude of ways. We recognise intelligence in certain behaviours, but it’s impossible to put together an essential list of attributes that defines intelligence without stumbling over a host of caveats and exceptions.

In the end the class usually came up with a list of attributes that were necessary for intelligence – things like the ability to learn, to adapt to new situations, and to be creative.

The trouble is that every definition of intelligence requires a definition of another surprisingly challenging term. For example, we can assert that to be intelligent requires creativity, but what exactly do we mean by creativity? Well, the ability to create something. But if I create something by following a recipe – say, making a tasty curry – is that creative? I haven’t done anything new, I’ve simply followed someone else’s process to create something that has been created before. Perhaps if I vary the recipe by adding different vegetables, or a different combination of spices, that might be properly creative? So is creativity creating something that is wholly unlike anything created before? How new is new? Does painting a statue blue make it a new statue? 

Another problem with trying to formalise intelligence with a precise definition is that it raises uncomfortable questions about our own intelligence (and hence our humanity). If you’re bad at pattern recognition, does that make you unintelligent? If you can’t visualise things in your mind, you have aphantasia, which is estimated to affect around 3.9% of the population to some degree. Does that make you unintelligent? What about if you can’t recognise faces, because you have prosopagnosia, which is estimated to affect around 3% of the population? 

There is increasing evidence that dolphins and great apes can recognise themselves in a mirror, and that some great apes can learn elements of sign language. Crows can make and use tools. How many of the attributes of intelligence are necessary to meet the definition of intelligence? Which, if any, can we do without and still be considered intelligent? How, indeed, do we define and test for some of those attributes? What is our goal in defining intelligence? Is it to draw a line with humans on one side and all other creatures on the other, or is the goal to define intelligence precisely and see who – and what – falls on our side of the line?

The field of psychology has wrestled with these questions for decades, without reaching complete agreement on an answer. Since we can’t even define intelligence yet, it is fascinating that we seem to think we can create it. But that’s a classic trope in the world of technology, where tackling a problem without understanding, or even defining, it first often seems to be the default approach. We are obviously not the first people to ask these questions. So why do companies like Google DeepMind, OpenAI, and Anthropic claim to be building, or “solving”, intelligence? The likely answer is that they know they’re not, but hyperbolic claims are what persuade venture capitalists to fill their coffers with investment funds. While that is bad, the alternative – that they truly believe they can create machines with human-like or superhuman intelligence, without a scientifically accepted definition, and working under the market incentives of capitalism – seems even worse.


Activity – what is intelligence?

This is an excellent activity to run in class, at almost any year level. Have a class discussion on what it means to be intelligent. Come up with a list of attributes which define intelligence. Then, brainstorm examples of exceptions for as many of these attributes as possible.


It’s not only Linda’s classes that expected intelligence to be simple to define. Back in 1956, Marvin Minsky, John McCarthy, and their colleagues proposed a summer research project at Dartmouth College to create artificially intelligent software, expecting that significant progress could be made in a single summer. Here we are in 2024, and not only have we not succeeded in creating truly intelligent software, now we’re not even sure it’s possible.

It turns out that there is no definitive answer. No simple “if it does this, it’s definitely intelligent” criterion. Some basics are generally agreed upon, though. Intelligence requires, among other things, the ability to learn, to adapt to new situations, and to be creative. 

Who/What passes the test? 

In 1950, Alan Turing proposed a test that he thought could help determine whether a computer program is intelligent. He based it on a parlour game called the Imitation Game, in which an interrogator, exchanging written notes with two players hidden in other rooms, tried to work out which was a man and which was a woman from their answers alone. In the computing version, now known as the Turing test, a person sits in a room with a keyboard and screen, and chats, using the keyboard, to two agents in other rooms: one human, and one machine. If they cannot tell which is the program and which is the person, then perhaps that program could be called intelligent.

And yet, along come Large Language Models, which can happily pass the Turing test but have no understanding or awareness. They simply use a statistical process to calculate a plausible string of words. Are they intelligent?
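To make “a statistical process to calculate a plausible string of words” concrete, here is a deliberately tiny sketch in Python. It is not how LLMs actually work internally (they use neural networks trained on vast corpora), but it illustrates the core idea of next-word prediction: count which words tend to follow which, then sample. The toy corpus is invented for illustration.

```python
import random
from collections import defaultdict

# Toy training text, invented for this example.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Generate a 'plausible' string by repeatedly sampling a
    statistically likely next word. No understanding involved."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every sentence this produces looks locally fluent, because each word really did follow the previous one somewhere in the training text – which is exactly why fluency alone is weak evidence of intelligence.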


Activity – Do Chatbots pass the Turing Test? Why/Why not?

Have your students hold conversations with a range of different chatbots. You could use ChatGPT, Claude, Gemini, service chatbots from company websites, or any others you can find. Ask them to identify responses that seem human, and responses that don’t. What makes them seem human? What makes them seem inhuman? How do they differ from conversations with friends, or with strangers? Would your students recognise each one as an AI? If so, why?


There’s a wonderful quote by Adrian Tchaikovsky, from his novel “Service Model”:

“Humans have been reading personality and self-determination into inanimate phenomena since long before Alan Turing ever proposed a test. The level of complexity in interaction required for an artificial system to convince a human that it is a person is pathetically low.”

One thing is certain – there is no direct path from Large Language Models to truly intelligent systems. They are not, as they are sometimes hyped to be, the very last step before machines become intelligent. They are not even a logical step along the way. 

A system that most of us would think of as real AI – something that can, more or less, think like us – is known in Computer Science as Artificial General Intelligence (AGI), and it is nowhere on the horizon. The term Artificial Intelligence is instead applied to anything produced using the algorithmic and statistical techniques designed in the quest for real AI. It’s not intelligent. It just does some stuff that AI researchers came up with, and that might look a bit smart. In dim light. From the right angle. If you squint.

Most of these systems use statistical algorithms from a family known in the field as “Machine Learning”. “Learning” is, again, something of a misnomer. They’re not really doing what we think of as learning, which would involve understanding. They’re just getting progressively better, with feedback, at one very specific task.
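That feedback loop can be sketched in a few lines of Python, assuming the simplest possible “task”: fitting a single number. The data and learning rate here are invented for illustration; real machine-learning systems adjust millions or billions of such numbers, but the principle of nudging parameters to reduce error is the same.

```python
# "Learning" as feedback-driven improvement at one narrow task:
# fit y = w * x by repeatedly nudging w to shrink the error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x

w = 0.0    # initial guess for the single parameter
lr = 0.05  # learning rate: how big each corrective nudge is

for step in range(200):
    # Average gradient of the squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # feedback: adjust w to reduce the error

print(round(w, 3))  # converges close to 2.0
```

The system ends up very good at predicting y from x for this one pattern, and utterly incapable of anything else – there is no understanding anywhere in the loop, just error going down.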

Even voice recognition AI struggles at times. I’m currently staying in a highly automated house, with friends, and this morning my “Hey Google, turn the A/C off” was greeted with “Sure, I’ll tell you a joke…” And while it’s true that many people will tell you that my accent is a little unusual, Google has had plenty of time to get used to it, and I don’t really see “turn the A/C off” as sounding like “tell me a joke” – even in my weird accent.


Activity – How good is Voice Recognition?

Experiment with voice recognition – you could use voice recognition on Google Maps, on phones, text transcription in video conferencing systems like Google Meet, Teams, or Zoom, or the built-in speech recognition in Windows or on a Mac. What words does it get right? What words does it get wrong? Is it different for different people? Does it consistently get some words wrong, or does it vary? Can you identify circumstances where it goes wrong, and circumstances where it’s mostly right, or does it seem random? Afterwards, have a class discussion about different uses for voice recognition (such as meeting transcription, medical appointment transcription, asking Siri or Google to switch lights or AC on or off, controlling equipment, etc), and consider where mistakes might be problematic. Are there situations where using voice recognition could be dangerous if it goes wrong?


It’s a shame, really, that the term AI has morphed into referring to systems that are quite horribly dumb. And even if we don’t have to worry about AI becoming sentient and taking over the world any time soon, there are plenty of dangers in the cavalier way we use AI and machine learning. We tend to trust them too easily, and fail to evaluate them critically. That’s why it’s so important that kids learn about technology, including how to be rationally sceptical of it.

Large Language Models are problematic for a number of reasons, which we will discuss in more detail in future chapters, but for now, let’s have a quick look at the highlights (or, more accurately, lowlights) reel:


Activity – Fair use?

Find an opinion piece or other post arguing that training LLMs on creative work is theft, and one arguing that it’s fair use. Compare the arguments. Which do you believe? Are there merits to both, or is one argument much stronger than the other? Who benefits from defining this form of data use as fair use? Who is harmed by it? What is the financial cost or benefit, and to whom, of training LLMs on other people’s data, and what would be the cost or benefit of ruling it illegal (and to whom)?



Activity – Putting the AI to the test

With everyone working with the same chatbot, set the class the task of finding questions that the chatbot answers incorrectly. Once there’s a selection of questions, try the same questions on different chatbots. Do they give the same wrong answers? How do they differ? How can you test the answers given by a chatbot?

OR

With the class using a range of different chatbots, ask them the questions from a recent test the class sat – whether it’s Digital Technologies, Science, Humanities, Maths… – and then mark the chatbots. How many questions did each get right? Were different chatbots more or less successful? Were any of the answers ambiguous or hard to understand?


One other problem we need to touch on is that of equality of access. With chatbots increasingly offering low levels of access for free, and higher-level, better-performing systems coming at a cost, we risk further entrenching disadvantage, as the wealthiest people buy access to the best tools, while the poorest only have access to the low-performing free tier.

The rest of this book will contain more about things AI can do, and things it can’t (and why). It will talk about what problems we can use AI to solve right now, and what it might be able to do in the future. We’re also going to spend some time talking about hype cycles and hyperbole, and the very manipulative ways AI is being marketed and discussed right now. We’ll cover some of the ways in which AI can be harmful to us, and to the world. That naturally leads into the issue of bias in Machine Learning. Where and how can it appear, why, and what impact does that have on AI outcomes? 

No discussion of AI would be complete without a look at our rights in AI systems, and what reasonable expectations we can and should have of the way these systems operate, which don’t always align well with the ways AI companies operate. And then we’ll look at a path to a better future, where AI systems are built with transparency, fairness, safety, social good, and wellbeing built in. 

Most importantly, we’ll give you practical activities that you can use to explore AI in the classroom, and a list of helpful resources you and your students can use to go deeper into the ideas discussed here.

What’s our position on Artificial Intelligence? Well, like AI, our position is evolving. For now, we can say with certainty that there is no such thing as Artificial General Intelligence (AGI) – humanlike intelligence. At some point in the future there might be, but despite all of the hype, it’s not imminent, and it certainly doesn’t exist yet. We don’t even have any good evidence that it’s possible to create true AI, though, equally, there’s no reason to believe that it isn’t. Our brains are physical things – just biological computers, really: a mass of electrical connections bathed in a sea of hormones. We should be able to puzzle them out and mimic them in some way. Truthfully, though, we are a surprisingly long way from being able to do that in any meaningful fashion.


Activity – Computers versus Humans

Describe the things that humans are good at that computers aren’t. Describe the things that computers are good at that humans aren’t. Are there some ways that computers have already surpassed human abilities? Are there ways that human abilities can never be matched by a computer, and why/why not?

Extension: Discuss whether it’s even a good goal to try to reproduce all forms of human intelligence with computers.



The important thing about our response to Artificial Intelligence systems, now and forever, is that we evaluate them critically and rationally, and demand evidence of their strengths and weaknesses, rather than simply taking the hype at face value.

Back to Introduction                    Forward to How AI Works
