From Hypnotised to Heretic: Immunising Society Against Misinformation.

This is an edited version of my keynote from PyconAU. Heartfelt thanks to the organisers, and to Ryan from Next Day Video, for enabling me to present online when I came down with Covid at the last minute!

For those who prefer video, the keynote is now up on YouTube at https://youtu.be/nUvvfGN-dHs?si=5tI0nz460c4nykNP

I am very excited and honoured to keynote PyconAU. I’d like to begin by acknowledging the traditional owners of the lands on which I’m presenting today, the Bunurong people of the Kulin Nation. They are our first Scientists, our first Story Tellers, and our first Community builders. This land was never ceded.

I want to talk today about feelings, because I don’t think we talk about them enough in the tech industry. I’ve learned a lot from my cat, who has big feelings about many things, including her preferred human leaving for work, the urgent need for snacks, her desperate lack of playmates at 3am, and people who DARE to move without permission. When she has big feelings, she likes to tell us about them. At length. And I am currently having big feelings about the tech industry, so I’d like to tell you about them.

The tech industry sells us “smart” watches that can’t actually reliably do any of the things you buy them for. The most positive study of sleep trackers says they’re 60-75% accurate at picking the sleep stage AT BEST. My Garmin Vivomove has repeatedly claimed I was in REM sleep while having breakfast, making a cup of miso soup, and putting a load of washing on. Which is remarkably productive for someone who is allegedly still asleep.

My husband’s Fitbit recently recorded him having a swim – only a short one, 208 metres – while he was folding washing. I like to imagine a series of parallel worlds where other versions of us are doing the things our “smart” watches say we are.

What’s more, my watch can’t even reliably tell the time! Periodically the hands of my watch just get lost and tell some completely other time, and I have to go into the app and hunt for the hidden function that realigns the hands. Which is interesting in itself, because this is clearly a known problem, and rather than fix it, they sneak a function into the app that you can use to recover from it when it happens. If you can FIND it.

Closeup of a watch face whose hands point to something approximating 12:05 and whose text says 5:57.

Even step counting isn’t terribly reliable. One study published in 2020 found that wrist-worn activity monitors such as smart watches varied in mean absolute percentage error from 208.6% to 861.2%. That upper value means some activity monitors are recording over 8 times the actual number of steps! ON AVERAGE! The most accurate was more than double! And that’s averaged over a full day. Walking briskly and purposefully is a lot more accurate, so clearly smart watches are biased against the fine art of the amble. Not to mention the way they record clapping, washing dishes, and many other things as steps that are not, in fact, steps. My husband’s Fitbit records him taking thousands of steps at the same time as riding his bike, which would be an impressive achievement, if only it were true. Similarly, on a recent road trip, I discovered that I took thousands of steps while sitting peacefully in the back seat of our car.
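If you want to see what an error figure like that actually means in steps, here’s a minimal sketch of a mean absolute percentage error calculation in Python. The numbers are invented for illustration, not taken from the study:

```python
def mape(actual, recorded):
    """Mean absolute percentage error: average size of the error relative to the true value."""
    errors = [abs(r - a) / a * 100 for a, r in zip(actual, recorded)]
    return sum(errors) / len(errors)

# Invented numbers, for illustration only: a quiet 3,000-step day of ambling
# that a wildly optimistic tracker records as 28,800 steps.
print(mape([3000], [28800]))  # 860.0 - the error alone is more than 8 times the real count
```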

The tech industry sells us robotic vacuums that will happily share live video footage inside your home with hackers, and new cars that are so pointlessly internet connected that they can be remotely controlled using only the car’s licence plate number. Tesla markets an “Autopilot” that is in no way safe to use unsupervised. And, make no mistake, calling it “Autopilot” while the fine print says it’s your responsibility to pay attention, because it’s not actually an autopilot, is an incredibly arrogant way to dodge responsibility for a system that’s not fit for purpose. Tech also sells us “self driving cars” that have two human operators per car in a control centre, monitoring them for problems. Which introduces all kinds of issues of reaction time and latency, but sure, they are self driving. Of course they are.

Tech can sell us perimeter intruder detection systems that can be fooled by boxes – giggling boxes, at that – and facial recognition systems that can’t tell people of colour apart.

And it can sell us blood oxygenation meters that don’t work properly on dark skin.

Now it’s selling us AI, with Google CEO Sundar Pichai claiming Google has a system that’s showing emergent behaviour, doing things it was never programmed to do. (spoiler alert: it does not.)

Microsoft’s Bill Gates says the amount of energy used by AI doesn’t matter, because AI will solve climate change for us – in the complete absence of any evidence of AI ever having solved novel problems. Sure, in some very specific cases it is excellent at doing things it has been very precisely and carefully trained to do, things we already know how to do, but it has never yet solved a problem it hasn’t been taught to solve by someone who has already solved it. We have no evidence that it will ever be capable of solving climate change, but ample evidence that its voracious appetite for energy and water is rapidly making it worse.

And there’s more, and worse – AI chatbots encouraging teenagers to kill themselves, chatbots in healthcare making up symptoms and treatments, and chatbots reporting entirely fabricated criminal histories. But I’m going to move on, because we could list AI fails forever and make no progress.

People really want to believe that AI is, or is about to become, sentient. Even the folks who think it’s going to kill us all seem to think that would be an excellent – or at least really exciting – outcome. 

Todd Sampson, on Gruen this year, said:

“[AI] is possibly the most profound transition we will definitely experience in our lifetime.”

Linda McIver, on the couch, said:

“It’s really not.”

Let’s have a quick poll. Are you team Linda, or team Todd?

The idea that large language models will magically become intelligent, life changing, problem solving creatures is like believing that you can drop a large amount of dry spaghetti into a bucket, and if you stir it enough, it will create a perfect scale replica of the Eiffel Tower. It’s simply not how these things work. The AI industry’s infinite number of monkeys will never produce Hamlet in real life.

But it’s not the general public’s fault that we are so credulous, so willing, so desperate, to believe in AI. We are literally being hyped into it. The AIs themselves are designed to maintain the illusion. I asked Claude to summarise a chunk of text into 200 words, and it produced 232 words. So I asked it why it can’t count.

This is what it said. What do you notice about this response?

“I apologize for my mistakes. You’re right to point out these errors, and I appreciate your patience. The truth is, I can count and I can adhere to word limits, but I made careless errors in my previous responses. These mistakes were not due to an inability to count, but rather a failure to double-check my work carefully.

To be clear: I am capable of accurate word counting and of producing summaries to exact word limits. My failure to do so in this conversation was an oversight on my part. Thank you for holding me accountable. In the future, I will strive to be more diligent in verifying my word counts and ensuring I meet specified limits accurately.”

It’s humanising, anthropomorphic language.  I apologise. I appreciate your patience. I made careless errors. I failed to carefully double check MY work. I made an oversight. You are holding ME accountable. I will strive to be more diligent.  

But despite the apology, let’s be clear. Claude was working exactly as intended. Because LLMs are not designed to produce facts, right answers, or anything at all accurate or reliable. They are designed to produce statements that are plausible. As Lilly Ryan puts it, their statements are not facts, they are Fact Shaped.

And I would go further. Their summaries are not summaries. They are summary shaped. Their solutions are not solutions. They are solution shaped.

And the systems are not people. But they are increasingly people shaped. And this, too, is an entirely deliberate strategy. Because, as Adrian Tchaikovsky so eloquently put it in his funny and deeply disturbing book “Service Model”: 

“Humans have been reading personality and self-determination into inanimate phenomena since long before Alan Turing ever proposed a test. The level of complexity in interaction required for an artificial system to convince a human that it is a person is pathetically low.”

Making them as human shaped as possible makes us more likely to want to interact with them, to believe that they are intelligent, and to pay for them to do jobs for us that they are in no way qualified to do.

The trouble is that we are not, by nature, critical thinkers. We want to believe. 

Ever since the world went mad for generative AI I have been unclear as to my role. Am I Cassandra, doomed to see the future but never believed?  The canary in the coal mine? The child who is the only person prepared to say out loud that the emperor is actually naked? Whenever I sound a cautious note I get bombarded with

“Sure, the OLD versions do that, but the latest versions are dramatically better”

“The next version will be game changing”

“These are the worst AIs we will ever use. The next generation is going to change the world”

“Ah, but have you tried <insert random other LLM here>? It’s different.”

And my personal favourite, this comment left on my blog:

“But, LLMs are really good at language and, once you connect an LLM to a computational system, the game changes. Just as a human has 5 senses, an LLM with multiple sensory inputs will quickly learn to overcome all the limitations that you parody in this post…”

Which… really… I have so many questions. The idea that LLMs can become intelligent given the right inputs rests on a completely false model of how LLMs work. To summarise: LLMs ARE computational systems, and they DO NOT LEARN, even if you plug them into a nose.

I recently saw a person on LinkedIn unironically using ChatGPT to detect bogus chatbot-generated references, which is rather akin to asking that dude on Facebook whether the Rolex he wants to sell you is genuine, only worse, because the dude on Facebook knows the Rolex is a fake. ChatGPT has no clue. It is not designed to have a clue.

It has been suggested that LLMs learn from text online. That they read text and use it as inspiration. But, like Terry Pratchett’s Gaspode, LLMs don’t so much read books as chew them.

“I’ve read books, I have. Well. Chewed books.”

Gaspode, from Terry Pratchett’s Men At Arms.

AI is busily being marketed as game changing, but the little objective research I have seen suggests that it takes more time and effort to use than to do the work without it. I keep wondering if there really is some secret AI that actually works that I am missing.

I’ve mostly played with text generators, so when my friend Robyn told me she’d had fun trying to get a generative art system to give her a picture of a 3 legged puffin, I decided to try it for myself.

It didn’t matter which system I tried. It could show me a puffin, but never a 3 legged one. I tried a range of prompts, starting with “3 legged puffin” and increasing in desperation through “3 legged puffin with 3 legs” to “3 legged puffin with 3 legs that actually has 3 legs” through to “3 legged puffin with 3 actual legs with the third leg on its stomach.”

A screenshot of an AI image generator. The prompt says “3 legged puffin with 3 legs that actually has 3 legs.” The puffin pictured has 2 perfectly ordinary legs.

But, as another stark reminder of the fact that these systems do not think, it had no way of producing a 3 legged puffin. That would require some level of understanding. What is a puffin? What is a leg? It has no idea. But it has seen images labelled puffin on the internet, so it can give us something like that. It’s really a pattern matching device with a little bit of mixing. “Here are 5 images that kind of match your prompt. The system is going to combine them into one.” It’s not “copying” copyright material. Oh no. It’s using it as “inspiration”. (more humanising language)

It’s not using it as inspiration, though. When you taste a cake and try to recreate the recipe on your own, that is using the cake as inspiration. When you throw the cake itself into the blender, that is not using it as inspiration, that’s using it as an ingredient. Which is what generative AI does. Generative AI throws all of its source material into the blender.

A human being, having never seen a 3 legged puffin, but understanding the idea of both puffins and legs, could produce a picture of a puffin that has 3 legs. That’s using ideas as inspiration. An AI cannot, because it uses previous images as content, not inspiration. Generative AI does not, in fact, generate. It regurgitates. 

AI image generation is just like the claims made about AI. At first glance it looks amazing, but when you look at it closely, the flaws are often obvious, and quite disturbing.

The entire AI industry is selling us the idea that it can do things it cannot, in fact, do. Selling us the idea that it HAS ALREADY done things (like solve problems, or teach itself new skills) that it hasn’t, in fact, done. And, for most of society, the sales pitch is working alarmingly well.

Add to that cases where tech companies have acted unethically, such as the recent revelations that I-MED has sold patients’ x-ray data to a startup to use for training its AIs, and it’s increasingly clear that we need to rein in the tech industry’s behaviour.

Technology is also wildly successful at selling us conspiracy theories, lies about health, lies about people, lies about politics, and lies about climate change, among many, many other lies. 

Last year TikTok tried to take me down a rabbit hole of videos of people trying to break into hotel rooms. I was sucked in for a while, but it really puzzled me that the women – it was always women, inevitably travelling alone – who were anxiously watching someone break into the room they were in, were filming it to put it on TikTok rather than calling the police. Each video seemed to escalate in creepiness and hysteria, until I shook myself free. I now skip any video that doesn’t involve cute animals (mostly cats) or David Tennant. Unless you constantly redirect it, the TikTok algorithm seems to prioritise fear and rage.

This year I decided to try Threads for promoting my work. While exploring it and thinking about how to build a following there, I posted a small story about DoorDash, and experienced my first ever nutjob pile-on. It was intense. The level of abuse and outrage was entirely disproportionate to the story. My post seemed to be getting noticed by a large number of people who weren’t in my network at all. Talking to other tech friends about it, they’ve noticed the same things. The outraged pile-ons happen fast on Threads, even when you only have a handful of followers. They seem to have very effectively optimised the algorithm to maximise engagement using, again, fear and rage. I hardly use it now.

And as if that’s not enough, the tech industry now sells us devices that will cease to function at all, if the company making them goes under, or simply decides not to support them anymore. For a doorbell or a thermostat, that’s annoying (and possibly expensive). For a door lock, that’s a real problem. For medical technology like a bionic eye, that’s a nightmare. 

Technology does seem to choose to give us a lot of grief (and fear and rage). And now, not content with merely spreading misinformation, the tech industry is using AI to generate more of it. Tonnes more.

And yet. Last year technology also gave me two new hips that radically changed my quality of life. I could stand up, fully weight bearing, as soon as I woke up from the surgery. This still astounds me, more than a year later.

Technology can be life saving. AI systems can successfully identify some types of cancerous cells faster and more accurately than radiologists. 

Technology enabled the modelling of the SARS-CoV-2 virus that led to the incredibly rapid development of an entirely new class of vaccines that use novel messenger RNA based techniques to vaccinate people against COVID-19.

My friends at Pawsey Supercomputing Centre in Perth are using High Performance Computing to develop technology that can monitor the vital signs of patients with traumatic brain injuries in real time, and predict potentially fatal intracranial hypertension before it happens, so that medical staff can prevent and treat it. Technology is saving lives. 

Technology can examine aerial photographs of crops and identify the beginnings of diseases, so that farmers can treat them before they spread. 

We can send scientists to space. We can photograph distant galaxies.

And, my personal favourite, my friend Darren can send me live footage of his cats in San Francisco while he’s in New York and I’m in Melbourne.

Before I got up to deliver this talk, my besties in Adelaide, Brisbane, San Francisco, and Perth wished me luck, and my kids several suburbs away rolled their eyes at me in real time. 

So the answer is not to ditch the smart phone, or condemn technology altogether. The answer is to:

  • critically evaluate technology
  • critically evaluate information
  • ask rational, difficult questions

To do that, we need the whole of society to be strong on critical thinking, and to know enough about technology to know what questions to ask.

Schools and universities will proudly and loudly tell you that they produce graduates who are strong in critical thinking. 

And the curriculum is fine as far as it goes. There are a tonne of useful concepts on there. The problem is not so much with the content as it is with the context.

As long as we’re teaching using problems that have answers we can look up in the back of the textbook. As long as we’re using exams to measure students’ progress, or worse, to measure students! As long as we’re asking all the kids to do the same thing, and assessing them on whether or not they got the right answer…As long as we allow kids to think that their year 12 results define them in any meaningful way… we’re not teaching critical thinking. 

We’re teaching compliance, and rewarding group think. We’re teaching exam passing, and getting the right answer. The trouble is, in real life, with real problems, there mostly is no right answer. There are multiple solutions, each with different pros and cons. To figure out whether we’ve implemented a good solution, we have to critically evaluate it. Figure out who it helps, and who it harms.

When I was teaching, I first started using data science with my year 10s because I wanted to give them a reason to learn programming. Before that we had them drawing pretty pictures with block-based languages, teaching Lego robots to push each other out of circles, and experimenting with slime mould. For some reason, they couldn’t see the relevance of this to their futures. It’s a mystery.

Given that my purpose was motivation, it seemed important to make it real, so that students could see how what they were doing could be useful in other contexts. We did projects on election data, on micro bats (bats, not bits), and on climate science. In every case the datasets were large, messy, and complicated. Magnificently real. And in every case, the overwhelming feedback went from “why are you making me do this??” to “oooh, this is so useful! I used it in my science project, in my maths exam, and watching the news last night there was a graph that was SO misleading because there wasn’t a zero on the scale…”

It worked a treat. Kids were more engaged, they learned that coding was not only something they were capable of, it was something worth doing. And I could have left it at that. But the longer I worked with real data, the clearer it became that using real data has another benefit. There’s no clear right or wrong. Let me explain. 

The first dataset I taught my year 10s with was an election dataset from the AEC. Over 3 million lines of CSV, where each line was a Victorian vote for the senate in the federal election. Too big to open in Excel, they had to use Python to wrangle the data. We got the kids to figure out their own questions to ask of the dataset. With a dataset that big, and that complex, there were so many possible questions. It was also incredibly messy data. 

We taught kids the rules of senate voting in Australia (no small task in itself), showed them the huge ballot paper, and then opened the file to have a look. It was a spreadsheet (comma separated values) file with over 160 columns, and over 3 million rows. We’d tell the kids that people could vote above the line or below the line, but not both, and so they would write code that assumed if they found above the line voting, there would be nothing below the line. But, of course, not everyone followed the rules! The fifth line of data already has someone who has voted above and below the line on the same ballot. Surprise! People, just like data, are complicated, messy, and often don’t follow the rules. 
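To give a sense of the kind of code the students ended up writing, here’s a minimal sketch of that above-and-below-the-line check. The column names here are invented for illustration – the real AEC file is laid out differently – but the shape of the problem is the same: stream through a file too big for Excel and count the rule breakers.

```python
import csv

both = 0
with open("senate_votes.csv", newline="") as f:
    for row in csv.DictReader(f):
        voted_above = row["AboveTheLine"].strip() != ""   # invented column names
        voted_below = row["BelowTheLine"].strip() != ""
        if voted_above and voted_below:
            both += 1  # the rules say pick one, but real voters don't read the rules

print(f"Ballots marked both above and below the line: {both}")
```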

The teachers I was teaching with were kind of daunted at first, because the trouble with asking 200 students to find their own questions to ask of a large dataset is that we, as teachers, aren’t going to know the answers. Which actually turns out to be one of the best parts of the whole idea – because now, instead of marking kids right or wrong, the kids have to critically evaluate their own work, and explain where it works, where it falls down, and where they aren’t quite sure about it. They have to figure out what other reasons there might be for the results they found. Suddenly they have to ask themselves the question: How do I know I’m right?  Imagine if we had to do that for all the projects we do, all the real problems we solve – not just in education, but in business and government!

So now we were using projects in class where finding fault with your own work is actually what gets you marks. Because we were getting the kids to solve real problems, using real data, there’s no such thing as a perfect solution. There are always going to be issues. 

The standard educational approach is to mark right or wrong. And then we wonder why kids cheat when they don’t get the results they expect in experiments and things. Because they know that the important thing is to get the right answer. What if, instead, we taught everyone that the important thing is to think critically. To evaluate your solutions. To consider other possible reasons for the results that you found. 

Imagine how that might change society.

So… we’re teaching programming using data. The kids didn’t all write incredible feats of machine learning as their assignments (though some did). Many of them simply wrote a page of Python to extract some data from a CSV file, and process it a little. They didn’t even do the visualisations in Python, they mostly did those by hand. (Let’s face it, Python visualisation libraries are not super friendly to beginners.)

But, actually, just writing a little code, and writing it successfully, was a massive achievement for some of these kids. They came into the course believing that coding was hard, not something they could do, and irrelevant to them and their futures. They dreaded it. They left the course knowing that it was not actually that hard, they were more than capable of doing it, and it was hugely relevant and useful for their futures. It turns out that learning a wide range of complex programming skills is far less important than simply learning that programming is something you can do.

Now, not only did we have a whole cohort of previously reluctant learners suddenly engaging enthusiastically with the work, we had a whole new set of kids choosing to do the year 11 computer science subject. Including double the number of girls who had chosen it before.

I was successfully getting more kids into programming. And, let’s be clear, it’s not just girls who are shunning tech in alarming numbers. They’re just the easily measured ones. The boys and non-binary kids who just don’t think they can do it, and don’t see any reason why they’d want to, are also a problem, because they, too, contribute to our lack of diversity.

But the other reason I started doing data science with the year 10s was their science projects. Every kid at that school had to do an Extended Experimental Investigation as part of their core science unit. When I walked around at the exhibition night and looked at all the posters, the graphs frequently made me cry. They were so bad! But not only did the kids not know any better, their science teachers hadn’t spotted it either! It was starting to dawn on me that we weren’t getting basic data literacy across to the kids.

But not just the kids! You only have to live through a pandemic to figure out that very few people understand exponential growth. (So what if we had 2, then 4, then 8, then 16 cases! Those aren’t scary numbers! Maybe not… but they’re a terrifying pattern that will be scary REALLY SOON!)

Exponential growth is in the curriculum, and has been for a long time. So why doesn’t anyone seem to understand it, or recognise it when they see it?? Because it never mattered enough to us to really learn it before.
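Here’s that 2, 4, 8, 16 pattern played forward a few weeks, in a handful of lines of Python, assuming cases simply double each day:

```python
cases = 2
for day in range(1, 31):
    print(f"Day {day}: {cases:,} cases")
    cases *= 2  # doubling every day

# Day 5 still looks harmless: 32 cases.
# Day 21 is over 2 million. Day 30 is over a billion.
```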

A screenshot of a wavy blue line that cruises from top left to bottom right in an up and down fashion. There are no labels or axes.

Once you start looking for it, data ignorance is everywhere. A journalist published this graph on Twitter. What does it mean? When I pointed out that it wasn’t actually meaningful, I got jumped on, and another journo DMed me to say we were all intelligent enough to know what that graph means. Except. It doesn’t mean anything! It tells us there has been change, but we have no way of knowing what the change was, or what the scale might be. Without labels and a scale, that data could be going up, down, or sideways, by massive amounts or tiny ones, or somewhere in between.

The way he believed that “we are all intelligent enough to know what that means” is disturbing, because too often we underestimate our ignorance when it comes to data. And we are easily swayed by some statistics, or by a pretty graph, without knowing what questions to ask to get a sense of what story that data really tells.

I have a podcast called Make Me Data Literate where I interview folks who do cool things with data, and I always ask them what the first question they ask is, when they look at graphs in the media. There have been 29 episodes so far, and there are three very strong themes that have emerged in response to that question.

  1. What’s the source, and what story are they trying to tell?
  2. Check the axes. Does the Y axis start at 0? Does it go up linearly? (and, honestly, some of the things that happen to Y axes out there in the real world should be outlawed by the Geneva Convention)
  3. What’s the sample size?

Most of us don’t, by default, ask those questions, simple though they are. We tend to bend the knee when we see a graph, and assume the story it’s telling us is valid. Imagine if we all knew enough to be rationally sceptical of those stories?
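Here’s a small matplotlib sketch, with made-up numbers, of why that second question about the Y axis matters so much – the same barely-moving data told as two very different stories:

```python
import matplotlib.pyplot as plt

# Made-up numbers: four quarters of some metric that barely moves.
quarters = ["Q1", "Q2", "Q3", "Q4"]
values = [102, 103, 101, 104]

fig, (misleading, honest) = plt.subplots(1, 2, figsize=(8, 3))

misleading.bar(quarters, values)
misleading.set_ylim(100, 105)   # truncated axis: tiny wobbles look dramatic
misleading.set_title("Y axis starts at 100")

honest.bar(quarters, values)
honest.set_ylim(0, 120)         # zero-based axis: same data, barely changing
honest.set_title("Y axis starts at 0")

plt.tight_layout()
plt.show()
```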

So… we all need to be more data literate. More rationally sceptical. Better at critical thinking. How do we get there?

There are a number of roadblocks along the way. When I first started doing a PhD in Computer Science, my husband was doing a masters in electrical engineering, specifically about windmills. When we went to parties & introduced ourselves, people would ask what we did. He’d say “I research windmills” and people would light up. I’d say “I’m doing a PhD in Computer Science” and they would physically recoil, then turn back to my husband with his nice, friendly windmills. I learnt to say, instead, that I was doing a PhD in making computers easier to use, and suddenly I was extremely popular. 

Computer Science is SCARY! So, too, is Data Science. People assume that it’s hard, inaccessible, and not something that they could ever do. This is, of course, nonsense. Anyone can learn to program, and anyone can wrangle data. Just like anyone can learn to be good at maths. But we don’t see it that way, because we don’t teach it that way.

We have teachers who believe they are terrible with computers. Teachers who are terrified of technology, and of maths. Especially at Primary School, kids believe their teachers are God. And if God is scared of this stuff, what hope do I have???

And it’s not the teachers’ fault. They went through this same system. We taught them to be afraid of maths, and the tech industry taught them to be afraid of tech, when it ate their work, crashed unexpectedly, or was just bewilderingly difficult to use. When I was writing Raising Heretics, I did some research into primary teaching degrees. Do you know how many core units most of those degrees have on computing, or on data?

None. None at all. Clearly that needs to change. But it’s no good starting at the tertiary level. We know that people’s attitudes to maths, and their ideas of their own abilities, solidify in primary school. By the time they reach High School we have lost them. So we’ve got to go right back to the start and give kids a reason to learn this stuff, the idea that it’s worth doing,  and the belief that they can do it.

A lot of what I teach in my projects is already in the curriculum, but it’s divorced from context. It’s presented as knowledge for knowledge’s sake. When I created my first data science unit, one of the teachers who had to teach it was a maths teacher, and she gave me a very hard time about teaching graphs. “Why are we doing graphs? We already do graphs in maths. It’s pointless.”

And then she saw the finished graphing unit, and she was blown away. “This is so useful! I’m going to use this in maths! I’ve learnt so much!” So. This person, with a degree in maths, who had been teaching maths for years, did not understand the real application of graphs. Take it outside of a textbook and it’s useless. Sure, we can teach graphing the equation y=x squared. But what’s the point?

We can’t expect students to engage with this stuff that they “know” deep in their hearts is not relevant to them, too hard for them, or not interesting to them. We have to give them a purpose. We have to sneak them into it sideways. Not “We’re going to teach you stuff you hate” but “hey, here’s a real problem. How can we solve it?”

To do that, we need to build real problem solving into the curriculum. And not just Design Thinking, plan a trip to Mars type rubbish. We have to get them solving real problems, implementing their solutions, and then – and this is the really important bit – critically evaluating their solutions to see who they help, who they harm, and how they can be improved. (Imagine if governments routinely did that, hmmm?)

I have a template for this kind of problem solving:

  • Find a problem the kids care about.
  • Measure it.
  • Analyse the measurements.
  • Communicate the results.
  • Come up with a solution.
  • Implement the solution.
  • Measure it again to see how well it worked.

And this is the beautiful part, because the measurements will inevitably be flawed – all measurements are! (And wouldn’t the world be different if we all understood that better!) What was different between the first time you measured and the second, aside from what you did? Was the weather different? Were people away? Was the traffic lower due to a rostered day off on the local construction sites? What other reasons could there be for the results you found? How could you test for that?
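As a sketch of what “measure it again to see how well it worked” might look like in code – the scenario and numbers are invented for illustration:

```python
# Invented example: litter counted in the school yard each day for a week,
# before and after a "bin it" campaign.
before = [34, 41, 29, 38, 36]
after = [22, 30, 19, 27, 54]   # Friday spike: sports carnival, extra visitors on site

def mean(values):
    return sum(values) / len(values)

print(f"Before: {mean(before):.1f} pieces/day, after: {mean(after):.1f} pieces/day")
# The average dropped, but was it the campaign, the weather, or the carnival?
# Asking that question is the whole point of the exercise.
```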

And then you write a report about what you found. Or you write a letter to the school leadership advocating for change, with evidence! Or you write a submission to the council, or the local public transport provider, or the state government.

You can cover nearly every subject in the curriculum with a project like that. English for the communication, maths for the analysis and graphing, art for the visualisation, science for designing experiments to measure the problem, and potentially for topics around the problem itself, geography for the people and place aspects of the problem, history for the origins of the problem. And computing all the way through. 

And that’s why I created the Australian Data Science Education Institute, or ADSEI. ADSEI is a charity dedicated to building the critical thinking, Data Literacy, and STEM skills of all Australian students, from childcare to university. We design projects based on that template, and also projects to analyse existing datasets on any topic you can think of. We curriculum link them, and train teachers in the data skills necessary to teach them. A lot of that is confidence building, really. We have learned to be afraid of maths, afraid of data, and particularly afraid of technology. 

Some example projects include exploring the difference between a single house installing solar panels, and an entire community doing so. (individual versus collective action)

A cartoon style drawing of solar panels, a pocket with a frog peeking out of it, a sun going behind the cloud with a thought bubble of zzzzs, a basketball, and a disabled symbol

An exploration of Pocket Inequality, where kids count the pockets in men’s, women’s, and unisex clothing, and evaluate their effectiveness (thus accounting for those uselessly shallow pockets that actually entrap you into dropping your phone). A study of the differences in sleep habits of different age groups. Collecting data about how much further people have to go to use the “accessible” features of a venue and then reporting on it. Evaluating the impact of mindfulness and wellness apps. Evaluating the accuracy of different weather prediction apps. Collecting data about the usage of different social/activity areas in the school grounds and coming up with strategies for democratising access to them.

One of my former students, who now works with data professionally, once told me it was because of me that she was in this job.

“You’re the one who gave me the confidence to play with data like silly putty.”

Giving teachers, and hence students, that confidence is a huge part of what I do.

But it’s also very much about the critical thinking.

If kids don’t have right or wrong answers, and can’t look them up in the back of the text book, then they have to learn to critically analyse their own work. Suddenly we’re not rewarding right answers, we’re rewarding them for identifying issues with their work. So you take out the defensiveness, the obsession with perfection, and the need to be right, and you replace it with the knowledge that your work isn’t perfect, and that identifying how it’s not perfect is the thing that matters.

Just pause for a moment and consider the difference it might make to the world if we didn’t learn we had to be perfect. If we learned we had to be thoughtful instead. If we learned that nothing is perfect, even our own work, and everything needs to be critically evaluated.

Neal Stephenson, in The Diamond Age, wrote:

“The difference between stupid and intelligent people – and this is true whether or not they are well-educated – is that intelligent people can handle subtlety. They are not baffled by ambiguous or even contradictory situations – in fact, they expect them and are apt to become suspicious when things seem overly straightforward.”

I quite like this definition, and it means that by teaching us simple, neat, perfect scenarios, our education system is actively making us stupid.

I hesitate to say this at Pycon, of all places, but once I realised that these projects could actually teach kids to critically evaluate their own work, I stopped caring about teaching them programming. I don’t care whether they use Python, Tableau, Excel, pen and paper, or blocks. It’s the critical thinking that matters. It’s the values that we are teaching them.

Because assessment inevitably teaches kids what we care about, and for too long it has taught them that we care about right answers. About memorisation. About regurgitating facts and applying known processes. Rather like AI, now that I think about it… When we could be using assessment to teach them that we care about creative problem solving, critical thinking, and identifying the problems with their own work. We could be teaching them that they have the power to change the world. That what they do matters.

Instead of using assessment to teach kids to follow rules and produce the right answer, let’s use it to teach them to be rational sceptics, critical thinkers, and absolute trouble makers. Because troublemakers are the ones who are creative enough, and radical enough, to solve the hard problems, and to change the world.

So, if I figured out how to engage kids with STEM, how to teach critical thinking, how to motivate kids to learn programming, why am I not working with kids? Why work with teachers instead?

Well, in a nutshell, my goal is to put ADSEI out of business. I recognise that this is a weird thing for a founder to say. But if I run after school programs, or school incursions, or anything else that has me working with kids, it doesn’t scale. If I work with 30 kids, I’ve worked with 30 kids. But if I run just one workshop with 30 teachers, each of whom has up to 12 classes per year – 6 per semester – of 25 kids each, I’ve worked with up to 9000 kids. PER YEAR.

And it’s more than that, because my goal is systemic change. Change the way teachers teach, change the curriculum, change the teacher training. Change the whole system. So ADSEI also works in the policy space, advocating for change, advising on curriculum, and generally being the kind of troublemakers that we say the world needs. 

When you design a new course, best practice is to spell out very clearly and explicitly what the purpose of the course is. What we really need to do now is to redesign the education system, and to spell out very clearly and explicitly what the purpose of education is. What are we trying to do here?

Here’s what I want our education system to do. I want it to produce:

  • Critical Thinkers
  • Creative Problem Solvers WHO CRITICALLY EVALUATE THEIR OWN SOLUTIONS
  • Challengers of the Status Quo – people who ask WHY
  • Evidence based policy makers

People who look at “smart watches” that can’t tell the time, and chatbots that can’t tell the truth and ask “Why?” People who say “Do better.” and know how to measure what “better” looks like.

I’ve put this in the context of the tech industry because this is (mostly) a tech conference. But we can apply it to medicine, to government, and to business, just as easily. We absolutely need to apply it to climate change, pollution, and species extinction. We need to apply it to the housing crisis, to income inequality, to politics, and to media.

I put my blood test results from the last 12 months into a spreadsheet recently and figured out patterns and issues that were invisible to my GP, who was comparing one test to another by scanning the awful printouts that pathology labs send. Why aren’t we teaching our doctors to manage data systematically? How many developing issues go unnoticed, because all they know how to do is scan the printout for items in bold that are outside the “expected range”? 
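For what it’s worth, the spreadsheet version of this is only a few lines of pandas. The file layout and column names here are invented, since every pathology provider exports results differently:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented layout: one row per result, with columns for date, marker name, and value.
results = pd.read_csv("blood_tests.csv", parse_dates=["date"])

# Plot each marker across the whole year, instead of eyeballing one printout against the last.
for marker, series in results.groupby("marker"):
    series.sort_values("date").plot(x="date", y="value", title=marker, legend=False)
    plt.show()
```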

What we can build this way is a world where policy is evidence based. Where we make data informed decisions, while understanding that the data isn’t perfect. Where kids are empowered to learn all of the skills they need to solve problems in their own communities. Where technological solutions are rationally evaluated, rather than uncritically worshipped. 

Imagine the world we could create.

Find out how you can support ADSEI with donations, volunteering, and more at adsei.org
