Raising Heretics

This is the text of my keynote from the NSW ICT Educators Conference in Sydney earlier this year.

I am Dr Linda McIver, founder and Executive Director of the Australian Data Science Education Institute (ADSEI), a registered charity dedicated to putting students in the driver’s seat of our data driven world.

Today I want to talk to you about the importance of heresy.

I’m going to take you through the place of heresy in science, as well as the phenomenon known as survivorship bias, and how these relate to the extraordinary claims being made in the field of AI, and then I’m going to talk about how I’m aiming to fix it all.
Much of our Science, Technology, Engineering, and Maths education starts from a foundation of facts and known answers. This teaches our kids that the point of STEM is to Get The Right Answer, whereas the actual point of real world STEM disciplines is to fix things, understand things, and solve problems. In this talk I will show why I founded the Australian Data Science Education Institute, why we’re dedicated to raising Heretics, and why Heresy is something we desperately need right now, both in the Data Science industry and the world as a whole.

First of all, let’s define our terms. Heresy is an opinion profoundly at odds with what is generally accepted. And heresy has been crucial to our scientific development.
Let’s talk about some historical scientific heresies:

  • In the 1840s, Ignaz Semmelweis came up with the radical heresy that doctors washing their hands before (and after) surgeries prevented disease. Prior to this, doctors went from autopsies to childbirth without washing their hands or even changing their clothes. And they wondered why people died. The idea that this could cause disease was considered so ludicrous that it took decades for the idea of washing hands to be accepted. By the way, Semmelweis was so ridiculed and pilloried that his colleagues committed him to an asylum, where he was beaten and died.
  • In a well known heresy, Galileo Galilei so outraged the church with the idea that the earth revolves around the sun, rather than the other way around, that he was accused of literal heresy and sentenced to house arrest. He only narrowly escaped death.
  • In 1912 Alfred Wegener began to publicly advocate the idea that the continents moved over time – what became known as continental drift. He, too, was widely ridiculed, and did not live to see his ideas finally vindicated.
  • More recently, Marshall and Warren’s original paper on ulcers being caused by bacteria rather than stress was rejected and consigned to the bottom 10% of submissions. Barry Marshall eventually drank Helicobacter pylori – the bacterium that causes ulcers – to prove it, thus inducing an ulcer which he then cured with antibiotics.

It seems like heresy is a pretty dangerous business!

In fact a lot of scientific breakthroughs have been considered heretical. Especially in medicine!

Now let me digress for a moment to tell the story of the WWII planes that were examined for bullet holes to work out where to armour them. Researchers figured that the places with the most holes – the wings and the fuselage – needed the most armour. They found no holes on engines or fuel tanks, so they figured those didn’t need armouring… until statistician Abraham Wald pointed out that the planes they were studying were the ones that made it back. The planes needed armour precisely where none of the returning planes had holes, because the planes that took hits in those places – the engines and the fuel tanks – were the ones that DIDN’T COME BACK.

I love this story, because it’s a classic example of the obvious conclusion being dead wrong. In a similar case, the introduction of helmets in WWI resulted in more head injuries being treated in the field hospitals, so the first reaction was “stop using helmets!”… What data was missing?

Both of these are examples of survivorship bias – where there is a chunk of data missing from the study. In these cases it’s literally survivor bias because it fails to take into account those who don’t make it back.
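
To see how easy this mistake is to make, here is a toy sketch in Python. The numbers are invented for illustration, not real WWII data: an analysis of only the returning planes shows no engine or fuel tank damage at all, even though the full fleet tells a very different story.

```python
# Toy illustration of survivorship bias (invented numbers, not real WWII data).
# Each plane records where it was hit; analysts only ever see the planes that returned.

all_planes = [
    {"hits": ["wings", "fuselage"], "returned": True},
    {"hits": ["fuselage"], "returned": True},
    {"hits": ["engine"], "returned": False},     # lost: never appears in the data
    {"hits": ["fuel_tank"], "returned": False},  # lost: never appears in the data
    {"hits": ["wings"], "returned": True},
]

observed = [plane for plane in all_planes if plane["returned"]]

def hit_counts(planes):
    """Count how many planes were hit in each location."""
    counts = {}
    for plane in planes:
        for location in plane["hits"]:
            counts[location] = counts.get(location, 0) + 1
    return counts

print("What the analysts see (survivors only):", hit_counts(observed))
print("Ground truth (the whole fleet):        ", hit_counts(all_planes))
# The survivor data shows zero engine and fuel tank hits - not because those
# spots are never hit, but because planes hit there don't come back.
```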

Have you heard of HireVue? They’re a Human Resources Tech company that uses artificial intelligence to select or reject candidates in job interviews based on… well, nobody actually knows.

They say it’s a machine, therefore it’s without bias. And we could laugh and snort, but over 100 companies are already using it, including big companies like Hilton and Unilever.

According to Nathan Mondragon, HireVue’s chief industrial-organizational psychologist, “Humans are inconsistent by nature. They inject their subjectivity into the evaluations. But AI can database what the human processes in an interview, without bias. … And humans are now believing in machine decisions over human feedback.”
Which is all kinds of disturbing, given that they make all sorts of claims for the system but can’t actually explain how it makes its decisions.

They say that the system employs “superhuman precision and impartiality to zero in on an ideal employee, picking up on telltale clues a recruiter might miss.”

Of course, HireVue won’t tell us how their algorithm works – in part to protect trade secrets, and in part because they don’t really know…

I am fairly confident I’m not the only one who finds that an incredibly disturbing idea.

Luke Stark, a researcher who studies emotion and AI at Microsoft, describes this as the “charisma of numbers”. If an algorithm assigns a number to a person, we can rank them objectively, right? Because numbers are objective. And simple. What could possibly go wrong, reducing a complex and multifaceted human being to a simple numerical rank? (Helllooo ATAR – the Australian Tertiary Admission Rank…)

I think Cathy O’Neil sums it up beautifully: Models are opinions embedded in mathematics, and algorithms are opinions formalized in code. It’s incredibly important that we dispel this pervasive myth that algorithms are unbiased, objective statements of truth.
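
A concrete way to see this: here is a deliberately simple, entirely hypothetical “hiring algorithm”. Nothing in it comes from HireVue or any real product; the point is that every weight is a human judgement call dressed up as mathematics.

```python
# A hypothetical candidate-scoring "algorithm". Every number below is a choice
# somebody made: the weights are opinions, formalized in code.

def candidate_score(years_experience: float, interview_rating: float, resume_gap_years: float) -> float:
    return (
        0.5 * years_experience    # why 0.5? someone decided experience matters this much
        + 0.4 * interview_rating  # ratings come from humans, with all their biases
        - 0.3 * resume_gap_years  # penalising career gaps quietly penalises carers
    )

print(round(candidate_score(years_experience=5, interview_rating=8, resume_gap_years=2), 2))  # 5.1
```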

This whole HireVue system is a textbook example of survivorship bias: looking only at the people who made it through the same hiring process that we are now calling fatally flawed… and thinking we can predict ideal new hires with only that data. It completely ignores the people who didn’t make it through the initial processes, who might have been amazing.

It also highlights an issue I’ve seen raised again and again in works like “Weapons of Math Destruction”, “Made by Humans,” and “Automating Inequality” – that people believe in numbers, computers, and algorithms over other people, even when they’ve been explicitly told those systems are broken. And I have a story about that, too.
Some years ago, Niels Wouters, a researcher at the University of Melbourne, designed a system called Biometric Mirror. It was a deliberately simple, utterly naive machine learning system that took a picture of the user’s face and then claimed to be able to tell a whole lot about the person, just from that picture.

The system spat out assessments of ethnicity, gender, sexual preference, attractiveness, trustworthiness, and so on. And Niels created the system to start a conversation with people about how transparently ludicrous it is to believe a system like this. So he set up booths where people would come, have a photo taken, and read all of this obviously false information about themselves, and then have a conversation about trust and ethics and the issues with Artificial Intelligence. So far so good. A noble goal. But there are two postscripts to this story that are horrifying in their implications.

First of all, Niels would overhear people walking away from the display, having had the conversation about how obviously false the “conclusions” drawn by the system were, saying “But it’s a computer, it must be right, and it doesn’t think I’m attractive…”

And secondly, after speaking publicly about all of the issues with Biometric Mirror, Niels was contacted by HR companies wanting to buy it…

So here is where we start to make the connection between education and the tech industry.

One of the problems in Data Science is that we often don’t have a lot of time to challenge even our own results, never mind anyone else’s. The rush to data riches (Step 3, Profit!) means we don’t really have time to be cautiously sceptical. We get a result, report it, and move on to the next dataset. And people are all too willing to believe in those results.

When I asked a group of data scientists if they had ever had to release or report on a result that they felt hadn’t been fully tested, that they couldn’t bet their lives on, around half put their hands up. And then when I asked how many hands would have gone up if the question had been anonymous, the other half put up their hands.

So all of the discoveries I mentioned in the first half of this talk were made by people being sceptical. Challenging the status quo. Questioning accepted wisdom. By people who were quite prepared to examine new evidence and consider that “what everybody knows” might be wrong. Of course, we need educated heretics, so that our scepticism is rational and fact based, rather than denialism and wishful thinking, which is what we are seeing quite a lot of now. So education is clearly key.

But let’s consider STEM education. We mostly teach Science, Technology, and Maths in schools as a matter of facts and known outcomes (and, yes, I know there’s one more letter in STEM, but we rarely, if ever, actually teach any Engineering).

Consider the average school Chemistry experiment. We take known substances, apply a known process, and achieve an expected outcome. What do kids who don’t get the results they expect do then? Do they go back and try to find a reason for their results? Do they ask questions and challenge outcomes?

Nope. They don’t have time for that, and they get no credit for it. They copy their friends’ results. Or they simply adjust the results to get the outcome they expected. Marks are allocated for the expected results. For the right graph.

Occasionally we’ll run a prac with unknown reagents and ask the students to identify the inputs. But here, again, marks are for the correct answer.

But this isn’t science education. This is an education in confirmation bias. In finding what you are supposed to find. In seeing what you expect to see. It is the exact opposite of the way science should work. Science should be about disproving theories. And you only accept a theory as plausible when you have tried your hardest to disprove it, and failed.

Maths is much the same. The emphasis is on correct answers and known outcomes. On cookie cutter processes that produce the same result every time.

Technology education is often even worse. With a severe shortage of teachers with programming skills, we tend to default to education using toys. Drawing pretty pictures. Making robots follow lines. Writing the same code. Producing the same output.

What if we could teach with experiments where we don’t know the answers?
Well, with data, we can easily do that. Can we find a dataset that hasn’t been fully analysed and thoroughly understood? I could probably hit a dozen with a bread roll from where I’m standing.

How do you mark it, then, when you don’t know the right answer? You mark the process. You mark the testing. You ask the students to test and challenge their answers really thoroughly. You give points for their explanation of how they know their answer is right, for how they confirmed it by trying their hardest to prove it wrong.

It has been said, most famously by Grace Hopper, that the most dangerous phrase in the English language is “we’ve always done it that way”. Now, more than ever, we need people who challenge the status quo, who come up with new ideas, who are prepared to be heretical.

By teaching Data Science in the context of real projects, where the outcome isn’t known, we can actually teach kids to challenge their own thinking and their own results. We can teach them to think critically and analyse the information they’re presented with. We can teach them to demand higher standards of validation and reproducibility.

The trouble with this is that it requires a significant amount of setup work. Finding the datasets isn’t hard, but making sense of them can be really challenging. For example, when I downloaded a voting dataset from the AEC (the Australian Electoral Commission) and tried to find someone who could explain to me how the two dimensional senate ballot paper translated to a one dimensional data string, I literally couldn’t find anyone at the AEC who knew. I mean… presumably there is someone! But I couldn’t find them. It took me hours and hours to make sense of the dataset and design a project that would engage the kids, and give them room to stretch their wings and really fly.
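
For the curious, the flattening works something like the sketch below. To be clear: the column order and the sample string here are invented for illustration – the real AEC layout is exactly what took me hours to pin down.

```python
# A simplified sketch of flattening a 2D senate ballot into a 1D string.
# The column order and the sample string are INVENTED for illustration;
# they are not the AEC's actual format.

ballot_columns = ["A (group)", "B (group)", "A:Smith", "A:Jones", "B:Wong", "B:Patel"]
preference_string = "1,2,,,,"  # this voter numbered group A first and group B second

preferences = {
    column: int(value)
    for column, value in zip(ballot_columns, preference_string.split(","))
    if value  # empty fields are boxes the voter left blank
}
print(preferences)  # {'A (group)': 1, 'B (group)': 2}
```

The whole job is knowing the fixed order in which the boxes on the paper were read off into that string – which is the part nobody could tell me.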

The only reason I was able to commit that kind of time is that I was only teaching part time, so I used my own time to build these engaging projects. In year 10 we did projects on climate, on elections, on microbats. In year 11 we worked with scientists to solve their computational and data needs, in fields like marine biology, conservation ecology, neuroscience, astrophysics and psychology. The possibilities are truly endless.

But a teacher with a full time load doesn’t have the capacity to take on that kind of extra work. It’s just too time consuming, even if they have the skills to start with.

So that’s why I created the Australian Data Science Education Institute, or ADSEI. To develop project ideas and lesson plans that empower kids to explore data and become rational sceptics. To develop their data literacy, critical thinking, and technical skills in the context of projects they really care about. And also to provide professional development training to teachers right across the curriculum – not just in Digital Technologies – to integrate real data science into their teaching. To use data to make sense of the world.

At ADSEI we have created projects where kids use real datasets to explore the world. To solve problems in their own environments and communities, and most importantly: to measure and evaluate their solutions to see if they worked. We’ve got projects that do things like:

  • calculate how much carbon is embodied in the trees on their school grounds and then compare it with the school’s carbon emissions from electricity (a rough sketch of this calculation follows the list)
  • construct a set of criteria for good science journalism and then evaluate a bunch of different sources according to those criteria and visualise the results
  • analyse the litter on the school grounds, find ways to fix it, and then analyse it again to see if they worked
  • record and analyse the advertising they see around them in a week and explore its impact on their behaviour
  • use solar energy production & power usage data to explore a household’s impact on the environment
  • use the happiness index data to explore world differences in measures like income inequality and social support
  • use data from scientific observational studies to learn about whales, turtles, climate, and more
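
As promised, here is a rough sketch of the tree carbon calculation in Python. The formula and constants are simplified placeholders of my own; a real class would research species-appropriate allometric equations and local emissions factors.

```python
# A very rough sketch of the tree carbon project. The formula and constants are
# simplified PLACEHOLDERS, not a published allometric model.

trees = [  # hypothetical measurements from a walk around the school grounds
    {"circumference_cm": 120, "height_m": 10},
    {"circumference_cm": 200, "height_m": 15},
    {"circumference_cm": 80, "height_m": 7},
]

def estimated_carbon_kg(circumference_cm: float, height_m: float) -> float:
    # Placeholder estimate: treat the trunk as a cylinder, assume a wood density
    # of ~500 kg per cubic metre, and that roughly half of dry biomass is carbon.
    diameter_m = circumference_cm / 100 / 3.1416
    trunk_volume_m3 = 3.1416 * (diameter_m / 2) ** 2 * height_m
    dry_biomass_kg = trunk_volume_m3 * 500
    return dry_biomass_kg * 0.5

total = sum(estimated_carbon_kg(t["circumference_cm"], t["height_m"]) for t in trees)
print(f"Estimated carbon stored in {len(trees)} trees: {total:.0f} kg")
# Students can then compare this against the school's electricity emissions -
# and, in the spirit of this talk, interrogate every assumption above.
```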

When I was teaching Computer Science at John Monash Science School in Melbourne – a school for kids who are passionate about science, who you might be forgiven for assuming were already engaged with tech – we started by teaching them with toys. We had them draw pretty pictures, and program robots to push each other out of circles. And the number one piece of feedback we got was “This isn’t relevant to me. Why are you making me do this? I’m never going to use it.”

When we shifted to teaching the same coding skills – variables, if statements, for loops, etc. – in the context of Data Science, using real datasets and authentic problems, that feedback disappeared. Instead we heard “This is so important. This is so useful. I’m using this in my other subjects.” And the number one thing I live to hear when teaching tech: “I didn’t know I could do this!”
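
To give a flavour of what that shift looked like, here is a sketch of the kind of exercise involved. The file name and column names are invented; any dataset the class actually cares about will do.

```python
# The same beginner constructs - variables, a for loop, an if statement -
# taught against data instead of toys. "daily_temperatures.csv" and its
# columns are hypothetical stand-ins for whatever dataset the class is using.

import csv

hottest_day = None
hottest_temp = float("-inf")

with open("daily_temperatures.csv") as f:
    for row in csv.DictReader(f):        # a for loop over real observations
        temp = float(row["max_temp_c"])  # a variable holding real data
        if temp > hottest_temp:          # an if statement answering a real question
            hottest_day, hottest_temp = row["date"], temp

print(f"Hottest day in the dataset: {hottest_day} ({hottest_temp} degrees)")
```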

So not only does teaching tech skills in the context of data science teach the kids that STEM skills empower them to solve problems and find out more about their own world, it gives them the motivation to succeed. To actually learn the skills and put them to good use.

And make no mistake, motivation is the single most important factor in learning.

So Data Science empowers students to learn technical and other STEM skills in the context of real problems. It gives them the capacity to create positive change in their own communities – and to prove that they have. It teaches them to communicate their results.

And most importantly, it teaches them that this is something they can all do.

And that point is crucial, because at the moment we have hordes of students – even at a high performing STEM school like John Monash – believing that tech is not something they can do. Not something that interests them. Not something that’s relevant to them.

Which means that we are continuing to get the same kinds of people choosing to go into tech who have been choosing it for decades now. We are actively perpetuating the stereotypes, because those stereotypes are now so strong that everyone believes that only those types of people should or can go into tech.

One of my friends who works in data science recently met someone who, on learning her occupation, literally said to her: “You work in tech. So, are you on the spectrum?”
Because if I ask you to picture a computer scientist, or a data scientist, chances are you will imagine a young white male who is on the spectrum.

Current figures suggest that women make up as little as 15% of the Data Science industry.
And it’s this lack of diversity in the tech industry that leads to systems like the HireVue AI – because there are not enough voices in the room prepared to say things like “Um, have we really thought this through?” or “What are the ethical issues with doing that?”

It also leads to tech solutions that work beautifully for the types of people represented on the development team, but that have serious limitations for everyone else.

And lest you think that women simply aren’t cut out for tech, and there isn’t actually any bias in the field, allow me to remind you of the 2016 study of open source code on GitHub, which found that code submitted by a woman was more likely to be accepted than code submitted by a man – but only if the woman’s gender was not identifiable from her GitHub ID.

ADSEI’s work isn’t going to turn every student into a data scientist. But it will give kids the option of being data scientists, who wouldn’t have had it otherwise. Because they will understand the power of data science, and they will know that it’s something they can do. And that is phenomenally empowering.

Measuring with Added Data Science – Primary School Lesson

You can add a little Data Science into any lesson, but Measurement in Primary School is just crying out for a little added Data Science. And when I say Added Data Science, I really mean added critical thinking and scepticism. Here is a Grade 6 lesson that I just trialled at Gillen Primary School in Alice Springs, where we took a basic measurement lesson on height and injected some cool data concepts. This lesson might be worth splitting over two lesson times, depending on how the discussion goes.

The goal here is to be asking questions and evaluating what you’re doing at every step.

  1. Pick two students that are very different heights, and have them stand at opposite corners of the room. Have the kids guess who is taller.
  2. Now pick two students that are very close in height, and do the same thing. Have the two students stand back to back and work out who is actually taller. Now ask the kids: which was easier to guess? Why?
  3. Class discussion: what does it mean to “estimate” a value? What’s the difference between an estimate and a guess? If an estimate is an educated guess, what factors did you use to “educate” your estimate of who was taller? (One student today said that the taller person came further up the board than the shorter person, which was a great way of using comparisons to inform your estimate!)
  4. Have your students make a list of the people in their class who are here today and rank them by height, without talking to each other or comparing answers.
  5. Class discussion: Did you all rank every person the same? Which positions were easiest to rank? Often the tallest and shortest students are really easy to rank, but sometimes there are a few students very close in height that make it difficult. The middle positions tend to be the hardest, and you can have some discussion about why this is.
  6. Ask the class who is the tallest student. Take one answer and then ask if there are any different answers, until you have the set. Then do the same for shortest. You can do some back-to-back measuring at this point to settle these questions.
  7. Ask the class why their answers might be different, and discuss how estimates are not exact.
  8. Now get the class to stand up and sort themselves into height order. You might want to get the tallest and shortest up first, and then gradually fill in the middle one or two students at a time, to avoid chaos.
  9. Class Discussion: How much easier was it to do in person than try to compare them in your head? What made it easier?
  10. Now for the measurement! Put the class into groups of 3-5. Each group picks one person to measure, and every other person in the group should measure that person and write down their height, without telling the other members of their group what height they got. 
  11. Groups compare their results and see how similar they were. Each group should record the range of their measurements. So a group that recorded measurements of 143, 145, and 146 would record a range of 3, because the lowest value was 143 and the highest was 146 (146 - 143 = 3).
  12. Come back together as a class. Class Discussion: How accurate do you think your measurements were?
  13. Class Discussion: Did every student use the same measuring technique? What were some different ones people used?
  14. Class Discussion: How big was the biggest difference between measurements? What factors made the measurements hard? We heard things like:
    1. The person we were measuring was taller than us.
    2. The person was taller than the tape measure (at this point you can explore strategies for solving this problem! Eg measuring against the wall, marking where the tape measure stops, and putting the tape measure above that mark to measure the remaining length, or measuring them lying down on the floor).
    3. It was hard to hold the tape measure straight.
    4. It was hard to hold the tape measure still.
    5. It was hard to read off the exact value because of the distance between the tape measure and the actual top of the person’s head.
    6. The actual measuring part of the tape measure starts a few centimetres in from the start of the tape, so getting it exactly in the right spot on the floor is hard!
  15. As a class, brainstorm techniques for making the measurements more accurate.
  16. To wrap up the class, ask them again how accurate they thought their measurements were, and then ask whether they think they were accurate enough. Think of several scenarios where you might need to measure height, and ask how accurate each needs to be. The goal here is to consider that data is rarely completely accurate, but it can still be accurate enough. Eg.
    1. Measuring the length of bed someone needs. Because beds come in fixed sizes you only need to know which range the person fits into.
    2. Measuring whether someone will fit through the doorway. As you are very unlikely to have primary school kids who won’t fit through your doorway, it’s reasonable to think they don’t need to be very accurate! “Are you less than <however tall your doorway is>?” can usually be estimated rather than measured! Consider whether they might know someone for whom this would not be sufficient – eg a professional basketballer.
    3. Measuring whether a cape would fit
    4. Pilots in some aircraft have to be under a certain height to fit in a cockpit
    5. Sailors in a submarine (because the ceilings are low)
    6. What others can you think of?

There are many more questions you can explore using this lesson, and many more types of inaccuracies you could consider. As always, these steps are a starting point, and some points to ponder. You can use a subset of the steps, or expand on them.

If you modify the lesson it would be wonderful if you could share it back by emailing it to contact@adsei.org so that other teachers can learn from your approach.

Primary School Data Science Template

People often assume that Data Science in Schools has to be secondary school only, because how could primary kids do Data Science? The truth is that Data Literacy and Analysis skills can be built into the curriculum from as young as 5 years old. And it’s really important that kids learn Data and Tech skills early, because by the time they get to secondary school we’ve already lost a lot of them, believing that these skills are too hard, not relevant to them, or just not interesting. We need to show them early on that Data Science is a useful tool that they are more than capable of mastering.

So how can primary kids do data science? Like any other data science project, it’s crucial to put it in context, so the kids can see the point.

So Step One is: Find a problem the kids care about

It might be litter in the playground, traffic at pickup time (or, to put it in a way kids will really relate to – how long they have to wait to be picked up, or how far they have to walk to the car!), or access to play equipment.

Step Two: Measure the problem

Count and identify the litter, time how long people have to wait to be picked up, measure how far people have to walk to the car, or count the number of people who get to use the monkey bars every lunchtime for a week.

Step Three: Analyse the measurements

For younger kids, that might simply mean sorting the rubbish into categories (eg chip packets, icy pole wrappers from the canteen, and sandwich bags or cling wrap from home), or organising the drop off or play equipment measurements by year level or by day. For older kids you might enter it into a spreadsheet and use a formula to calculate some averages over the week or by area or year level.
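
For the older kids, that analysis step might look something like this sketch in Python rather than a spreadsheet – the tallies below are invented examples.

```python
# Hypothetical litter tallies: items collected per day over a week, by category.
litter_counts = {
    "chip packets": [12, 9, 15, 11, 8],
    "icy pole wrappers": [5, 7, 6, 4, 9],
    "cling wrap": [3, 2, 4, 3, 1],
}

for category, daily in litter_counts.items():
    average = sum(daily) / len(daily)
    print(f"{category}: {average:.1f} items per day on average")
```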

Step Four: Communicate your results

This is where you graph or visualise your results. For the littlies they can “graph” the results by stacking up blocks to represent the different categories. Green blocks for chip packets, blue ones for icy pole wrappers, etc. This is a great, tangible exercise in data representation. Older kids can draw graphs or do them in a spreadsheet like Excel or Google Sheets. It helps to get them to draw pictures and labels on their graphs to make them more interesting and compelling.
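
And if the older kids are coding rather than using a spreadsheet, a minimal chart of the averages from the Step Three sketch might look like this:

```python
# A minimal bar chart for Step Four, using the weekly averages computed above.
import matplotlib.pyplot as plt

categories = ["chip packets", "icy pole wrappers", "cling wrap"]
averages = [11.0, 6.2, 2.6]  # from the Step Three sketch

plt.bar(categories, averages)
plt.ylabel("Average items per day")
plt.title("Playground litter by category")
plt.show()
```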

Step Five: Propose a solution

Think of a way you might solve the problem. For litter the kids might come up with nude food day campaigns, or a change to the way food is available in the canteen – such as buying larger chip packets and handing out the chips in small paper bags, instead of lots of small plastic packets. For traffic it might be that pickup times can be staggered by year levels, or older kids might be encouraged to walk further and be picked up a block or two away.

Step Six: Implement your solution

This can be a whole school initiative, and involves a lot of communication, using the graphs from Step Four to tell the community what’s happening and why.

Step Seven: Measure again to see how well it worked

This is my favourite step, often sadly missing from political initiatives. Once you’ve tried to fix something, you need to measure it again to see if you actually made any difference.

You can even repeat Steps Three to Seven with several different solutions to compare which ones work better.

I love this template because it is the essence of STEM – it’s a science experiment, devised by the kids, with rigorous measurement and evaluation. Maths and Technology are used in handling the data, and you can use Engineering to design your solution, or even to measure the problem if you’re looking at environmental conditions like heat, noise, or water and want to use some sensors.

You can scale the technology use up or down depending on available resources and where your students are up to. There are no robots with parts to fail. And the best part is that the motivation is built in. The kids are learning that STEM and Data Science are tools you can use to solve real problems in your community. They’re not just a bit of fun that’s not relevant to their futures.

ADSEI is developing more projects like these over the next year, as well as building a network of teachers interested in sharing their ideas and supporting each other to introduce integrated STEM and Data Science in the classroom. Jump onto the mailing list to stay in touch, and feel free to share your own ideas in the comments on this post!