Raising Heretics

This is the text of my keynote from the NSW ICT Educators Conference in Sydney earlier this year.

I am Dr Linda McIver, founder and Executive Director of the Australian Data Science Education Institute (ADSEI), a registered charity dedicated to putting students in the driver’s seat of our data-driven world.

Today I want to talk to you about the importance of heresy.

I’m going to take you through the place of heresy in science, as well as the phenomenon known as survivorship bias, and how these relate to the extraordinary claims being made in the field of AI, and then I’m going to talk about how I’m aiming to fix it all.

Much of our Science, Technology, Engineering, and Maths education starts from a foundation of facts and known answers. This teaches our kids that the point of STEM is to Get The Right Answer, whereas the actual point of real world STEM disciplines is to fix things, understand things, and solve problems. In this talk I will show why I founded the Australian Data Science Education Institute, why we’re dedicated to raising Heretics, and why Heresy is something we desperately need right now, both in the Data Science industry and the world as a whole.

First of all, let’s define our terms. Heresy is an opinion profoundly at odds with what is generally accepted. And heresy has been crucial to our scientific development.
Let’s talk about some historical scientific heresies:

  • In the 1840s, Ignaz Semmelweis came up with the radical heresy that doctors washing their hands before (and after) surgeries prevented disease. Prior to this, doctors went from autopsies to childbirth without washing their hands or even changing their clothes. And they wondered why people died. The idea that this could cause disease was considered so ludicrous that it took decades for hand washing to be accepted. By the way, Semmelweis was so ridiculed and pilloried that his colleagues committed him to an asylum, where he was beaten and died.
  • In a well known heresy, Galileo Galilei so outraged the church with the idea that the earth revolves around the sun, rather than the other way around, that he was tried for literal heresy and sentenced to house arrest. He only narrowly escaped death.
  • In 1912 Alfred Wegener began to publicly advocate the idea that the continents moved over time – what became known as continental drift. He, too, was widely ridiculed, and did not live to see his ideas finally vindicated.
  • More recently, Marshall and Warren’s original paper on ulcers being caused by bacteria rather than stress was rejected and consigned to the bottom 10% of submissions. Barry Marshall eventually drank Helicobacter pylori – the bacterium that causes ulcers – to prove it, thus inducing an ulcer which he then cured with antibiotics.

It seems like heresy is a pretty dangerous business!

In fact a lot of scientific breakthroughs have been considered heretical. Especially in medicine!

Now let me digress for a moment to tell the story of the WWII planes that were examined for bullet holes to work out where to armour them. Researchers figured that the places with the most holes – the wings and the fuselage – needed the most armour. They found no holes on engines or fuel tanks so they figured they didn’t need armouring… until statistician Abraham Wald pointed out that the planes they were studying were the ones that made it back. The planes needed armour where none of the planes had holes, because clearly the planes that had holes in those other places (the engine and the fuel tanks, btw) were the ones that DIDN’T COME BACK.

I love this story, because it’s a classic example of the obvious conclusion being dead wrong. In a similar vein, the introduction of helmets in WWI resulted in more head injuries being treated in the field hospitals, so the first reaction was “stop using helmets!”… What data was missing?

Both of these are examples of survivorship bias – where a whole chunk of data is missing from the study. In these cases it’s literally survivor bias, because the analysis fails to take into account those who don’t make it back.
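To make the statistics concrete, here’s a tiny Python simulation – a sketch with entirely made-up hit and lethality numbers, purely for illustration – showing how counting holes only on the planes that come back flips the conclusion:

```python
import random

# Purely illustrative parameters -- not real WWII data.
SECTIONS = ["wings", "fuselage", "engine", "fuel_tank"]
# Probability that a single hit to this section downs the plane (invented).
LETHALITY = {"wings": 0.05, "fuselage": 0.05, "engine": 0.8, "fuel_tank": 0.8}

def fly_mission():
    """Return (hits_by_section, survived) for one plane."""
    hits = {s: 0 for s in SECTIONS}
    survived = True
    for _ in range(random.randint(0, 6)):   # 0-6 hits, landing anywhere
        s = random.choice(SECTIONS)
        hits[s] += 1
        if random.random() < LETHALITY[s]:
            survived = False
    return hits, survived

random.seed(1)
returned = {s: 0 for s in SECTIONS}
whole_fleet = {s: 0 for s in SECTIONS}
for _ in range(100_000):
    hits, survived = fly_mission()
    for s in SECTIONS:
        whole_fleet[s] += hits[s]
        if survived:
            returned[s] += hits[s]

# Holes visible on returning planes (what Wald's colleagues counted)
# versus holes across the whole fleet (the data that was missing).
print(f"{'section':<12}{'returned':>10}{'whole fleet':>13}")
for s in SECTIONS:
    print(f"{s:<12}{returned[s]:>10}{whole_fleet[s]:>13}")
```

Run it and the engine and fuel tank rows look almost hole-free among the returners, even though the whole fleet was hit evenly – exactly Wald’s point.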

Have you heard of HireVue? They’re a Human Resources Tech company that uses artificial intelligence to select or reject candidates in job interviews based on… well, nobody actually knows.

They say it’s a machine, therefore it’s without bias. And we could laugh and snort, but over 100 companies are already using it, including big companies like Hilton and Unilever.

According to Nathan Mondragon, HireVue’s chief industrial-organizational psychologist, “Humans are inconsistent by nature. They inject their subjectivity into the evaluations. But AI can database what the human processes in an interview, without bias. … And humans are now believing in machine decisions over human feedback.”
Which is all kinds of disturbing, given that they make sweeping claims for the system but can’t actually explain how it makes its decisions.

They say that the system employs “superhuman precision and impartiality to zero in on an ideal employee, picking up on telltale clues a recruiter might miss.”

Of course, HireVue won’t tell us how their algorithm works – in part to protect trade secrets, and in part because they don’t really know…

I am fairly confident I’m not the only one who finds that an incredibly disturbing idea.

Luke Stark, a researcher who studies emotion and AI at Microsoft, describes this as the “charisma of numbers”. If an algorithm assigns a number to a person, we can rank them objectively, right? Because numbers are objective. And simple. What could possibly go wrong, reducing a complex and multifaceted human being to a simple numerical rank? (Helllooo ATAR – the Australian Tertiary Admission Rank…)

I think Cathy O’Neil sums it up beautifully: Models are opinions embedded in mathematics, and algorithms are opinions formalized in code. It’s incredibly important that we dispel this pervasive myth that algorithms are unbiased, objective statements of truth.

This whole HireVue system is a textbook example of survivorship bias: looking only at the people who made it through the same hiring process that we are now calling fatally flawed… and thinking we can predict ideal new hires with only that data. It completely ignores the people who didn’t make it through the initial processes, who might have been amazing.
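Here’s a minimal sketch of why that fails, using a toy candidate pool with a single invented attribute the old screen was biased against (none of this is HireVue’s actual pipeline – it’s purely illustrative). If your training data is only the people a biased process let through, any model fitted to it learns that bias:

```python
import random

random.seed(0)

# Toy candidate pool: "skill" drives real job performance; "accent" is an
# irrelevant attribute the legacy process happened to be biased against.
candidates = [
    {"skill": random.random(), "accent": random.random() < 0.3}
    for _ in range(10_000)
]

def old_screen(candidate):
    """The legacy (biased) hiring process: silently penalises accents."""
    score = candidate["skill"] - (0.4 if candidate["accent"] else 0.0)
    return score > 0.5

# The only data available to train an 'ideal employee' model is the people
# the old screen hired. The rejects -- some of them brilliant -- are missing.
training_set = [c for c in candidates if old_screen(c)]

def accent_rate(group):
    return sum(c["accent"] for c in group) / len(group)

print(f"accented speakers in the population:   {accent_rate(candidates):.0%}")
print(f"accented speakers in the training set: {accent_rate(training_set):.0%}")
# Any model fitted to training_set 'learns' that ideal employees rarely
# have accents: the old bias, laundered through an algorithm.
```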

It also highlights an issue I’ve seen raised again and again in works like “Weapons of Math Destruction”, “Made by Humans,” and “Automating Inequality” – that people believe in numbers, computers, and algorithms over other people, even when they’ve been explicitly told those systems are broken. And I have a story about that, too.

A few years ago Niels Wouters, a researcher at the University of Melbourne, designed a system called Biometric Mirror. It was a deliberately simple, utterly naive machine learning system that took a picture of the user’s face and then claimed to be able to tell a whole lot about the person, just from that picture.

The system spat out ratings of ethnicity, gender, sexual preference, attractiveness, trustworthiness, and so on. Niels created the system to start a conversation with people about how transparently ludicrous it is to believe a system that does this. So he set up booths where people would come, have a photo taken, read all of this obviously false information about themselves, and then have a conversation about trust, ethics, and the issues with Artificial Intelligence. So far so good. A noble goal. But there are two postscripts to this story that are horrifying in their implications.

First of all, Niels would overhear people walking away from the display, having had the conversation about how obviously false the “conclusions” drawn by the system were, saying “But it’s a computer, it must be right, and it doesn’t think I’m attractive…”

And secondly, after speaking publicly about all of the issues with Biometric Mirror, Niels was contacted by HR companies wanting to buy it…

So here is where we start to make the connection between education and the tech industry.

One of the problems in Data Science is that we often don’t have a lot of time to challenge even our own results, never mind anyone else’s. The rush to data riches (Step 3, Profit!) means we don’t really have time to be cautiously sceptical. We get a result, report it, and move on to the next dataset. And people are all too willing to believe in those results.

When I asked a group of data scientists if they had ever had to release or report on a result that they felt hadn’t been fully tested – that they couldn’t bet their lives on – around half put their hands up. And then when I asked how many hands would have gone up if the question had been anonymous, the other half put up their hands.

So all of the discoveries I mentioned in the first half of this talk were made by people being sceptical. Challenging the status quo. Questioning accepted wisdom. By people who were quite prepared to examine new evidence and consider that “what everybody knows” might be wrong. Of course, we need educated heretics, so that our scepticism is rational and fact based, rather than denialism and wishful thinking, which is what we are seeing quite a lot of now. So education is clearly key.

But let’s consider STEM education. We mostly teach Science, Technology, and Maths in schools as a matter of facts and known outcomes (and, yes, I know there’s one more letter in STEM, but we rarely, if ever, actually teach any Engineering).

Consider the average school Chemistry experiment. We take known substances, apply a known process, and achieve an expected outcome. What do kids who don’t get the results they expect do then? Do they go back and try to find a reason for their results? Do they ask questions and challenge outcomes?

Nope. They don’t have time for that, and they get no credit for it. They copy their friends’ results. Or they simply adjust the results to get the outcome they expected. Marks are allocated for the expected results. For the right graph.

Occasionally we’ll run a prac with unknown reagents and ask the students to identify the inputs. But here, again, marks are for the correct answer.

But this isn’t science education. This is an education in confirmation bias. In finding what you are supposed to find. In seeing what you expect to see. It is the exact opposite of the way science should work. Science should be about disproving theories. And you only accept a theory as plausible when you have tried your hardest to disprove it, and failed.

Maths is much the same. The emphasis is on correct answers and known outcomes. On cookie cutter processes that produce the same result every time.

Technology education is often even worse. With a severe shortage of teachers with programming skills, we tend to default to education using toys. Drawing pretty pictures. Making robots follow lines. Writing the same code. Producing the same output.

What if we could teach with experiments where we don’t know the answers?
Well, with data, we can easily do that. Can we find a dataset that hasn’t been fully analysed and thoroughly understood? I could probably hit a dozen with a bread roll from where I’m standing.

How do you mark it, then, when you don’t know the right answer? You mark the process. You mark the testing. You ask the students to test and challenge their answers really thoroughly. You give points for their explanation of how they know their answer is right, for how they confirmed it by trying their hardest to prove it wrong.
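To make that concrete, here’s a small sketch of one way students can try to prove themselves wrong: a permutation test, with invented plant-growth measurements. If shuffling the group labels produces a gap as big as the one observed reasonably often, the “discovery” doesn’t survive the challenge:

```python
import random

# Invented example: students believe Group A's plants grew taller than B's.
group_a = [12.1, 13.4, 11.8, 14.0, 12.9]   # heights in cm (made up)
group_b = [11.5, 12.0, 10.9, 12.2, 11.7]

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Try hard to prove it wrong: if the labels were meaningless, how often
# would a random split of the same plants show a gap at least this big?
pooled = group_a + group_b
random.seed(42)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        extreme += 1

print(f"observed gap: {observed:.2f} cm")
print(f"chance of a gap this big from shuffled labels: {extreme / trials:.3f}")
```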

It has been said, most famously by Grace Hopper, that the most dangerous phrase in the English language is “we’ve always done it that way”. Now, more than ever, we need people who challenge the status quo, who come up with new ideas, who are prepared to be heretical.

By teaching Data Science in the context of real projects, where the outcome isn’t known, we can actually teach kids to challenge their own thinking and their own results. We can teach them to think critically and analyse the information they’re presented with. We can teach them to demand higher standards of validation and reproducibility.

The trouble with this is that it requires a significant amount of setup work. Finding the datasets isn’t hard, but making sense of them can be really challenging. For example, when I downloaded a Senate voting dataset from the AEC (the Australian Electoral Commission) and tried to find someone who could explain how the two dimensional Senate ballot paper translated into a one dimensional data string, I literally couldn’t find anyone at the AEC who knew. I mean… presumably there is someone! But I couldn’t find them. It took me hours and hours to make sense of the dataset and design a project that would engage the kids, and give them room to stretch their wings and really fly.
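For the curious, here’s a hedged sketch of what unpacking one of those strings can look like. The layout is an assumption on my part, not something quoted from the AEC: one comma separated value per ballot square, with all the above-the-line group boxes first, then every below-the-line candidate box, blank where the voter wrote nothing. The group and candidate names are hypothetical.

```python
# ASSUMED format (hypothetical names, illustrative only): one value per
# ballot square -- above-the-line group boxes first, then every
# below-the-line candidate box -- blank where the square is unmarked.
GROUPS = ["A", "B", "C"]
CANDIDATES = ["Smith", "Jones", "Lee", "Patel", "Kaur", "Wong"]

def numbered(squares):
    """Keep only the squares the voter actually wrote a number in."""
    return {name: int(v) for name, v in squares.items() if v.strip().isdigit()}

def parse_ballot(pref_string):
    """Map the flat one dimensional string back onto the 2-D ballot paper."""
    cells = pref_string.split(",")
    above = dict(zip(GROUPS, cells[:len(GROUPS)]))
    below = dict(zip(CANDIDATES, cells[len(GROUPS):]))
    return numbered(above), numbered(below)

above, below = parse_ballot("2,1,3,,,,,,")   # an above-the-line-only vote
print(above)   # {'A': 2, 'B': 1, 'C': 3}
print(below)   # {}
```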

The only reason I was able to commit that kind of time is that I was only teaching part time, so I used my own time to build these engaging projects. In year 10 we did projects on climate, on elections, on microbats. In year 11 we worked with scientists to solve their computational and data needs, in fields like marine biology, conservation ecology, neuroscience, astrophysics and psychology. The possibilities are truly endless.

But a teacher with a full time load doesn’t have the capacity to take on that kind of extra work. It’s just too time consuming, even if they have the skills to start with.

So that’s why I created the Australian Data Science Education Institute, or ADSEI. To develop project ideas and lesson plans that empower kids to explore data and become rational sceptics. To develop their data literacy, critical thinking, and technical skills in the context of projects they really care about. And also to provide professional development training to teachers right across the curriculum – not just in Digital Technologies – to integrate real data science into their teaching. To use data to make sense of the world.

At ADSEI we have created projects where kids use real datasets to explore the world. To solve problems in their own environments and communities, and most importantly: to measure and evaluate their solutions to see if they worked. We’ve got projects that do things like:

  • calculate how much carbon is embodied in the trees on their school grounds and then compare it with the school’s carbon emissions from electricity (a sketch of this calculation appears after this list)
  • construct a set of criteria for good science journalism and then evaluate a bunch of different sources according to those criteria and visualise the results
  • analyse the litter on the school grounds, find ways to fix it, and then analyse it again to see if they worked
  • record and analyse the advertising they see around them in a week and explore its impact on their behaviour
  • use solar energy production & power usage data to explore a household’s impact on the environment
  • use the happiness index data to explore world differences in measures like income inequality and social support
  • use data from scientific observational studies to learn about whales, turtles, climate, and more
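To give a flavour of the first project on that list, here’s a minimal sketch of the tree carbon calculation. The allometric coefficients below are invented placeholders – real projects would use published, species specific equations – but the carbon fraction of biomass and the CO2-to-carbon ratio are standard:

```python
# Invented allometric coefficients -- real projects use published,
# species-specific equations of the same general shape: biomass = a * DBH^b.
A, B = 0.2, 2.4
CARBON_FRACTION = 0.5     # roughly half of a tree's dry biomass is carbon
CO2_PER_CARBON = 44 / 12  # kg of CO2 represented by each kg of carbon

trees_dbh_cm = [32.0, 18.5, 45.2, 27.3]   # measured trunk diameters (made up)

def tree_co2_kg(dbh_cm):
    """Estimate the CO2 embodied in one tree from its trunk diameter."""
    biomass_kg = A * dbh_cm ** B
    return biomass_kg * CARBON_FRACTION * CO2_PER_CARBON

total = sum(tree_co2_kg(d) for d in trees_dbh_cm)
print(f"CO2 embodied in {len(trees_dbh_cm)} trees: {total:,.0f} kg")
# Students can then compare this figure with the school's electricity
# emissions (kWh used multiplied by their state's grid emissions factor).
```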

When I was teaching Computer Science at John Monash Science School in Melbourne – a school for kids who are passionate about science, who you might be forgiven for assuming were already engaged with tech – we started by teaching them with toys. We had them draw pretty pictures, and program robots to push each other out of circles. And the number one piece of feedback we got was “This isn’t relevant to me. Why are you making me do this? I’m never going to use it.”

When we shifted to teaching the same coding skills – variables, if statements, for loops, etc – in the context of Data Science, using real datasets and authentic problems, that feedback disappeared. Instead we heard “This is so important. This is so useful. I’m using this in my other subjects.” And the number one thing I live to hear when teaching tech: “I didn’t know I could do this!”
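To show what that shift looks like in practice, here’s a tiny sketch using exactly those constructs – a variable, a for loop, an if statement – pointed at a dataset instead of a toy. The file name and column names here are invented for illustration:

```python
import csv

# Hypothetical dataset: daily water-quality readings from a local creek.
# The constructs are the same as in the toy exercises, but now they answer
# a question students actually care about.
high_days = 0
total_days = 0
with open("creek_turbidity.csv", newline="") as f:   # hypothetical file
    for row in csv.DictReader(f):                    # assumed column name
        turbidity = float(row["turbidity_ntu"])
        total_days += 1
        if turbidity > 25:       # arbitrary 'too cloudy to swim' threshold
            high_days += 1

print(f"{high_days} of {total_days} days exceeded the turbidity threshold")
```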

So not only does teaching tech skills in the context of data science teach the kids that STEM skills empower them to solve problems and find out more about their own world, it gives them the motivation to succeed. To actually learn the skills and put them to good use.

And make no mistake, motivation is the single most important factor in learning.

So Data Science empowers students to learn technical and other STEM skills in the context of real problems. It gives them the capacity to create positive change in their own communities – and to prove that they have. It teaches them to communicate their results.

And most importantly, it teaches them that this is something they can all do.

And that point is crucial, because at the moment we have hordes of students – even at a high performing STEM school like John Monash – believing that tech is not something they can do. Not something that interests them. Not something that’s relevant to them.

Which means that we are continuing to get the same kinds of people choosing to go into tech who have been choosing it for decades now. We are actively perpetuating the stereotypes, because those stereotypes are now so strong that everyone believes that only those types of people should or can go into tech.

One of my friends who works in data science recently met someone who, on learning her occupation, literally said to her: “You work in tech. So, are you on the spectrum?”

Because if I ask you to picture a computer scientist, or a data scientist, chances are you will imagine a young white male who is on the spectrum.

Current figures suggest that women make up as little as 15% of the Data Science industry.
And it’s this lack of diversity in the tech industry that leads to systems like the HireVue AI – because there are not enough voices in the room prepared to say things like “Um, have we really thought this through?” or “What are the ethical issues with doing that?”

It also leads to tech solutions that work beautifully for the types of people represented on the development team, but that have serious limitations for everyone else.

And lest you think that women simply aren’t cut out for tech, and there isn’t actually any bias in the field, allow me to remind you of the 2016 study of open source code on GitHub which found that code submitted by a woman was more likely to be accepted than code submitted by a man – but only if the woman’s gender was not identifiable from her GitHub ID.

ADSEI’s work isn’t going to turn every student into a data scientist. But it will give kids the option of being data scientists, who wouldn’t have had it otherwise. Because they will understand the power of data science, and they will know that it’s something they can do. And that is phenomenally empowering.
