
Raising Heretics

This is the text of my keynote from the NSW ICT Educators Conference in Sydney earlier this year.

I am Dr Linda McIver, founder and Executive Director of the Australian Data Science Education Institute (ADSEI), a registered charity dedicated to putting students in the driver’s seat of our data driven world.

Today I want to talk to you about the importance of heresy.

I’m going to take you through the place of heresy in science, as well as the phenomenon known as survivorship bias, and how these relate to the extraordinary claims being made in the field of AI, and then I’m going to talk about how I’m aiming to fix it all.
Much of our Science, Technology, Engineering, and Maths education starts from a foundation of facts and known answers. This teaches our kids that the point of STEM is to Get The Right Answer, whereas the actual point of real world STEM disciplines is to fix things, understand things, and solve problems. In this talk I will show why I founded the Australian Data Science Education Institute, why we’re dedicated to raising Heretics, and why Heresy is something we desperately need right now, both in the Data Science industry and the world as a whole.

First of all, let’s define our terms. Heresy is an opinion profoundly at odds with what is generally accepted. And heresy has been crucial to our scientific development.
Let’s talk about some historical scientific heresies:

  • In the 1840s Ignaz Semmelweis came up with the radical heresy that doctors washing their hands before (and after) surgeries prevented disease. Prior to this, doctors went from autopsies to childbirth without washing their hands or even changing their clothes. And they wondered why people died. The idea that dirty hands could spread disease was considered so ludicrous that it took decades for handwashing to be accepted. By the way, Semmelweis was so ridiculed and pilloried that his colleagues committed him to an asylum, where he was beaten and died.
  • In a well known heresy, Galileo Galilei so outraged the church with the idea that the earth revolves around the sun, rather than the other way around, that he was accused of literal heresy and sentenced to house arrest. He only narrowly escaped death.
  • In 1912 Alfred Wegener began to publicly advocate the idea that the continents moved over time – what became known as continental drift. He, too, was widely ridiculed, and did not live to see his ideas finally vindicated.
  • More recently, Marshall and Warren’s original paper on ulcers being caused by bacteria rather than stress was rejected and consigned to the bottom 10% of submissions. Barry Marshall eventually drank Helicobacter pylori – the bacterium that causes ulcers – to prove it, thus inducing an ulcer which he then cured with antibiotics.

It seems like heresy is a pretty dangerous business!

In fact a lot of scientific breakthroughs have been considered heretical. Especially in medicine!

Now let me digress for a moment to tell the story of the WWII planes that were examined for bullet holes to work out where to armour them. Researchers figured that the places with the most holes – the wings and the fuselage – needed the most armour. They found no holes over the engines or fuel tanks, so they figured those didn’t need armouring… until statistician Abraham Wald pointed out that the planes they were studying were the ones that made it back. The planes needed armour where none of the surviving planes had holes, because clearly the planes that took hits in those places – the engines and the fuel tanks – were the ones that DIDN’T COME BACK.

I love this story, because it’s a classic example of the obvious conclusion being dead wrong. In a similar vein, the introduction of helmets in WWI resulted in more head injuries being treated in the field hospitals, so the first reaction was “stop using helmets!”… What data was missing?

Both of these are examples of survivorship bias – where there is a chunk of data missing from the study. In these cases it’s literally survivor bias because it fails to take into account those who don’t make it back.

Have you heard of HireVue? They’re a Human Resources Tech company that uses artificial intelligence to select or reject candidates in job interviews based on… well, nobody actually knows.

They say it’s a machine, therefore it’s without bias. And we could laugh and snort, but over 100 companies are already using it, including big companies like Hilton and Unilever.

According to Nathan Mondragon, HireVue’s chief industrial-organizational psychologist, “Humans are inconsistent by nature. They inject their subjectivity into the evaluations. But AI can database what the human processes in an interview, without bias. … And humans are now believing in machine decisions over human feedback.”
Which is all kinds of disturbing, given that they make all sorts of claims for the system but can’t actually explain how it makes its decisions.

They say that the system employs “superhuman precision and impartiality to zero in on an ideal employee, picking up on telltale clues a recruiter might miss.”

Of course, HireVue won’t tell us how their algorithm works – in part to protect trade secrets, and in part because they don’t really know…

I am fairly confident I’m not the only one who finds that an incredibly disturbing idea.

Luke Stark, a researcher who studies emotion and AI at Microsoft, describes this as the “charisma of numbers”. If an algorithm assigns a number to a person, we can rank them objectively, right? Because numbers are objective. And simple. What could possibly go wrong, reducing a complex and multifaceted human being to a simple numerical rank? (Helllooo ATAR – Australian Tertiary Admission Rank…)

I think Cathy O’Neil sums it up beautifully: Models are opinions embedded in mathematics, and algorithms are opinions formalized in code. It’s incredibly important that we dispel this pervasive myth that algorithms are unbiased, objective statements of truth.

This whole HireVue system is a textbook example of survivorship bias: looking only at the people who made it through the same hiring process that we are now calling fatally flawed… and thinking we can predict ideal new hires with only that data. It completely ignores the people who didn’t make it through the initial processes, who might have been amazing.

It also highlights an issue I’ve seen raised again and again in works like “Weapons of Math Destruction”, “Made by Humans,” and “Automating Inequality” – that people believe in numbers, computers, and algorithms over other people, even when they’ve been explicitly told those systems are broken. And I have a story about that, too.
Some years ago Niels Wouters, a researcher at the University of Melbourne, designed a system called Biometric Mirror. It was a deliberately simple, utterly naive machine learning system that took a picture of the user’s face and then claimed to be able to tell a whole lot about the person, just from that picture.

The system spat out a rating of ethnicity, gender, sexual preference, attractiveness, trustworthiness, etc. And Niels created the system to start a conversation with people about how transparently ludicrous it was to believe a system that does this. So he set up booths where people would come, have a photo taken, and read all of this obviously false information about themselves, and then have a conversation about trust and ethics and the issues with Artificial Intelligence. So far so good. A noble goal. But there are two postscripts to this story that are horrifying in their implications.

First of all, Niels would overhear people walking away from the display, having had the conversation about how obviously false the “conclusions” drawn by the system were, saying “But it’s a computer, it must be right, and it doesn’t think I’m attractive…”

And secondly, after speaking publicly about all of the issues with Biometric mirror, Niels was contacted by HR companies wanting to buy it…

So here is where we start to make the connection between education and the tech industry.

One of the problems in Data Science is that we often don’t have a lot of time to challenge even our own results, never mind anyone else’s. The rush to data riches (Step 3, Profit!) means we don’t really have time to be cautiously sceptical. We get a result, report it, and move on to the next dataset. And people are all too willing to believe in those results.

When I asked a group of data scientists if they had ever had to release or report a result that they felt hadn’t been fully tested, that they couldn’t bet their lives on, around half put their hands up. And then when I asked how many hands would have gone up if the question had been anonymous, the other half put up their hands.

So all of the discoveries I mentioned in the first half of this talk were made by people being sceptical. Challenging the status quo. Questioning accepted wisdom. By people who were quite prepared to examine new evidence and consider that “what everybody knows” might be wrong. Of course, we need educated heretics, so that our scepticism is rational and fact based, rather than denialism and wishful thinking, which is what we are seeing quite a lot of now. So education is clearly key.

But let’s consider STEM education. We mostly teach Science, Technology, and Maths in schools as a matter of facts and known outcomes (and, yes, I know there’s one more letter in STEM, but we rarely, if ever, actually teach any Engineering).

Consider the average school Chemistry experiment. We take known substances, apply a known process, and achieve an expected outcome. What do kids who don’t get the results they expect do then? Do they go back and try to find a reason for their results? Do they ask questions and challenge outcomes?

Nope. They don’t have time for that, and they get no credit for it. They copy their friends’ results. Or they simply adjust the results to get the outcome they expected to get. Marks are allocated for the expected results. For the right graph.

Occasionally we’ll run a prac with unknown reagents and ask the students to identify the inputs. But here, again, marks are for the correct answer.

But this isn’t science education. This is an education in confirmation bias. In finding what you are supposed to find. In seeing what you expect to see. It is the exact opposite of the way science should work. Science should be about disproving theories. And you only accept a theory as plausible when you have tried your hardest to disprove it, and failed.

Maths is much the same. The emphasis is on correct answers and known outcomes. On cookie cutter processes that produce the same result every time.

Technology education is often even worse. With a severe shortage of teachers with programming skills, we tend to default to education using toys. Drawing pretty pictures. Making robots follow lines. Writing the same code. Producing the same output.

What if we could teach with experiments where we don’t know the answers?
Well, with data, we can easily do that. Can we find a dataset that hasn’t been fully analysed and thoroughly understood? I could probably hit a dozen with a bread roll from where I’m standing.

How do you mark it, then, when you don’t know the right answer? You mark the process. You mark the testing. You ask the students to test and challenge their answers really thoroughly. You give points for their explanation of how they know their answer is right, for how they confirmed it by trying their hardest to prove it wrong.

It has been said, most famously by Grace Hopper, that the most dangerous phrase in the English language is “we’ve always done it that way”. Now, more than ever, we need people who challenge the status quo, who come up with new ideas, who are prepared to be heretical.

By teaching Data Science in the context of real projects, where the outcome isn’t known, we can actually teach kids to challenge their own thinking and their own results. We can teach them to think critically and analyse the information they’re presented with. We can teach them to demand higher standards of validation and reproducibility.

The trouble with this is that it requires a significant amount of setup work. Finding the datasets isn’t hard, but making sense of them can be really challenging – for example, when I downloaded a vote dataset from the AEC and tried to find someone who could explain to me how the two dimensional senate ballot paper translated into a one dimensional data string, I literally couldn’t find anyone at the AEC who knew. I mean… presumably there is someone! But I couldn’t find them. It took me hours and hours to make sense of the dataset and design a project that would engage the kids, and give them room to stretch their wings and really fly.

The only reason I was able to commit that kind of time is that I was only teaching part time, so I used my own time to build these engaging projects. In year 10 we did projects on climate, on elections, on microbats. In year 11 we worked with scientists to solve their computational and data needs, in fields like marine biology, conservation ecology, neuroscience, astrophysics and psychology. The possibilities are truly endless.

But a teacher with a full time load doesn’t have the capacity to take on that kind of extra work. It’s just too time consuming, even if they have the skills to start with.

So that’s why I created the Australian Data Science Education Institute, or ADSEI. To develop project ideas and lesson plans that empower kids to explore data and become rational sceptics. To develop their data literacy, critical thinking, and technical skills in the context of projects they really care about. And also to provide professional development training to teachers right across the curriculum – not just in Digital Technologies – to integrate real data science into their teaching. To use data to make sense of the world.

At ADSEI we have created projects where kids use real datasets to explore the world. To solve problems in their own environments and communities, and most importantly: to measure and evaluate their solutions to see if they worked. We’ve got projects that do things like:

  • calculate how much carbon is embodied by the trees on their school grounds and then compare it with the school’s carbon emissions from electricity
  • construct a set of criteria for good science journalism and then evaluate a bunch of different sources according to those criteria and visualise the results
  • analyse the litter on the school grounds, find ways to fix it, and then analyse it again to see if they worked
  • record and analyse the advertising they see around them in a week and explore its impact on their behaviour
  • use solar energy production & power usage data to explore a household’s impact on the environment
  • use the happiness index data to explore world differences in measures like income inequality and social support
  • use data from scientific observational studies to learn about whales, turtles, climate, and more

When I was teaching Computer Science at John Monash Science School in Melbourne – a school for kids who are passionate about science, who you might be forgiven for assuming were already engaged with tech – we started by teaching them with toys. We had them draw pretty pictures, and program robots to push each other out of circles. And the number one piece of feedback we got was “This isn’t relevant to me. Why are you making me do this? I’m never going to use it.”

When we shifted to teaching the same coding skills – variables, if statements, for loops, etc – in the context of Data Science, using real datasets and authentic problems, that feedback disappeared and instead we heard “this is so important. This is so useful. I’m using this in my other subjects.” and the number one thing I live to hear when teaching tech: “I didn’t know I could do this!”

So not only does teaching tech skills in the context of data science teach the kids that STEM skills empower them to solve problems and find out more about their own world, it gives them the motivation to succeed. To actually learn the skills and put them to good use.

And make no mistake, motivation is the single most important factor in learning.

So Data Science empowers students to learn technical and other STEM skills in the context of real problems. It gives them the capacity to create positive change in their own communities – and to prove that they have. It teaches them to communicate their results.

And most importantly, it teaches them that this is something they can all do.

And that point is crucial, because at the moment we have hordes of students – even at a high performing STEM school like John Monash – believing that tech is not something they can do. Not something that interests them. Not something that’s relevant to them.

Which means that we are continuing to get the same kinds of people choosing to go into tech who have been choosing it for decades now. We are actively perpetuating the stereotypes, because those stereotypes are now so strong that everyone believes that only those types of people should or can go into tech.

One of my friends who works in data science recently met someone who, on learning her occupation, literally said to her: “You work in tech. So, are you on the spectrum?”
Because if I ask you to picture a computer scientist, or a data scientist, chances are you will imagine a young white male who is on the spectrum.

Current figures suggest that women make up as little as 15% of the Data Science industry.
And it’s lack of diversity in the tech industry that leads to systems like the HireVue AI – because there are not enough voices in the room prepared to say things like “Um, have we really thought this through?” or “What are the ethical issues with doing that?”

It also leads to tech solutions that work beautifully for the types of people represented on the development team, but that have serious limitations for everyone else.

And lest you think that women simply aren’t cut out for tech, and there isn’t actually any bias in the field, allow me to remind you of the 2016 study of open source code on GitHub that found that code submitted by a woman was more likely to be accepted than code submitted by a man, but only if the woman’s gender was not identifiable from her GitHub ID.

ADSEI’s work isn’t going to turn every student into a data scientist. But it will give kids the option of being data scientists, who wouldn’t have had it otherwise. Because they will understand the power of data science, and they will know that it’s something they can do. And that is phenomenally empowering.

Measuring with Added Data Science – Primary School Lesson

You can add a little Data Science into any lesson, but Measurement in Primary School is just crying out for a little added Data Science. And when I say Added Data Science, I really mean added critical thinking and scepticism. Here is a Grade 6 lesson that I just trialled at Gillen Primary School in Alice Springs, where we took a basic measurement lesson on height and injected some cool data concepts. This lesson might be worth splitting over two lesson times, depending on how the discussion goes.

The goal here is to be asking questions and evaluating what you’re doing at every step.

  1. Pick two students that are very different heights, and have them stand at opposite corners of the room. Have the kids guess who is taller.
  2. Now pick two students that are very close in height, and do the same thing. Have the two students stand back to back and work out who is actually taller. Now ask the kids: which was easier to guess? Why?
  3. Class discussion: what does it mean to “estimate” a value? What’s the difference between an estimate and a guess? If an estimate is an educated guess, what factors did you use to “educate” your estimate of who was taller? (One student today said that the taller person came further up the board than the shorter person, which was a great way of using comparisons to inform your estimate!)
  4. Have your students make a list of the people in their class who are here today and rank them by height, without talking to each other or comparing answers.
  5. Class discussion: Did you all rank every person the same? Which positions were easiest to rank? Often the tallest and shortest students are really easy to rank, but sometimes there are a few students very close in height that make it difficult. The middle positions tend to be the hardest, and you can have some discussion about why this is.
  6. Ask the class who is the tallest student. Take one answer and then ask if there are any different answers, until you have the set. Then do the same for shortest. You can do some back-to-back measuring at this point to settle these questions.
  7. Ask the class why their answers might be different, and discuss how estimates are not exact.
  8. Now get the class to stand up and sort themselves into height order. You might want to get the tallest and shortest up first, and then gradually fill in the middle one or two students at a time, to avoid chaos.
  9. Class Discussion: How much easier was it to do in person than try to compare them in your head? What made it easier?
  10. Now for the measurement! Put the class into groups of 3-5. Each group picks one person to measure, and every other person in the group should measure that person and write down their height, without telling the other members of their group what height they got. 
  11. Groups compare their results and see how similar they were. Each group should record the size of the range of their measurements. So a group that recorded measurements of 143, 145, and 146 would record a range of 3, because the lowest value was 143 and the highest was 146. (There’s a short sketch of this calculation just after the list below.)
  12. Come back together as a class. Class Discussion: How accurate do you think your measurements were?
  13. Class Discussion: Did every student use the same measuring technique? What were some different ones people used?
  14. Class Discussion: How big was the biggest difference between measurements? What factors made the measurements hard? We heard things like:
    1. The person we were measuring was taller than us.
    2. The person was taller than the tape measure (at this point you can explore strategies for solving this problem! Eg measuring against the wall, marking where the tape measure stops, and putting the tape measure above that mark to measure the remaining length, or measuring them lying down on the floor).
    3. It was hard to hold the tape measure straight.
    4. It was hard to hold the tape measure still.
    5. It was hard to read off the exact value because of the distance between the tape measure and the actual top of the person’s head.
    6. The actual measuring part of the tape measure starts a few centimetres in from the start of the tape, so getting it exactly in the right spot on the floor is hard!
  15. As a class, brainstorm techniques for making the measurements more accurate.
  16. To wrap up the class, ask them again how accurate they thought their measurements were, and then ask them if they think they were accurate enough. Think of several scenarios where you might need to measure height, and ask how accurate each needs to be. The goal here is to consider that data is rarely completely accurate, but it can still be accurate enough. Eg.
    1. Measuring the length of bed someone needs. Because beds come in fixed sizes you only need to know which range the person fits into.
    2. Measuring whether someone will fit through the doorway. As you are very unlikely to have primary school kids who won’t fit through your doorway, it’s reasonable to think they don’t need to be very accurate! “Are you less than <however tall your doorway is>?” can usually be estimated rather than measured! Consider whether they might know someone for whom this would not be sufficient – eg a professional basketballer.
    3. Measuring whether a cape would fit
    4. Pilots in some aircraft have to be under a certain height to fit in a cockpit
    5. Sailors in a submarine (because the ceilings are low)
    6. What others can you think of?
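
For reference, the range calculation from step 11 is just the highest measurement minus the lowest. Here it is as a tiny Python sketch, using the example numbers from above, in case you want to check group answers quickly:

```python
# The three measurements one group recorded for the same student, in cm.
measurements = [143, 145, 146]

# The range (spread) is the highest measurement minus the lowest.
spread = max(measurements) - min(measurements)
print(f"Range of this group's measurements: {spread} cm")  # prints: 3 cm
```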

There are many more questions you can explore using this lesson, and many more types of inaccuracies you could consider. As always, these steps are a starting point, and some points to ponder. You can use a subset of the steps, or expand on them.

If you modify the lesson it would be wonderful if you could share it back by emailing it to contact@adsei.org so that other teachers can learn from your approach.

The importance of scepticism

One of the things ADSEI does in its lesson plans is ask the question: What is wrong with this data?

This is a really crucial question, because there is no such thing as a perfect dataset. All data has issues. Often it’s not the data you want, it’s simply the data you were able to get. For example:

  • whale observations tell you how many whales were seen, when what you really want to know is how many whales were there. Some whales might have breached but not been observed (shades of Schrödinger’s Whale), or swum by without breaching, or a single whale might have been spotted twice and accidentally counted as two when it was really just the one.
  • speed cameras tell you the instantaneous speed of the car when what police really want to know is: has that car exceeded the speed limit at any time on this trip?
  • counting the litter found in the schoolyard tells you how much litter you found, not how much litter was dropped – some of it may have blown away or be hiding under things. It also only tells you how much litter was there that day. What if a year level was out on excursion, or it was a wet day timetable…

And even when the data is actually what you want, there may be data that’s missing or flawed for various reasons. For example:

  • Facial recognition systems that were trained on images of faces that were almost exclusively white and male.
  • Phone polls that can’t include people with unlisted numbers.
  • Internet polls that can’t include people without internet access.
  • Surveys where people don’t or can’t tell the truth – for example about healthy eating, or sexuality, or where people don’t actually know the truth, for example about why they did things, or things they don’t remember (like what did you have for breakfast yesterday? Or how often do you eat broccoli?).
  • Skipped data where someone forgot to record a daily observation or the system went down and didn’t record any values.

Consider the reporting around the coronavirus. We have a reported death rate of around 2%, which is highly speculative, because we have no idea how many mild cases of coronavirus are out there that are not being identified or reported. Some sources report the numbers and stress the uncertainty, while others report them as solid facts.
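
To see why those unreported mild cases matter so much, here is a toy calculation with entirely invented numbers: the apparent death rate is just deaths divided by confirmed cases, so every mild case that never gets tested shrinks the true rate.

```python
# Invented numbers, purely to illustrate the arithmetic.
deaths = 200
confirmed_cases = 10_000
apparent_rate = deaths / confirmed_cases            # 200 / 10,000 = 2%

# Suppose an equal number of mild infections were never tested or reported.
unreported_mild_cases = 10_000
adjusted_rate = deaths / (confirmed_cases + unreported_mild_cases)  # 1%

print(f"Apparent death rate: {apparent_rate:.1%}")
print(f"Rate if mild cases were counted: {adjusted_rate:.1%}")
```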

This is a kind of scepticism and critical thinking that we don’t often leave room for – in education, business, or journalism. Often we are in such a rush to get the “right” answer that we don’t have time to pause and evaluate the data we’re working with, to consider the flaws and uncertainty that are built in to any dataset, and any analysis.

If we can teach our students, from pre-school onwards, to question their data, to ask “how many ways is this data flawed?” rather than assuming the data is perfect, then perhaps we can build a world which centres critical thinking and evaluates evidence.

This is why using real datasets rather than nice clean sets of fake numbers is crucially important to teaching data science. Because real world datasets are never nice, clean, and straightforward. There is no need for scepticism and critical thinking in textbook examples. But kids who have used real data in their learning are equipped to tackle real world problems.

Can you share some examples of flawed data? What consequences have you seen from people assuming data is perfect?


Using Real Data Projects to Engage Kids with STEM

I want to start by asking you a question: What gets you out of bed in the morning? What really motivates you?

For me it’s the chance to make a difference in the world. It’s wanting to leave the world a better place than I found it.

And that’s something that STEM skills are perfect for. They are for problem solving, for designing better ways to do things. For bringing clean water, clean power, increased food production, solutions to climate change, safer transport, personalised medicine, and a whole host of innovations to the world.

But when I first started teaching in a high school – a science school, no less – we were teaching “STEM” as “fun stuff”. Drawing pretty pictures. Making robots follow a line. Playing with toys.

How many of us are motivated, I mean really motivated, by toys? Some of us are, especially technical people! But those are generally the people we’ve already GOT in tech! I’m much more interested in the people we haven’t got yet.

All too often we ask those kids who are not already into tech to get out of bed for the chance to have fun. And fun is great – I like to have fun, we all do! And not all of my fun is finding an interesting new dataset and analysing the hell out of it, I promise. I do have other ways of having fun besides writing an interesting new Python script. Really I do. But fun doesn’t get me out of bed in the morning. Fun is a hobby. A diversion. A toy. That’s not what we need kids to understand about STEM.

We are handing our kids a world in desperate need of creative solutions. Of innovation and entrepreneurship. Of change.

And we’re telling them that STEM is fun! It’s for designing 3D jewellery. It’s sparkly. It’s pink. It’s useless.

We are doing kids a huge disservice. They’re kids, therefore fun is the way to reach them, right? It’s like saying we want more women in tech so we’re going to paint some things pink and offer some courses in the chemistry of makeup (a real suggestion that was made at an actual school). It’s like saying “women do hardware too, let’s sell them some pink hammers.” (and that’s also a real example)

When we were teaching computing using “fun toys” the overwhelming feedback I was getting – from science students – was “Why are you making me do this? It’s not relevant to me. I don’t want to do it.”

Can you guess what happened when we made the year 10 computer science course a data science course instead of a “fun toys” course? We were teaching the same basic coding skills. We still had them learning about selection, iteration, variables, and functions. But now we were using real datasets and finding real questions to answer, real problems to solve. Do you know what happened?

Suddenly they could see the point. They found it useful. They found themselves using the skills in other subjects, especially in project work. And the numbers who went on to the year 11 elective computer science subject increased by around 30%, with double the number of girls.

And none of it was pink!

In that first data science course I had a student who was super interested in politics, and there was a federal election, so we used data from the Australian Electoral Commission. Turns out you can download CSV files containing every single vote from any Australian election.

We used the Senate votes for Victoria from the 2016 Federal election: over 3 million lines of CSV, containing the polling booth, the electorate, and a 151-position comma-separated string recording the contents of every box on each ballot paper.

Three million lines of CSV won’t even open in Excel, so the kids had to program just to open the file. They learned to use a small section of the file to test their code, so that it didn’t take ages to run. They learned about what questions a dataset could answer.
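
For teachers wondering what that first hurdle looks like in code, here is a minimal Python sketch of the kind of thing the students started with: streaming the file one row at a time and testing on a small slice first. The filename and column name here are illustrative only, not the actual AEC headers.

```python
import csv
from collections import Counter

FILENAME = "aec_senate_vic_2016.csv"  # illustrative filename
TEST_ROWS = 1000  # develop against a small slice so each run is fast

votes_per_booth = Counter()

with open(FILENAME, newline="") as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        if TEST_ROWS is not None and i >= TEST_ROWS:
            break  # set TEST_ROWS = None to process all 3 million rows
        votes_per_booth[row["PollingPlace"]] += 1  # hypothetical column name

for booth, count in votes_per_booth.most_common(5):
    print(booth, count)
```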

They found their own questions – from which party’s voters were more likely to follow the how to vote cards, to where Pauline Hanson voters came from. They asked questions about their own electorate or polling booth and how they compared to the whole state. About female representation and share of the below-the-line vote. About preference flows and about how polling compares to actual results. Every student asked a different question, which meant that every student had to write different code to find the answer (goodbye plagiarism!).

And then the important part happened: they had to visualise their results. To create an image, more interesting than an ordinary graph, that conveyed their results in a convincing, valid, and compelling way.

They learned about channels of information, about the human visual system and attention. About colour blindness and the problems with the rainbow scale. They learned which types of graph are appropriate for different types of data, and how to customise their graphs so that they don’t mislead their audience.
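
As a flavour of those lessons, here is a minimal matplotlib sketch with invented numbers, showing two of the habits we pushed: keep zero on the scale, and skip the rainbow palette in favour of a single colour-blind-safe colour.

```python
import matplotlib.pyplot as plt

# Invented, illustrative data: share of below-the-line votes by party.
parties = ["Party A", "Party B", "Party C", "Party D"]
btl_share = [4.2, 3.1, 7.8, 2.5]  # percent

fig, ax = plt.subplots()
# A bar chart suits categorical comparisons; one colour-blind-safe colour
# avoids the rainbow scale entirely.
ax.bar(parties, btl_share, color="#0072B2")
ax.set_ylim(bottom=0)  # keep zero on the scale so differences aren't exaggerated
ax.set_ylabel("Below-the-line votes (%)")
ax.set_title("Share of below-the-line votes by party (invented data)")
plt.show()
```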

As well as learning to analyse and visualise data themselves, they also learned to be critical data thinkers, reviewing graphs and statistics they are presented with using critical questions like “How was that data collected? What was the sample size? And where is the zero on that scale?”

We have a tendency to bend at the knees when presented with statistics and graphs. It seems to automagically make information more credible. But they are very easy to manipulate. So it’s crucial, in this era of fake news and anti-science, that our kids learn to be critical thinkers.

Another reason we need our kids to learn data science skills is the increasing dominance of Big Data and Machine Learning in every aspect of our lives. They are determining our healthcare and our access to home loans. They’re directing our traffic and influencing our consumption and behaviour – even our votes! They’re controlling our justice systems and our borders. But how many of you really feel like you have a good understanding of how the algorithms that do these things actually work? How many of you are confident in the fairness, impartiality, and accuracy of these systems?

And this is a highly educated audience. Think about that for a moment. These systems are running our lives and we have no say in how they operate. We don’t even understand them.

So it’s crucial that we educate upcoming generations to have informed, intelligent conversations about these systems. So that we can have that long delayed community conversation around the way we manage our data – and the way it manages us.

And to do that, we need to engage kids with data in the classroom. To show them its relevance, and to build their Data Science and technological skills.

The problem with finding cool datasets and building them into interesting lessons is that it’s hugely time consuming and highly skilled work. When I used the electoral data it took me hours to make sense of the dataset. I couldn’t even find anyone in the electoral commission who could explain it to me, so I had to derive it from first principles. The only reason I had the capacity to do that is that I was part time, so I used my own time, unpaid, to find the dataset, make sense of it, and build a project around it. Most teachers simply don’t have the time to do that – or, to be honest, the skills.

It’s also important to acknowledge that student motivation is not the only issue we face in teaching tech in schools. The problems are many. Tech has an image problem almost as bad as teaching does! So kids don’t see themselves as the type of people who go into tech (and this affects boys as well as girls).

We attract the kinds of people into tech that we already have – generally people with a very narrow personality and background distribution. This conference is obviously full of the exceptions to that rule. 🙂 But it’s a real problem if you want innovative solutions that meet the needs of everyone, not just the tech nerds of the world.

We lack skilled teachers, in part because the correlation between that classic tech personality type and the kind of person who loves to teach seems to be, frankly, quite low, but also because if you have tech skills you can EASILY earn a LOT more and work a LOT less hard by NOT going into teaching. But we also have a large cohort of teachers who are flat out terrified of technology. So if we force those teachers to teach our shiny new Digital Technologies curriculum, they can’t help but convey that fear to their students.

That’s why I founded the Australian Data Science Education Institute (which, by the way, is a registered charity). To find and make sense of the datasets, to build cool projects around them that are aligned with the curriculum, and to train teachers in the skills they need to incorporate data science into their teaching. We start from where teachers are and build their skills gradually, in the context of their own disciplines.

We don’t expect them all to program on day one. We start with spreadsheet skills and projects that both teachers and students find relevant and interesting.

Using Data Science teaches kids why STEM matters, and gives them the opportunity to use STEM skills to change the world. So we use this template for finding, analysing, and solving problems in the local community.

  • Find a problem
  • Measure it
  • Analyse the measurements
  • Communicate the results
  • Propose a solution
  • Implement the solution
  • MEASURE IT AGAIN

And that’s the crucial part that we need to make the default position anywhere we try new things: that we measure & analyse them to see if they work. Because in governments, in schools, in businesses: too often we see new programs implemented as a matter of ideology, and the only “assessment” that happens is for the champion of the program to say “It was awesome!”

And when you say “How do you know?” Everyone goes suspiciously quiet and changes the subject.

Incidentally, that’s why ADSEI collects feedback data on all of its courses, and why we’re also building a feedback mechanism for our online resources.

We also have a template for exploring global issues:

  • Find a dataset
  • Explore & Understand it – and this means understanding the domain, a fact we tend to lose sight of.
  • Find a question it can answer
  • Analyse it to find the answer
  • Communicate your results

ADSEI’s ultimate goal, of course, is to put itself out of business. To build Data Science into the way teachers are trained to teach. To build a community of Data Scientists and teachers who can support each other by sharing resources, project ideas, and cool datasets.

I think my job is safe for the moment!

For now we have grants from the Victorian Department of Education and Training, Google, and the Great Barrier Reef Foundation. We’ve developed teaching resources for Monash University, CSIRO, and the Digital Technologies Hub. We have delivered workshops and talks at conferences and schools, and we are working with the wonderful people at Pawsey Supercomputing Centre and the West Australian Marine Science Institute.

And ADSEI has only been in existence for 18 months.

Over the next few months we’ll be running workshops in Perth, Melbourne, and Alice Springs.

Next year in October we’ll also be running the Inaugural International Conference on Education and Outreach in Data Science and High Performance Computing, with the support of the awesome Australasian eResearch Organisation – Sponsors welcome!

So if any of this sounds like a mission you can get behind, join the slack channel, check out the website, send me an email (linda@adsei.org) or tweet at me wildly. Because Data Literacy and Data Science skills are something all kids need to experience, before they decide that Data Science is too hard, too boring, or not relevant to them!

If Data Science is going to drive us to the future, I want to put all of our kids in the driver’s seat!

Primary School Data Science Template

People often assume that Data Science in Schools has to be secondary school only, because how could primary kids do Data Science? The truth is that Data Literacy and Analysis skills can be built in to the curriculum from as young as 5 years old. And it’s really important that kids learn Data and Tech skills early, because by the time they get to secondary school we’ve already lost a lot of them, believing that these skills are too hard, not relevant to them, or just not interesting. We need to show them early on that Data Science is a useful tool that they are more than capable of mastering.

So how can primary kids do data science? Like any other data science project, it’s crucial to put it in context, so the kids can see the point.

So Step One is: Find a problem the kids care about

It might be litter in the playground, traffic at pickup time (or, to put it in a way kids will really relate to – how long they have to wait to be picked up, or how far they have to walk to the car!), or access to play equipment.

Step Two: Measure the problem

Count and identify the litter, time how long people have to wait to be picked up, measure how far people have to walk to the car, or count the number of people who get to use the monkey bars every lunchtime for a week.

Step Three: Analyse the measurements

For younger kids, that might simply mean sorting the rubbish into categories (eg chip packets, icy pole wrappers from the canteen, and sandwich bags or cling wrap from home), or organising the drop off or play equipment measurements by year level or by day. For older kids you might enter it into a spreadsheet and use a formula to calculate some averages over the week or by area or year level.
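
As a rough sketch of what that older-kids analysis might look like, with invented litter counts, the averages could be done with a spreadsheet formula (something like =AVERAGE(B2:F2), depending on your layout) or in a few lines of Python:

```python
# Invented litter counts for one week (Mon-Fri), by category.
litter = {
    "chip packets": [12, 9, 15, 11, 8],
    "icy pole wrappers": [5, 7, 4, 6, 9],
    "cling wrap": [3, 2, 5, 4, 3],
}

for category, daily_counts in litter.items():
    total = sum(daily_counts)
    average_per_day = total / len(daily_counts)
    print(f"{category}: {total} pieces this week, {average_per_day:.1f} per day")
```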

Step Four: Communicate your results

This is where you graph or visualise your results. For the littlies, they can “graph” the results by stacking up blocks to represent the different categories. Green blocks for chip packets, blue ones for icy pole wrappers, etc. This is a great, tangible exercise in data representation. Older kids can draw graphs or do them in a spreadsheet like Excel or Google Sheets. It helps to get them to draw pictures and labels on their graphs to make them more interesting and compelling.

Step Five: Propose a solution

Think of a way you might solve the problem. For litter the kids might come up with nude food day campaigns, or a change to the way food is available in the canteen – such as using larger chip packets and handing out chips in small paper bags, instead of lots of small plastic packets. For traffic it might be that pickup times can be staggered by year levels, or older kids might be encouraged to walk further and be picked up a block or two away.

Step 6: Implement your solution

This can be a whole school initiative, and involves a lot of communication, using the graphs from Step Four to tell the community what’s happening and why.

Step 7: Measure again to see how well it worked

This is my favourite step, often sadly missing from political initiatives. Once you’ve tried to fix something, you need to measure it again to see if you actually made any difference.

You can even repeat steps 3 to 7 with several different solutions to compare which ones work better.

I love this template because it is the essence of STEM – It’s a science experiment, devised by the kids, with rigorous measurement and evaluation. Maths and Technology are used in handling the data, and you can use Engineering to design your solution, or even to measure the problem if you’re looking at environmental conditions like heat, noise, or water and want to use some sensors.

You can scale the technology use up or down depending on available resources and where your students are up to. There are no robots with parts to fail. And the best part is that the motivation is built in. The kids are learning that STEM and Data Science are tools you can use to solve real problems in your community. They’re not just a bit of fun that’s not relevant to their futures.

ADSEI is developing more projects like these over the next year, as well as building a network of teachers interested in sharing their ideas and supporting each other to introduce integrated STEM and Data Science in the classroom. Jump onto the mailing list to stay in touch, and feel free to share your own ideas in the comments on this post!