Measuring with Added Data Science – Primary School Lesson

You can add a little Data Science into any lesson, but Measurement in Primary School is just crying out for it. And when I say Added Data Science, I really mean added critical thinking and scepticism. Here is a Grade 6 lesson that I just trialled at Gillen Primary School in Alice Springs, where we took a basic measurement lesson on height and injected some cool data concepts. It might be worth splitting this over two lessons, depending on how the discussion goes.

The goal here is to be asking questions and evaluating what you’re doing at every step.

  1. Pick two students that are very different heights, and have them stand at opposite corners of the room. Have the kids guess who is taller.
  2. Now pick two students that are very close in height, and do the same thing. Have the two students stand back to back and work out who is actually taller. Now ask the kids: which was easier to guess? Why?
  3. Class discussion: what does it mean to “estimate” a value? What’s the difference between an estimate and a guess? If an estimate is an educated guess, what factors did you use to “educate” your estimate of who was taller? (One student today said that the taller person came further up the board than the shorter person, which was a great way of using comparisons to inform your estimate!)
  4. Have your students make a list of the people in their class who are here today and rank them by height, without talking to each other or comparing answers.
  5. Class discussion: Did you all rank every person the same? Which positions were easiest to rank? Often the tallest and shortest students are really easy to rank, but sometimes there are a few students very close in height that make it difficult. The middle positions tend to be the hardest, and you can have some discussion about why this is.
  6. Ask the class who is the tallest student. Take one answer and then ask if there are any different answers, until you have the full set. Then do the same for shortest. You can do some back-to-back measuring at this point to settle these questions.
  7. Ask the class why their answers might be different, and discuss how estimates are not exact.
  8. Now get the class to stand up and sort themselves into height order. You might want to get the tallest and shortest up first, and then gradually fill in the middle one or two students at a time, to avoid chaos.
  9. Class Discussion: How much easier was it to do in person than try to compare them in your head? What made it easier?
  10. Now for the measurement! Put the class into groups of 3-5. Each group picks one person to measure, and every other person in the group should measure that person and write down their height, without telling the other members of their group what height they got. 
  11. Groups compare their results and see how similar they were. Each group should record the range of their measurements. So a group that recorded measurements of 143, 145, and 146 would record a range of 3, because the lowest value was 143 and the highest was 146.
  12. Come back together as a class. Class Discussion: How accurate do you think your measurements were?
  13. Class Discussion: Did every student use the same measuring technique? What were some different ones people used?
  14. Class Discussion: How big was the biggest difference between measurements? What factors made the measurements hard? We heard things like:
    1. The person we were measuring was taller than us.
    2. The person was taller than the tape measure (at this point you can explore strategies for solving this problem! Eg measuring against the wall, marking where the tape measure stops, and putting the tape measure above that mark to measure the remaining length, or measuring them lying down on the floor).
    3. It was hard to hold the tape measure straight.
    4. It was hard to hold the tape measure still.
    5. It was hard to read off the exact value because of the distance between the tape measure and the actual top of the person’s head.
    6. The actual measuring part of the tape measure starts a few centimetres in from the start of the tape, so getting it exactly in the right spot on the floor is hard!
  15. As a class, brainstorm techniques for making the measurements more accurate.
  16. To wrap up the class, ask them again how accurate they thought their measurements were, and then ask them if they think they were accurate enough. Think of several scenarios where you might need to measure height, and ask how accurate each needs to be. The goal here is to consider that data is rarely completely accurate, but it can still be accurate enough. For example:
    1. Measuring the length of bed someone needs. Because beds come in fixed sizes you only need to know which range the person fits into.
    2. Measuring whether someone will fit through the doorway. As you are very unlikely to have primary school kids who won’t fit through your doorway, it’s reasonable to think they don’t need to be very accurate! “Are you less than <however tall your doorway is>?” can usually be estimated rather than measured! Consider whether they might know someone for whom this would not be sufficient – eg a professional basketballer.
    3. Measuring whether a cape would fit.
    4. Pilots in some aircraft have to be under a certain height to fit in a cockpit.
    5. Sailors in a submarine (because the ceilings are low).
    6. What others can you think of?
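For teachers who want to check the arithmetic in step 11, or extend it for students who are ready for a little code, the range calculation is only a few lines of Python. This is just a sketch, using the example numbers from that step:

```python
# Measurements (in cm) recorded by one group for the same person.
measurements = [143, 145, 146]

# The range is the gap between the highest and lowest measurement -
# a simple indicator of how much the group's measurements disagreed.
spread = max(measurements) - min(measurements)
print(spread)  # 3
```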

There are many more questions you can explore using this lesson, and many more types of inaccuracies you could consider. As always, these steps are a starting point, and some points to ponder. You can use a subset of the steps, or expand on them.

If you modify the lesson, it would be wonderful if you could share it back by email so that other teachers can learn from your approach.

The importance of scepticism

One of the things ADSEI does in its lesson plans is ask the question: What is wrong with this data?

This is a really crucial question, because there is no such thing as a perfect dataset. All data has issues. Often it’s not the data you want, it’s simply the data you were able to get. For example:

  • whale observations tell you how many whales were seen, when what you really want to know is how many whales were there. Some whales might have breached but not been observed (shades of Schrödinger’s Whale), or swum by without breaching, or even been spotted twice but accidentally counted as two whales when it was really just the one.
  • speed cameras tell you the instantaneous speed of the car when what police really want to know is: has that car exceeded the speed limit at any time on this trip?
  • counting the litter found in the schoolyard tells you how much litter you found, not how much litter was dropped – some of it may have blown away or be hiding under things. It also only tells you how much litter was there that day. What if a year level was out on excursion, or it was a wet day timetable…

And even when the data is actually what you want, there may be data that’s missing or flawed for various reasons. For example:

  • Facial recognition systems that were trained on images of faces that were almost exclusively white and male.
  • Phone polls that can’t include people with unlisted numbers.
  • Internet polls that can’t include people without internet access.
  • Surveys where people don’t or can’t tell the truth – for example about healthy eating, or sexuality, or where people don’t actually know the truth, for example about why they did things, or things they don’t remember (like what did you have for breakfast yesterday? Or how often do you eat broccoli?).
  • Skipped data where someone forgot to record a daily observation or the system went down and didn’t record any values.

Consider the reporting around the coronavirus. We have a reported death rate of around 2%, which is highly speculative, because we have no idea how many mild cases of coronavirus there are out there that are not being identified or reported. Some sources report the numbers and stress the uncertainty, while others report them as solid facts.

This is a kind of scepticism and critical thinking that we don’t often leave room for – in education, business, or journalism. Often we are in such a rush to get the “right” answer that we don’t have time to pause and evaluate the data we’re working with, to consider the flaws and uncertainty that are built into any dataset, and any analysis.

If we can teach our students, from pre-school onwards, to question their data, to ask “how many ways is this data flawed?” rather than assuming the data is perfect, then perhaps we can build a world which centres critical thinking and evaluates evidence.

This is why using real datasets rather than nice clean sets of fake numbers is crucially important to teaching data science. Because real world datasets are never nice, clean, and straightforward. There is no need for scepticism and critical thinking in textbook examples. But kids who have used real data in their learning are equipped to tackle real world problems.

Can you share some examples of flawed data? What consequences have you seen from people assuming data is perfect?


Using Real Data Projects to Engage Kids with STEM

I want to start by asking you a question: What gets you out of bed in the morning? What really motivates you?

For me it’s the chance to make a difference in the world. It’s wanting to leave the world a better place than I found it.

And that’s something that STEM skills are perfect for. They are for problem solving, for designing better ways to do things. For bringing clean water, clean power, increased food production, solutions to climate change, safer transport, personalised medicine, and a whole host of innovations to the world.

But when I first started teaching in a high school – a science school, no less – we were teaching “STEM” as “fun stuff”. Drawing pretty pictures. Making robots follow a line. Playing with toys.

How many of us are motivated, I mean really motivated, by toys? Some of us are, especially technical people! But those are generally the people we’ve already GOT in tech! I’m much more interested in the people we haven’t got yet.

All too often we ask those kids who are not already into tech to get out of bed for the chance to have fun. And fun is great – I like to have fun, we all do! And not all of my fun is finding an interesting new dataset and analysing the hell out of it, I promise. I do have other ways of having fun besides writing an interesting new Python script. Really I do. But fun doesn’t get me out of bed in the morning. Fun is a hobby. A diversion. A toy. That’s not what we need kids to understand about STEM.

We are handing our kids a world in desperate need of creative solutions. Of innovation and entrepreneurship. Of change.

And we’re telling them that STEM is fun! It’s for designing 3D jewellery. It’s sparkly. It’s pink. It’s useless.

We are doing kids a huge disservice. They’re kids, therefore fun is the way to reach them, right? It’s like saying we want more women in tech so we’re going to paint some things pink and offer some courses in the chemistry of makeup (a real suggestion that was made at an actual school). It’s like saying “women do hardware too, let’s sell them some pink hammers.” (And that’s also a real example.)

When we were teaching computing using “fun toys” the overwhelming feedback I was getting – from science students – was “Why are you making me do this? It’s not relevant to me. I don’t want to do it.”

Can you guess what happened when we made the year 10 computer science course a data science course instead of a “fun toys” course? We were teaching the same basic coding skills. We still had them learning about selection, iteration, variables, and functions. But now we were using real datasets and finding real questions to answer, real problems to solve. Do you know what happened?

Suddenly they could see the point. They found it useful. They found themselves using the skills in other subjects, especially in project work. And the numbers who went on to the year 11 elective computer science subject increased by around 30%, with double the number of girls.

And none of it was pink!

In that first data science course I had a student who was super interested in politics, and there was a federal election on, so we used data from the Australian Electoral Commission. It turns out you can download CSV files containing every single vote from any Australian election.

We used the Senate votes for Victoria from the 2016 federal election. At over 3 million lines of CSV, the file contained the polling booth, the electorate, and a 151-position comma-separated string recording the contents of every box on each ballot paper.

3 million lines of CSV won’t even open in Excel, so the kids had to program just to open the file. They learned to test their code on a small section of the file, so that it didn’t take ages to run. They learned about what questions a dataset could answer.
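The kids’ approach – stream the file rather than load all 3 million rows at once, and test on a small slice first – might look something like this in Python. The filename and column layout here are placeholders for illustration, not the real AEC format:

```python
import csv
from itertools import islice

def first_rows(path, n):
    """Read only the first n data rows - a small slice for testing your
    code quickly, instead of churning through millions of lines per run."""
    with open(path, newline="") as f:
        reader = csv.reader(f)      # stream rows one at a time
        next(reader)                # skip the header line
        return list(islice(reader, n))

# e.g. sample = first_rows("senate_votes.csv", 100)  # placeholder filename
```

Once the code works on the sample, the same loop can churn through the full file without ever holding it all in memory.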

They found their own questions – from which party’s voters were more likely to follow the how-to-vote cards, to where Pauline Hanson voters came from. They asked questions about their own electorate or polling booth and how they compared to the whole state. About female representation and share of the below-the-line vote. About preference flows and about how polling compares to actual results. Every student asked a different question, which meant that every student had to write different code to find the answer (goodbye plagiarism!).

And then the important part happened: they had to visualise their results. To create an image, more interesting than an ordinary graph, that conveyed their results in a convincing, valid, and compelling way.

They learned about channels of information, about the human visual system and attention. About colour blindness and the problems with the rainbow scale. They learned which types of graph are appropriate for different types of data, and how to customise their graphs so that they don’t mislead their audience.

As well as learning to analyse and visualise data themselves, they also learned to be critical data thinkers, reviewing the graphs and statistics they were presented with and asking critical questions like “How was that data collected? What was the sample size? And where is the zero on that scale?”

We have a tendency to bend at the knees when presented with statistics and graphs. It seems to automagically make information more credible. But they are very easy to manipulate. So it’s crucial, in this era of fake news and anti-science, that our kids learn to be critical thinkers.

Another reason we need our kids to learn data science skills is the increasing dominance of Big Data and Machine Learning in every aspect of our lives. They are determining our healthcare and our access to home loans. They’re directing our traffic and influencing our consumption and behaviour – even our votes! They’re controlling our justice systems and our borders. But how many of you really feel like you have a good understanding of how the algorithms that do these things actually work? How many of you are confident in the fairness, impartiality, and accuracy of these systems?

And this is a highly educated audience. Think about that for a moment. These systems are running our lives and we have no say in how they operate. We don’t even understand them.

So it’s crucial that we educate upcoming generations to have informed, intelligent conversations about these systems. So that we can have that long delayed community conversation around the way we manage our data – and the way it manages us.

And to do that, we need to engage kids with data in the classroom. To show them its relevance, and to build their Data Science and technological skills.

The problem with finding cool datasets and building them into interesting lessons is that it’s hugely time consuming and highly skilled work. When I used the electoral data it took me hours to make sense of the dataset. I couldn’t even find anyone in the electoral commission who could explain it to me, so I had to derive it from first principles. The only reason I had the capacity to do that is that I was part time, so I used my own time, unpaid, to find the dataset, make sense of it, and build a project around it. Most teachers simply don’t have the time to do that – or, to be honest, the skills.

It’s also important to acknowledge that student motivation is not the only issue we face in teaching tech in schools. The problems are many. Tech has an image problem almost as bad as teaching does! So kids don’t see themselves as the type of people who go into tech (and this affects boys as well as girls).

We attract the kinds of people into tech that we already have – generally people with a very narrow personality and background distribution. This conference is obviously full of the exceptions to that rule. 🙂 But it’s a real problem if you want innovative solutions that meet the needs of everyone, not just the tech nerds of the world.

We lack skilled teachers, in part because the correlation between that classic tech personality type and the kind of person who loves to teach seems to be, frankly, quite low, but also because if you have tech skills you can EASILY earn a LOT more and work a LOT less hard by NOT going into teaching. But we also have a large cohort of teachers who are flat out terrified of technology. So if we force those teachers to teach our shiny new Digital Technologies curriculum, they can’t help but convey that fear to their students.

That’s why I founded the Australian Data Science Education Institute (which, by the way, is a registered charity). To find and make sense of the datasets, to build cool projects around them that are aligned with the curriculum, and to train teachers in the skills they need to incorporate data science into their teaching. We start from where teachers are and build their skills gradually, in the context of their own disciplines.

We don’t expect them all to program on day one. We start with spreadsheet skills and projects that both teachers and students find relevant and interesting.

Using Data Science teaches kids why STEM matters, and gives them the opportunity to use STEM skills to change the world. So we use this template for finding, analysing, and solving problems in the local community.

  • Find a problem
  • Measure it
  • Analyse the measurements
  • Communicate the results
  • Propose a solution
  • Implement the solution

And that’s the crucial part that we need to make the default position anywhere we try new things: that we measure and analyse them to see if they work. Because in governments, in schools, in businesses: too often we see new programs implemented as a matter of ideology, and the only “assessment” that happens is for the champion of the program to say “It was awesome!”

And when you say “How do you know?”, everyone goes suspiciously quiet and changes the subject.

Incidentally, that’s why ADSEI collects feedback data on all of its courses, and why we’re also building a feedback mechanism for our online resources.

We also have a template for exploring global issues:

  • Find a dataset
  • Explore & Understand it – and this means understanding the domain, a fact we tend to lose sight of.
  • Find a question it can answer
  • Analyse it to find the answer
  • Communicate your results

ADSEI’s ultimate goal, of course, is to put itself out of business. To build Data Science into the way teachers are trained to teach. To build a community of Data Scientists and teachers who can support each other by sharing resources, project ideas, and cool datasets.

I think my job is safe for the moment!

For now we have grants from the Victorian Department of Education and Training, Google, and the Great Barrier Reef Foundation. We’ve developed teaching resources for Monash University, CSIRO, and the Digital Technologies Hub. We have delivered workshops and talks at conferences and schools, and we are working with the wonderful people at Pawsey Supercomputing Centre and the West Australian Marine Science Institute.

And ADSEI has only been in existence for 18 months.

Over the next few months we’ll be running workshops in Perth, Melbourne, and Alice Springs.

Next year in October we’ll also be running the Inaugural International Conference on Education and Outreach in Data Science and High Performance Computing, with the support of the awesome Australasian eResearch Organisation – Sponsors welcome!

So if any of this sounds like a mission you can get behind, join the Slack channel, check out the website, send me an email, or tweet at me wildly. Because Data Literacy and Data Science skills are something all kids need to experience, before they decide that Data Science is too hard, too boring, or not relevant to them!

If Data Science is going to drive us to the future, I want to put all of our kids in the driver’s seat!

Primary School Data Science Template

People often assume that Data Science in Schools has to be secondary school only, because how could primary kids do Data Science? The truth is that Data Literacy and Analysis skills can be built into the curriculum from as young as 5 years old. And it’s really important that kids learn Data and Tech skills early, because by the time they get to secondary school we’ve already lost a lot of them, believing that these skills are too hard, not relevant to them, or just not interesting. We need to show them early on that Data Science is a useful tool that they are more than capable of mastering.

So how can primary kids do data science? Like any other data science project, it’s crucial to put it in context, so the kids can see the point.

So Step One is: Find a problem the kids care about

It might be litter in the playground, traffic at pickup time (or, to put it in a way kids will really relate to – how long they have to wait to be picked up, or how far they have to walk to the car!), or access to play equipment.

Step Two: Measure the problem

Count and identify the litter, time how long people have to wait to be picked up, measure how far people have to walk to the car, or count the number of people who get to use the monkey bars every lunchtime for a week.

Step Three: Analyse the measurements

For younger kids, that might simply mean sorting the rubbish into categories (eg chip packets, icy pole wrappers from the canteen, and sandwich bags or cling wrap from home), or organising the drop off or play equipment measurements by year level or by day. For older kids, you might enter the data into a spreadsheet and use a formula to calculate some averages over the week, or by area or year level.
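For classes that are ready to move beyond the spreadsheet, the averaging step can be sketched in a few lines of Python. The litter counts here are invented for illustration:

```python
# One count per school day, Monday to Friday (made-up numbers).
litter_counts = {
    "chip packets":      [4, 7, 5, 6, 3],
    "icy pole wrappers": [2, 3, 1, 4, 2],
    "sandwich bags":     [5, 5, 6, 4, 5],
}

# Average count per day for each category - the same result a
# spreadsheet AVERAGE formula would give you.
averages = {
    category: sum(counts) / len(counts)
    for category, counts in litter_counts.items()
}
print(averages)
```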

Step Four: Communicate your results

This is where you graph or visualise your results. For the littlies, they can “graph” the results by stacking up blocks to represent the different categories. Green blocks for chip packets, blue ones for icy pole wrappers, etc. This is a great, tangible exercise in data representation. Older kids can draw graphs or do them in a spreadsheet like Excel or Google Sheets. It helps to get them to draw pictures and labels on their graphs to make them more interesting and compelling.
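The block-stacking idea even translates directly into a first “bar chart” in code: one symbol per piece of litter, just like one block per piece. A minimal sketch, with invented counts:

```python
# Made-up counts for one day's litter collection.
litter = {
    "chip packets": 4,
    "icy pole wrappers": 7,
    "sandwich bags": 3,
}

# One '#' per item, just like stacking one block per item.
bars = [f"{category:18}{'#' * count}" for category, count in litter.items()]
print("\n".join(bars))
```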

Step Five: Propose a solution

Think of a way you might solve the problem. For litter the kids might come up with nude food day campaigns, or a change to the way food is available in the canteen – such as using larger chip packets and handing out chips in small paper bags, instead of lots of small plastic packets. For traffic it might be that pickup times can be staggered by year level, or older kids might be encouraged to walk further and be picked up a block or two away.

Step Six: Implement your solution

This can be a whole school initiative, and involves a lot of communication, using the graphs from Step Four to tell the community what’s happening and why.

Step Seven: Measure again to see how well it worked

This is my favourite step, often sadly missing from political initiatives. Once you’ve tried to fix something, you need to measure it again to see if you actually made any difference.

You can even repeat steps 3 to 7 with several different solutions to compare which ones work better.

I love this template because it is the essence of STEM – It’s a science experiment, devised by the kids, with rigorous measurement and evaluation. Maths and Technology are used in handling the data, and you can use Engineering to design your solution, or even to measure the problem if you’re looking at environmental conditions like heat, noise, or water and want to use some sensors.

You can scale the technology use up or down depending on available resources and where your students are up to. There are no robots with parts to fail. And the best part is that the motivation is built in. The kids are learning that STEM and Data Science are tools you can use to solve real problems in your community. They’re not just a bit of fun that’s not relevant to their futures.

ADSEI is developing more projects like these over the next year, as well as building a network of teachers interested in sharing their ideas and supporting each other to introduce integrated STEM and Data Science in the classroom. Jump onto the mailing list to stay in touch, and feel free to share your own ideas in the comments on this post!

Computing with Purpose

I am increasingly angry about the pinkification of Computer Science and Engineering. Pinkification is the presentation of computing and STEM skills as being about 3D printing jewellery, drawing pretty pictures, or somehow involving fluffy animals, in a desperate attempt to interest girls. Aside from how insulting it is to assert that painting something pink and sticking a few sparkly things on it is a great way to attract girls – because we are fundamentally shallow, apparently – the many pinkified programmes trying to attract girls to CS are, as far as I can tell, having very little impact on diversity in Computing and other technical fields.

This doesn’t surprise me. For years I taught at a science school and my boss insisted that we had to teach Computer Science to the year 10s through a “fun” approach – drawing pretty pictures, making robots follow lines, this sort of thing. And the single most common feedback was “why are you making me do this? It’s just not relevant to me!”

Bear in mind these were science students, and if science students think computing isn’t relevant to them, then we’re really doing something wrong. Computing is integral to science now, and I am constantly meeting scientists who lament their lack of computing skills, and tell me it’s limiting their work in the worst way.

When we finally shifted to teaching computing skills in the context of data science – using real data sets and authentic problems, giving the students the opportunity to make a real difference in the world – the change was dramatic. We studied everything from election results to microbats, from climate change to neuroscience, and the number of girls choosing to pursue further study in computing doubled (from 5 to 10 – still distressingly low, but a big step forward!), but it wasn’t just girls who got more interested. A lot of the boys could finally see the point of computing. Data Science engaged kids in computing skills in a way the “fun stuff” never did.


You see, the lack of girls in CS is only the easily measurable side of our lack of diversity. The big problem is that we are, for the most part, only getting the types of people in computing that we already have. The stereotypical kids who are already interested in computing, have been coding more or less from birth, and who really aren’t interested in much else.

We need to motivate a much wider range of people to at least try computing, to see if it’s something they might be interested in. We also need everyone to be Data Literate so that we can think critically about data and graphs we’re shown, and so that we can engage in intelligent conversations as a society about which kinds of Data Science are ok and which ones we’re not comfortable with.

Motivation is key, but I have come to the conclusion that fun isn’t actually terribly motivating. It interests me that “fun” is often still seen as the best way to attract kids to a subject. At best, if it actually is fun, using the “fun” approach to STEM skills may introduce it as a hobby, or a fun way to spend a few hours, but it’s hardly inspiring as a career choice, because it lacks a sense of purpose. It also sells our kids painfully short – they like fun, sure, but more than anything kids today are worried about their future, and the future of the planet. They want to make a difference.

Nicky Ringland, one of the greatest change makers in Computer Science Education in Australia, recently sent a tweet that finally gave me the phrase I’ve been looking for. She said girls get very engaged in “Computing with Purpose”, and added that the bonus is you also engage more boys with this kind of computing, not just girls.

That’s it. That’s what attracted me to computing. It’s what engaged my students with Data Science, and it’s why I started the Australian Data Science Education Institute. To show teachers and students that computing has a purpose. It’s not just something that’s been randomly jammed into the curriculum. It’s not about teaching to an exam. It’s not something to do because the teacher told you to. It’s something you can actually use to understand and even change the world.

Now that is a purpose everyone can get behind!