Raising Heretics – why should we?

This is an excerpt from the Introduction to Raising Heretics: Teaching Kids to Change the World, which is due for publication on August 1st. We’ll publish more excerpts here from time to time, so check back for more!

In this book, I want to show you how Data Science Education is key to nurturing a rationally sceptical, creative, ethical, problem-solving population who can save the world.
I’m going to do that by looking at the problems we have in the Data Science and Technology communities today, and how those communities are shaping our world – problems and all – in Chapter 1: “Who’s in Charge?”

Given that Data Science is in the driver’s seat, taking us towards a future we are not yet equipped to understand, Chapter 2: “The Shape of the Future”, talks about what the future could look like if everyone had enough data literacy to form evidence based policy, support high quality science, and have a say in the shape of our future.

Of course, if we want an evidence based society that treats science with respect, we need to understand how science actually works. Too often a change in our understanding of something – whether it’s climate change, a virus, or our diet – leads us to think that science got it wrong. Scientists, however, know that this is how science progresses: by improving our understanding of complex systems. That means that sometimes what we think we know about science today turns out to be wrong tomorrow. This is science at its best. Unfortunately, there is a perception in the wider community that science is solved, and science education reinforces that idea quite firmly. Chapter 3: “Science is Solved”, looks at the way we (mis)understand science, and how we can fix it.

I’m then going to explore the issues with our current education system in more depth. There is no such thing as perfect data, yet we treat data with more reverence than it deserves. Our entire education system is built on the idea that learning must be measurable, yet all too often “measurable” winds up being the opposite of “meaningful”. Chapter 4: “Measurable or Meaningful, pick one,” considers how we got here, and how we can create an education system that focuses on meaningful outcomes and develops our students into rational, ethical heretics.
All of these goals require us to get comfortable with the idea of uncertainty. To be prepared to challenge the status quo, query accepted wisdom, and even to question our own findings. Chapter 5: “Accepting the Unexpected,” focuses on why uncertainty is important, and how we can get comfortable with it, especially in education.

Why should you take my word for it? Chapter 6: “Projects with Impact,” goes into detail about how Data Science projects work, with case studies from my own teaching, and Chapter 7 outlines templates for Data Science projects involving community projects and more global issues, with examples of units ADSEI has created right across the curriculum, from Humanities to STEM.
Finally, how do we get there from here? Chapter 8: “What now?” maps out what we need to do to overhaul our education system and raise all of our children to be rational heretics, so that they can understand the world, and then save it.

Win for your school!

Raising Heretics book cover - a person looking sceptical surrounded by objects

We are thrilled to announce that every copy of Raising Heretics pre-ordered here goes into the draw to win a $5000 school consultation package for your school! If you want your kids’ school, the school you teach at, or a local school, to have the opportunity to develop projects that build kids’ STEM and Data Science skills while solving real problems in their own communities, buy a copy of Raising Heretics: Teaching Kids to Change the World and enter the draw by filling out the form!

If you’ve already bought your copy, you can still fill out the form.

Terms and Conditions

  • Every copy of the book purchased through this page earns one entry into the draw
  • Any donation over $30 through the same page qualifies for one extra entry into the draw
  • The winner will be drawn by Dr Linda McIver at the online launch on August 1st and announced on this blog the same week.
  • The winning school has three weeks to accept the package in writing to ADSEI, otherwise another winner will be drawn
  • All projects developed as part of the winning package will be shared on the ADSEI website under a Creative Commons Attribution-NonCommercial licence
  • The package is valid until December 2021. If the school cannot make use of the package in this time and notifies ADSEI, another winner will be drawn if enough time remains.
  • The winning school agrees to publicity around the resulting programs over the course of 2021 and 2022.

2021 Seagrass Data Visualization Competition

Seagrass in Moreton Bay

The Australian Data Science Education Institute (ADSEI) and Science Under Sail Australia (SUSA), with support from the GBRF and Reef Trust Partnership, are thrilled to announce our 2021 Data Visualization Competition, open to all Australian Secondary School students.

Using data collected by SUSA’s citizen science programme for schools in 2019 and 2020, your challenge is to create effective and compelling visualisations of the different benthic habitats – including seagrass, coral and bare substrate – found on the Great Barrier Reef.
Choose either the Main Project or a Data Analysis Project.

Main Project
  • Present the locations of the survey sites on a map, with the ability to zoom out to see data at the whole-of-GBR scale, and to zoom in and look at a single bay or island
  • Provide options to show people what the benthic habitats look like at different locations, using our photos or videos (2000 videos available)
  • Provide a visual link (colours, shading, or lines around multiple sites) that connects survey sites with similar benthic habitats.

Data Analysis Projects
  • At what depths are seagrasses observed in different regions of the GBR?
  • Does the maximum depth at which seagrass is observed correlate with the GBR water quality (clarity) data, available through GBRMPA? (If you need help getting this data, contact ADSEI.)
  • What other variables predict or correlate with observations of seagrass, or of living hard coral?

The results will be judged by SUSA and ADSEI. The student with the best project will receive a $100 gift voucher. Second and third place will receive $50 gift vouchers. All projects that meet the criteria will be displayed on the ADSEI website and publicised on social media.

While students retain the copyright, submitting the project to this competition grants SUSA, ADSEI, and the GBRF the right to display the work publicly in any form in perpetuity.

Entries are due June 30th 2021.

Register now for free!

For queries, please email

Getting to the hard part

Design Thinking is a bit of a buzz term in education circles these days, and it’s great to see kids developing their creative thinking processes. I think design thinking becomes a problem, though, when it skips the hard part. Just as no battle plan survives contact with the enemy, no solution survives actually being implemented – at least not without modifications, adaptations, and plenty of “oh heck, we never considered that” moments.

And even if we get as far as implementing a solution, we have a terrible tendency to think that’s the end of the project. It turns out that there’s another hard part that comes after implementing a solution: testing how well it actually solves the problem, and actively looking for the flaws. This is a crucial step for several reasons.

From a learning perspective, the “how well does this work and where does it fail?” step avoids the trap we all tend to fall into at times, which is the assumption that our solution is perfect, and that having implemented it we can just walk away. Just as there’s no such thing as a perfect dataset, there is no such thing as a perfect solution. This evaluation phase is important for figuring out what works, what doesn’t work, and how we can improve on our design. The assumption is that there are problems, and we need to look for them, rather than assuming it’s perfect until proven otherwise – if anyone happens to be paying attention.

Evaluation is also important because sometimes our solutions actually make things worse for some people. Some years ago Monash University went smoke free. In an attempt to cater to smokers, they created designated smoking points around the edge of the campus. Unfortunately some of those smoking points were right next to the main cycling path to and from the university, which meant cyclists who were riding hard uphill then drew clouds of cigarette smoke into their lungs. Without an evaluation and feedback phase, this was never noticed by the powers that be, and those smoking points remained in place for years. As far as I know they’re still there.

Then there are unintended consequences – it might seem a no brainer to ban plastic straws in order to reduce plastic pollution, but it turns out that bendy plastic straws are crucial to some people with certain types of disabilities. If students are designing a solution to plastic pollution and declare a straw ban the way to go, they might never figure out that there are drawbacks to this cunning plan. And people who never encounter the drawbacks to their cunning plans tend to go on to implement cunning plans on a national or even global scale that sound like a great idea at the time, but that turn out to be disastrous in practice.

Cunning plans like introducing cane toads to Australia to control the beetles eating sugar cane. Or introducing rabbits for hunting. Or using thalidomide for treating morning sickness. Using Chlorofluorocarbons (CFCs) as refrigerants and propellants. Or even burning fossil fuels. Those all worked out so well for us…

One thing that has shocked me while doing research for my book, Raising Heretics, is the lack of tracking when a new medical treatment is released into the community. When someone devises a new surgical or drug treatment, there is no formal mechanism for tracking and monitoring the patients to ensure that they do well on this new treatment after the initial clinical trials. Look at the use of transvaginal mesh, which has had horrific consequences for many women. In many cases those women were not believed by their doctors when they reported painful and debilitating after-effects from the surgery, and no-one was keeping track to see if there was a problem.

New medical treatments – both drug and surgical – typically go through clinical trials where patients are carefully selected and have no complicating conditions. Results from those trials determine whether a treatment will be approved, but they may also be the last time that treatment is ever studied or monitored in any way.

This means that issues such as the birth defects resulting from thalidomide can take years, or even decades, to be identified, which results in vast amounts of unnecessary suffering.

Now imagine if we routinely evaluated the impact of new products, programmes, treatments, and policies. Imagine if, when you came up with a solution, your first question was not “does it work?” but “how well does it work, and where will it fail?” If you didn’t ask “is it right or wrong?” but instead “how is it flawed?” And if, instead of asking “does this work for me?”, we asked “who doesn’t this work for?” Or, even more importantly, “who might this harm?”

Imagine if those questions were built into every project we do at school, so that kids learn that their answers aren’t simply right or wrong, they are complicated and need to be evaluated. So that they learn that they are not perfect, and neither is their work, but that everything can be evaluated, tested, and improved.

Imagine if our politicians, our business leaders, and our school leadership teams knew that. How different would the world look?

Diversity isn’t just about women

Some years ago, at a reunion with some old friends, one of their children came charging up to me and said, in a most outraged voice, “Your son thinks he’s a girl!”

I laughed and said “My son IS a girl.” Which made his little brain explode. He was confused because Sol had short hair and was wearing trousers, which did not fit with his internal image of a girl. But that conversation looks a little different now, because a few months later, when Sol was 10, they came out to us as non-binary. This means they don’t identify as male or female, and being misgendered – that is, identified as male or as female – is really traumatic for them.

At the time I don’t think I’d even heard the term. But more and more young people are identifying as gender queer or non binary, and because I write about it a lot, and advocate fiercely, people tend to talk to me about it. A couple of weeks ago I had two people approach me ON THE SAME DAY saying their kids had just come out to them. Knowing we were a few years further down the track, they were looking to us for support and advice. And every time I talk or write about it, more people contact me, relating to the topic. This is not contagion. This is empowerment. As there’s more representation, more kids feel safe to come out.

But this is not my personal blog, this is a blog about Data Science Education. So why am I telling you this? Because all of the well-intentioned Girls in STEM programmes out there are explicitly excluding an already marginalised and often traumatised group. Non-binary and gender queer kids often have to fight simply to identify as themselves. They have to carve out a safe space in their families, in their schools, and in their social lives. Asking them to choose a programme for girls, or, indeed, one for boys, either directly causes them trauma or explicitly excludes them.

The other reason that Girls in STEM programmes are a problem is that women are only the obvious part of our diversity problem. By trying to build the number of women and girls in STEM, we are only tackling the easy part – though it’s not that easy, judging by the sheer volume of women in STEM programmes and the stubborn failure of the numbers to actually shift.

The problem is that we consistently attract the kinds of people to tech that we already have. We are missing big chunks of the population – boys included. Boys who don’t see themselves as nerdy, who don’t see the point of tech. Girls who don’t see it as relevant to them. Non binary and gender queer kids who don’t see themselves as represented or welcome in any of the tech programmes available to them.

If we had true diversity in technology and data science, we’d have a range of ethnic and cultural backgrounds, as well as people with a wide range of physical abilities. We’d have people on our design teams who are mobility compromised or vision impaired, who have allergies, varied gender identities and sexualities, and every possible skin tone and body shape. We’d have people who act differently, dress differently, think differently, and have different needs… I have headphones that don’t work very well with long hair, for goodness’ sake! Guess who was on that design team?

Fortunately, there’s a fix.

When I was teaching, and made the switch from teaching year 10 kids tech skills with toys to teaching them with data science, we not only doubled the number of girls in the elective year 11 computer science subject the following year, we also saw a big jump in the number of boys who enrolled.

The key to encouraging a diverse range of people into STEM careers is kids learning that STEM skills are tools you can use to change the world. At ADSEI, we create projects and train teachers to empower kids to create change in their own communities. From five years old to 18 and beyond, we use STEM and Data skills to create change.

And we do it in the classroom, as part of the core curriculum. STEM is not an elective. Data literacy and STEM skills are something everyone needs in order to make sense of our rapidly changing, crisis-ridden world. We need to teach our kids to be critical data thinkers.

And the more kids learn data science and STEM skills from the start – the more they believe they can do it, and know that it’s worth doing – the more diverse our STEM workforce will become. It’s that simple.