Design Thinking is a bit of a buzz term in education circles these days, and it’s great to see kids developing their creative thinking processes. I think design thinking becomes a problem, though, when it skips the hard part. Just as no battle plan survives contact with the enemy, no solution survives actually being implemented – at least not without modifications, adaptations, and plenty of “oh heck, we never considered that” moments.
And even if we get as far as implementing a solution, we have a terrible tendency to think that’s the end of the project. It turns out there’s another hard part that comes after solving the problem: testing how well you’ve solved it, and actively looking for the flaws. This is a crucial step, for several reasons.
From a learning perspective, the “how well does this work, and where does it fail?” step avoids a trap we all tend to fall into at times: the assumption that our solution is perfect, and that having implemented it we can simply walk away. Just as there’s no such thing as a perfect dataset, there is no such thing as a perfect solution. The evaluation phase is where we figure out what works, what doesn’t, and how we can improve on our design. It starts from the assumption that there are problems and that we need to go looking for them, rather than assuming the solution is perfect until proven otherwise (if anyone happens to be paying attention).
Evaluation is also important because sometimes our solutions actually make things worse for some people. Some years ago Monash University went smoke free. In an attempt to cater to smokers, they created designated smoking points around the edge of the campus. Unfortunately some of those smoking points were right next to the main cycling path to and from the university, which meant cyclists who were riding hard uphill then drew clouds of cigarette smoke into their lungs. Without an evaluation and feedback phase, this was never noticed by the powers that be, and those smoking points remained in place for years. As far as I know they’re still there.
Then there are unintended consequences – it might seem a no-brainer to ban plastic straws in order to reduce plastic pollution, but it turns out that bendy plastic straws are crucial to some people with certain types of disabilities. If students are designing a solution to plastic pollution and declare a straw ban to be the way to go, they might never figure out that there are drawbacks to this cunning plan. And people who never encounter the drawbacks to their cunning plans tend to go on to implement cunning plans on a national or even global scale that sound like a great idea at the time, but that turn out to be disastrous in practice.
Cunning plans like introducing cane toads to Australia to control the beetles eating sugar cane. Or introducing rabbits for hunting. Or using thalidomide to treat morning sickness. Or using chlorofluorocarbons (CFCs) as refrigerants and propellants. Or even burning fossil fuels. Those all worked out so well for us…
One thing that has shocked me while doing research for my book, Raising Heretics, is the lack of tracking when a new medical treatment is released into the community. When someone devises a new surgical or drug treatment, there is no formal mechanism for tracking and monitoring patients to ensure that they do well on the new treatment once the initial clinical trials are over. Look at the use of transvaginal mesh, which has had horrific consequences for many women. In many cases those women were not believed by their doctors when they reported painful and debilitating after-effects from the surgery, and no-one was keeping track to see if there was a problem.
New medical treatments – both drug and surgical – typically go through clinical trials where patients are carefully selected and have no complicating conditions. Results from those trials determine whether a treatment will be approved, but they may also be the last time that treatment is ever studied or monitored in any way.
This means that issues such as the birth defects resulting from thalidomide can take years, or even decades, to be identified, which results in vast amounts of unnecessary suffering.
Now imagine if we routinely evaluated the impact of new products, programmes, treatments, and policies. Imagine if, when we came up with a solution, our first question was not “does it work?” but “how well does it work, and where will it fail?” If we didn’t ask “is it right or wrong?” but instead “how is it flawed?” And if, instead of asking “does this work for me?”, we asked “who doesn’t this work for?” Or, even more importantly, “who might this harm?”
Imagine if those questions were built into every project we do at school, so that kids learn that their answers aren’t simply right or wrong, they are complicated and need to be evaluated. So that they learn that they are not perfect, and neither is their work, but that everything can be evaluated, tested, and improved.
Imagine if our politicians, our business leaders, and our school leadership teams knew that. How different would the world look?