
Consider the impact

At ADSEI, we build data science projects that empower students to solve problems in their own communities. One of my favourite things about these projects is the evaluation phase. Kids work to understand a problem, design and implement a solution, and then evaluate the solution, looking for ways it can be improved. Central to the evaluation phase are the questions: “Who is helped by this solution?” and “Who is (or might be) harmed by it?”

These are questions that should routinely be asked about new kinds of technology, but rarely are. For example, when Artificial Intelligence is newly applied to hiring decisions, policing, or evaluating someone’s credit score, we should automatically sit down and do a deep dive into everything that could possibly go wrong. Unfortunately, the race to market combined with proprietary technology means that systems are not only flung into production without this kind of evaluation, they are, for the most part, not even evaluated or systematically monitored once they are in use. What’s more, a defining characteristic of many AI systems is that no one can explain how they do what they do. Which raises the question: are they doing what we think they are?

There are many examples of AI systems not doing what we thought they were doing: X-ray analysers, for instance, that learned to recognise which hospital an image came from (and how common pneumonia was at that hospital), rather than whether the patient actually had pneumonia. The trouble with AI is that, while it is exceptionally good at detecting patterns, it is not necessarily detecting the patterns we want it to, and it’s frequently impossible to know what it’s really doing.
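To make that failure mode concrete, here is a minimal, purely illustrative sketch in Python (synthetic data, not the actual X-ray study). A classifier scores well in training by leaning on an artefact that reveals which hospital a sample came from; once that shortcut stops lining up with the diagnosis, its performance collapses.

```python
# Illustrative sketch of "shortcut learning" on synthetic data:
# the model exploits a hospital-identifying artefact instead of
# the (weaker) clinical signal it was meant to learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, pneumonia_rate_by_hospital):
    # Each sample comes from hospital 0 or 1, with different
    # pneumonia prevalence (the confounder).
    hospital = rng.integers(0, 2, size=n)
    rate = np.where(hospital == 1,
                    pneumonia_rate_by_hospital[1],
                    pneumonia_rate_by_hospital[0])
    y = rng.random(n) < rate                        # true pneumonia label
    # A weak "clinical" feature that genuinely reflects pneumonia.
    clinical = y + rng.normal(0, 2.0, size=n)
    # A strong artefact that reveals the hospital (e.g. a scanner watermark).
    scanner_artefact = hospital + rng.normal(0, 0.1, size=n)
    X = np.column_stack([clinical, scanner_artefact])
    return X, y.astype(int)

# Training data: hospital 1 has far more pneumonia than hospital 0.
X_train, y_train = make_data(5000, {0: 0.1, 1: 0.8})
# Deployment data: prevalence is now similar at both hospitals,
# so the shortcut no longer predicts the diagnosis.
X_test, y_test = make_data(5000, {0: 0.4, 1: 0.4})

model = LogisticRegression().fit(X_train, y_train)

print("training-distribution accuracy:",
      accuracy_score(y_train, model.predict(X_train)))
print("deployment accuracy:",
      accuracy_score(y_test, model.predict(X_test)))
print("learned weights [clinical, scanner_artefact]:", model.coef_)
```

Nothing in this toy model looks "broken" from the inside: it minimises its training error exactly as designed. Only evaluating it against data from the setting where it will actually be used reveals that it learned the wrong thing.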

There is a sad almost-joke that any new tech will be used for stalking within a month of deployment (for example, bag-tracking tags like the Apple AirTag, or running apps like Strava). And while these individual harms can be terrifying, AI and Data Science have the potential to magnify them, causing society-wide trauma to entire categories of people, without any form of oversight or transparency. Rather than dealing with the harms caused by new technology if and when they are identified, what if we routinely examined our solutions for potential harms?

The surprisingly radical part of the evaluation phase of ADSEI projects is that we assume there are harms. Students get marks for identifying the problems with their solutions, rather than getting the highest score for producing a perfect, textbook answer. Real world problems don’t have perfect, textbook answers. Real world solutions have downsides as well as upsides. And complex, multifaceted problems such as those we are trying to tackle with AI and Data Science are going to need complex, multifaceted, and carefully tested solutions.

Which is why we need to be training our kids – and ourselves! – to routinely examine our work and figure out who it helps, who it harms, and how we can improve it.
