Data Science Explainer

Machines are unbiased, and other bedtime stories

We are quite comfortable, these days, with the idea that computer programs do what they are told. Line by line, step by step, computers can only do what we tell them. If we tell them not to pay attention to gender, race, or appearance, then they won’t, right? Bias is magically gone!

Unfortunately, Artificial Intelligence systems – or, more accurately, Machine Learning systems – aren’t quite that simple. In fact, we don’t usually know what they are doing at all – and they can’t explain it to us. Machine Learning takes “We tell the machine exactly what to do” and converts it to “We tell the machine what information to work with.” We don’t even tell it what to pay attention to, which can be a problem when, for example, a machine learning system intended to recognise pictures of horses learns, instead, that the pictures it is looking for frequently carry the watermark of a popular horse image site, or when systems designed to detect COVID in CT scans actually learn to detect patients who are lying down.

Sometimes these kinds of mistakes are detected, because researchers evaluate the system rigorously, or program it to show the parts of the image it is using to make its decision. The sad reality is that, for commercial systems, there is no market advantage in rigorous evaluation. In fact, careful testing is a drag on the key advantage, which is being first to market. Better to get your system out there fast and drink from the surprisingly deep well of cash allocated to enthusiasm for AI, than test it to make sure that it doesn’t do unexpectedly horrifying things.

You can think of a machine learning system as an information sorter. We pour in chunks of information of varying importance, and it gives us a result. During training we tell it, in various ways, whether that result was good or not. Based on that feedback, it gives some types of information priority, to maximise the likelihood of getting the right result next time. The trouble is, we often have no way of knowing what information it’s prioritising. Maybe it’s learning to recognise a horse from hooves, fetlocks, and noses, or maybe it’s just recognising a watermark.
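To make that concrete, here is a minimal, purely illustrative sketch in Python – not a real horse detector, and every feature name and probability in it is an invention. A toy classifier is trained on made-up data where a “has a watermark” flag happens to line up perfectly with the “horse” label, while the genuinely horse-like features are slightly noisy.

```python
# A toy illustration of "shortcut learning": the watermark is a spurious
# feature, but because it correlates perfectly with the label, the model
# can lean on it instead of the real horse features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical, made-up features for each image.
is_horse = rng.integers(0, 2, n)
has_hooves = (is_horse & (rng.random(n) < 0.90)).astype(float)  # real but noisy signal
has_mane = (is_horse & (rng.random(n) < 0.85)).astype(float)    # real but noisy signal
has_watermark = is_horse.astype(float)                          # spurious, perfectly correlated

X = np.column_stack([has_hooves, has_mane, has_watermark])
model = LogisticRegression(max_iter=1000).fit(X, is_horse)

# Typically the heaviest weight lands on the watermark, not on hooves or mane.
for name, weight in zip(["hooves", "mane", "watermark"], model.coef_[0]):
    print(f"{name:>10}: {weight:+.2f}")
```

Nothing in the training process told this model that watermarks are beside the point – it simply rewards whatever gets the right answer most often.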

As noted by Machine Learning fairness expert Laura Summers, “our attempts to observe what machine learning models pay attention to in the data can lead to further misunderstandings. ‘Attention’ doesn’t map as a concept from human thinking to machine process, and our desire to infer human-like logic as we attempt to decode the black box is unhelpful. As we can see from the horse and self-driving car examples, Machine Learning systems don’t ‘think’ or have context and mental models like humans. The more we try to understand them through this lens, the more likely we are to be led astray.”

So when we use machine learning for really important things like recruitment and health, we need to be immensely cautious and rationally sceptical of the results that we get.

As I’ve pointed out before, machine learning systems are brittle – meaning they break in surprising ways, and they break suddenly. Where human performance tends to degrade slowly – for example, as it gets darker at twilight, our vision gets progressively worse – these systems tend to be fine right up until they fail catastrophically. Look at self-driving cars that use machine learning. They constantly surprise us with the things they can’t handle – like the cars in San Francisco after a recent storm that completely failed to react to emergency services tape closing the road, and got tangled up in downed power lines. The system had clearly never been trained, or tested, on emergency services tape, so the first time it encountered it, it failed to cope. The next time, it’s more likely to know what the tape means, because now emergency services tape is in its training data.

When self-driving cars break, there’s typically immediate feedback. They crash, or get stuck, or do bizarrely unexpected things. Some of the other ways machine learning systems break, though, are less obvious, because they don’t necessarily look broken just from the output – and this is the danger zone.

Let’s use AI recruitment systems as an example. There are many of them around, including HireVue, which I wrote about in Raising Heretics, and a system from a Melbourne-based startup called Sapia.ai. As with all of these systems, the key selling point is the reduced cost of recruiting, but the marketing material shouts loudly about the systems being unbiased. Indeed, the founder and chief executive of Sapia.ai boldly says that “AI is the only way to remove bias.”

Unfortunately, though, all too often, AI is actually a way to bake bias in, and even to magnify it, without us ever being able to interrogate the decision-making process to detect it. No researchers are going to be allowed near commercial software, after all! Similarly, the company will not allow examination of its training dataset, nor expose its algorithm – these are its commercial edge.

Imagine a recruitment system that learns from feedback on its decisions as it becomes “smarter” (by which I mean collects more data). The system recommends a woman for a software development job – the first woman it has ever seen apply for a job like this. The candidate is eventually rejected, on the grounds that she lacks “cultural fit”, which is an excuse often used for not hiring “people like us”. This outcome is fed back into the recruitment system, which, rather than learning key lessons about this company’s cultural fit, accidentally learns that women should not be selected for software development jobs.

So far, this outcome isn’t really any worse for the AI being involved – it would likely have happened anyway. But now you have a system used by more and more companies, which has learned not to select women for software roles. It won’t tell you that, though. It just doesn’t let women through its filter, and the companies using it lament the lack of qualified women. Meanwhile the system is learning that men do get recruited, and the bias becomes baked in. What does a successful candidate look like? A man.
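As a thought experiment, here is a minimal sketch of that feedback loop in Python. It is not based on any real product: the numbers are invented, the “cultural fit” rejection is exaggerated into rejecting every woman in the historical data so the effect is easy to see, and the key assumption is that the system is retrained on its own recommendations each round.

```python
# A toy simulation of bias becoming baked in: one round of biased human
# decisions is fed back as training data, and the model then keeps
# re-learning its own earlier output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def candidates(n):
    """Hypothetical applicant pool: skill is independent of gender."""
    gender = rng.integers(0, 2, n)   # 1 = woman, 0 = man
    skill = rng.normal(0, 1, n)      # identically distributed for everyone
    return np.column_stack([gender, skill])

# Historical outcomes: men hired on skill alone, every woman rejected
# for "cultural fit" (deliberately exaggerated for the illustration).
X = candidates(200)
hired = (X[:, 1] > 0).astype(int)
hired[X[:, 0] == 1] = 0

model = LogisticRegression(max_iter=1000)
for round_number in range(5):
    model.fit(X, hired)      # learn from the latest "outcomes"
    X = candidates(200)      # a fresh pool of applicants
    hired = model.predict(X) # the system's own recommendations
    women = X[:, 0] == 1
    print(f"round {round_number}: women hired = {hired[women].mean():.0%}, "
          f"men hired = {hired[~women].mean():.0%}")
```

The exact numbers vary from run to run, but the pattern is stable: men keep being recommended at roughly the old rate, women almost never are, and at no point does anything in the pipeline announce why.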

Now add in candidates for whom English is a second language, who use slightly unfamiliar idioms, or occasionally choose the wrong word. Their communication skills may be excellent, but to the system they don’t look like successful candidates, so they get rejected by the filter. “Oh!” moans the company concerned, “We’d love to hire culturally and linguistically diverse candidates, there are just no qualified ones applying!” … but they are applying, they’re just not making it through the system’s filters. Which are based on who made it through before.

Do you see the problem yet? Machine learning systems are starting to look just like white men hiring other white men, only this time it’s hidden under an impenetrable, commercial-in-confidence algorithm with an undisclosable training set, and it’s impossible to interrogate, and almost impossible to prove.

Unfortunately, blind faith in machine learning systems seems to be the default, where rational scepticism is desperately needed. We must learn to demand evidence for the claims made about Machine Learning systems’ behaviour, and we must demand that they be critically and rigorously evaluated before being put into real-world use.
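What might demanding that evidence look like in practice? At a bare minimum, something like the sketch below: hold out a set of applicants, compare the system’s selection rates across groups, and flag big gaps for human review. The data here is invented, and the 80% threshold is just one common rule of thumb (the “four-fifths rule” from US employment guidance), not a guarantee of fairness.

```python
# A crude selection-rate audit: the kind of basic check anyone buying a
# screening system could reasonably demand to see.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of each group that the system recommends."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

# Illustrative recommendations (1 = recommend) and a protected attribute.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["men", "men", "men", "men", "women", "women", "women", "women"])

rates = selection_rates(preds, group)
print(rates)  # {'men': 0.75, 'women': 0.25}

# Flag the system if any group's rate falls below 80% of the best group's rate.
worst, best = min(rates.values()), max(rates.values())
print("flag for review:", worst < 0.8 * best)
```

A check like this proves nothing about fairness on its own, but it is the kind of simple, repeatable evidence we should expect to see before a system is allowed to screen real people.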

Blind faith tells us AI is the answer to human frailties like bias, when the evidence suggests that AI is actually human frailties digitised, disguised, and magnified. The idea that machines are perfect and unbiased is a bedtime story. It’s time to wake up.

* For more on this topic, I strongly recommend Made By Humans, by Ellen Broad, and Automating Inequality, by Virginia Eubanks.
