Fairness is not the default

KJ Pittl from Google spoke brilliantly about fairness in machine learning at C3DIS (The Collaborative Conference on Computational and Data Intensive Science) in May. Although I’ve thought and read a lot about this topic, her talk was electrifying. I want to capture here the points I thought were key, and none registers more strongly than this one:

“Humans have not got a history of being fair. Fair is not the default.”

To back up this point, KJ used the following slides, which really speak for themselves.


I am almost certain that none of these situations came about by malicious intent. They were just design decisions by a small group of people, for a small group of people, and they simply assumed it would work for everyone the way it worked for them.

But right there, that’s why we urgently need diversity in tech, and in data science. Because as long as the groups designing our future are largely homogeneous, there will be no one in the room to ask “But are there any people of colour in our image set?” – a question that could have averted this:

[Screenshot: an image-recognition failure caused by a training set lacking people of colour]

or to say “Hey, do you know that blind people won’t be able to use this device to enter their PINs?”

or “But what happens if you’re in a wheelchair or pushing a pram?”

or “What if you’re homeless?” “What if you have kids?” “What if you’re part time?” “What if English isn’t your native language?” “What if your eyesight isn’t great?” “What if you have food allergies?” “What if you’re a refugee?” “What if you don’t have a car?” – any one of the myriad questions that might stop us from inadvertently designing a future that locks a section of our population out.

Diversity helps us design better solutions, but it also helps us ask important questions of the solutions we have. And given that, by default, our systems will not be fair, inclusive, or equitable, we really want to make sure those questions get asked.
