
Making Sense of AI

So you haven’t got time to read through hefty books about the AI industry. You’ve had fun with image generators, and you find chatbots useful although occasionally alarming. But there’s information firing at you from all directions, and it’s wildly contradictory, alarming, exciting, and horrifying, all at the same time. What the heck are you supposed to do?

I don’t claim to have all the answers, but here’s what is, I hope, a reasonably sane outline of the issues, and your place in them.

It’s important to remember that we are not responsible for the behaviour of the AI industry. We did not make their choices, and we have limited capacity to influence them. Using ChatGPT to write an email or DALL-E to spit out an image for you does not make you responsible for the industry’s crimes.

That said, being aware of the ethical issues of these systems might change your attitude to them. Or it might not. Realistically, you’re not fixing the problem with your individual choices. But by having conversations about the problem, we just might be able to help shift the world towards better solutions.

One thing that has not made the headlines nearly enough is that the vast amount of data scraped from the internet and fed into chatbots and image generators contains… well… it’s the internet. It includes some pretty horrific stuff. To avoid the reputational damage that would follow from their chatbots reproducing that kind of content, companies like OpenAI hire people to “clean” their data. “Cleaning” is a very benign word for hiring desperate people on subsistence wages (if they’re lucky) to look at incredibly traumatic content for hours on end, and flag it for deletion. Check out The AI Con or Empire of AI for more detail, but be prepared to lose your lunch. It’s pretty traumatising.

What about the theft of work on a mind-boggling scale? Well, a huge amount has been said about that already, so I’m not going to go into great detail except to say that the AI industry’s claim that we can’t make progress without theft is quite nonsensical. We can make progress, and many researchers ARE making progress, on smaller systems that do one thing really well. Stealing data at massive scale to build big systems that do many things, but none of them particularly well (and some things extraordinarily badly), is not the only way forward.

Next, will we be left behind if we don’t jump on the runaway AI train? Nope. Absolutely not. True, chatbots can save us some time in specific circumstances, but they can never be trusted not to output a stream of very plausible but remarkably harmful garbage. This means that the train we desperately need to catch is not the AI train; it is the critical thinking train. If you want to be the lawyer who submits non-existent cases as precedent, the doctor who prescribes non-existent medicine, the manager who deploys a chatbot that makes call volumes skyrocket and customer satisfaction plummet, or the programmer who releases a system with catastrophic security holes, then sure, have blind faith in chatbots. For everyone else, fact-checking and quality-checking a chatbot’s output requires more expertise and critical thinking than ever.

Chatbots are not currently solving new problems or doing things for us that we could not do otherwise. Some smaller AI systems that have been trained to do one single job – such as detecting cancer in CT scans – have excellent levels of accuracy. But even they need human backup. The “human in the loop” model that automates tedious tasks and supports human expertise has great potential. The “AI does it all” model that allows companies to sack all their staff and reap vast rewards remains the pipe dream of billionaires. Companies that have tried it are finding themselves in strife. Make no mistake, this push towards huge systems that replace skilled workers is for the benefit of billionaires’ bank accounts, not the benefit of humanity.
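To make the contrast concrete, here is a toy sketch in Python of what “human in the loop” means in practice. Everything in it is hypothetical (the stand-in model, the confidence threshold, the review queue); the point is the routing logic: the system handles routine cases, and anything it is not confident about is escalated to a person rather than acted on blindly.

```python
# Toy sketch of a "human in the loop" workflow. The model, threshold,
# and cases below are hypothetical placeholders, not any real medical
# system; the point is the routing logic.

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; a real system would validate this


def fake_model(case):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    return ("all clear", 0.99) if case % 2 == 0 else ("possible anomaly", 0.60)


def triage(case, review_queue):
    """Automate the clear-cut cases; escalate uncertain ones to a human expert."""
    label, confidence = fake_model(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                # routine case: automation saves the expert's time
    review_queue.append(case)       # uncertain case: a person makes the call
    return "needs human review"


queue = []
for scan_id in range(4):
    print(scan_id, triage(scan_id, queue))
print("awaiting human review:", queue)  # scans 1 and 3 go to the expert
```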

What about sustainability? Is my ChatGPT query going to burn down the climate? Well… no… but also kind of yes. Training these systems has the biggest climate impact, and it’s shockingly bad. Your individual queries are a drop in the bucket, but the question is one of scale: our individual queries add up fast. More importantly, the companies trying to be “all things to all people” with ever-larger, constantly scaled-up systems are deliberately choosing a massively unsustainable approach to energy and water use over the much lower impact of small, focused systems that do one thing really well.
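To see why “a drop in the bucket” still matters at scale, here is a rough back-of-envelope calculation. Both numbers in it are assumptions for illustration only, not measured figures:

```python
# Back-of-envelope only: both figures below are assumed for illustration,
# not measured values. The point is how scale multiplies a tiny number.
wh_per_query = 0.3      # assumed energy per chatbot query, in watt-hours
queries_per_day = 1e9   # assumed worldwide query volume per day

daily_kwh = wh_per_query * queries_per_day / 1000
print(f"{daily_kwh:,.0f} kWh per day")  # 300,000 kWh: tiny per query, huge in aggregate
```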

Ultimately, governments, companies, and society as a whole will need to decide whether the exploitation of vulnerable people for content filtering, the theft of creative work, and the catastrophic climate impact are acceptable sacrifices. I can’t see any way in which they could possibly be. Can you?
