
Ethical uses of AI

[Photo: Linda McIver sitting on a couch, leaning on the back of the couch with one arm and staring solemnly into the camera.]

I’ve seen a lot of talk about the appropriate use of AI in education. About students using it and passing the work off as theirs. About whether we should be encouraging students to cite their use of it, and how much they need to adapt the output to call it their own. In a recent workshop I ran for teachers about AI, one participant asked me whether I knew of any good resources that cover the ethical use of AI in the classroom, and I realised that we’re just not talking about the elephant in the room.

There is no ethical way to use AI. At least not the Large Language Models (LLMs) we’re mostly talking about these days when we talk about AI.

I’m going to unpack that in a moment, but first let me stress that I am not, for a moment, calling teachers who use AI unethical. The ethical responsibility here does not lie with users of LLMs. It lies firmly with the industry that has created and promoted the use of these machines in wholly unethical ways.

So why do I claim that there is no ethical way to use LLMs? There are three main reasons, from where I sit. All of these points refer to the current generation of LLMs. Future systems may be different, but there’s no evidence of that yet!

  1. LLMs are trained on stolen work. Whether it’s image or text generation, the companies developing these systems ran roughshod over copyright, ignored explicit statements denying them access, and simply consumed the creative work of everyone without permission. And then used that work for commercial purposes. I run a charity that has produced a considerable amount of creative work. I am also an author, and an educator. I explicitly tag my work with copyright notices that prohibit using it for commercial purposes. Yet my work was taken, and used, and will be again. This is clearly unethical.
  2. LLMs use horrendous amounts of power. I’m not going to put figures here because they are complex and highly speculative, and I don’t want to perpetuate made-up numbers! But when you factor in the training, plus the energy used to run these systems, they emit CO2 at levels that are directly accelerating climate change.
  3. LLMs use horrendous amounts of water, both in generating the energy they consume and in cooling their systems. In a world where water is becoming increasingly scarce, this is an environmental cost we really cannot afford.

There are other ethical issues around the use of AI (and I haven’t even touched on the false results), but for me, these three are insurmountable.

Again, I am not, for a moment, accusing the users of LLMs of being unethical. The blame for all three of these issues lies firmly at the feet of the companies developing, promoting, and growing these systems.

When a company markets a system as a search engine, knowing full well that it returns wrong answers as much as 60% of the time, trains it on stolen data, and knowingly accelerates climate change in the pursuit of profit and power, it’s absurd to blame the people who believe that marketing.

Ever since I read “Dark PR” by Grant Ennis (go read it, it’s revelatory), I’ve been thinking a lot about our emphasis on individual action. Grant shows very clearly how making consumers responsible for fixing corporate harms – whether by recycling excessive and environmentally problematic packaging, or by boycotting the companies involved – is a framing actively designed to free corporations from consequences and responsibility. It’s not our responsibility not to use AI; it’s the AI companies’ responsibility to train, market, and maintain it in an ethical, sustainable fashion. That said, I still don’t want its grubby, wrong, power-hungry footprint on my own work!

Teachers are wildly overworked, so I completely understand them using chatbots to help them generate lesson plans and ideas, to save them time and energy. Add to that the pervasive sense that if you’re not using AI you’re going to be left behind, and it would be most unreasonable to suggest that it’s unethical to use these tools. Particularly when organisations are increasingly demanding that their staff use AI, even if they don’t know why! (I’m hearing more and more cases of this, and it’s horrifying!)

As Laura Summers put it to me the other day, the marketing model of AI companies is 100% FOMO – Fear Of Missing Out. You have to use AI because everyone else is using AI, and you will be left behind if you don’t! It’s cunning, effective, and an outright lie.

It’s clear that the AI industry is unethical. We just need to be sure we blame the liars, not the folks being lied to.
