
Schrödinger’s AI

Two years ago I wrote a piece called “There’s no such thing as Artificial Intelligence” in response to the hype around ChatGPT and the idea that Artificial General Intelligence, or human-like intelligence, was “just around the corner”.

The hype was pretty spectacular. In one corner we had AI hype merchants (or “Merchants of Slop” as I like to think of them) raving about how intelligent their machines were, how they were displaying emergent behaviour (doing things they hadn’t been programmed to do), and how they were going to solve all of humanity’s wicked problems, such as climate change.

Ostensibly in the opposing corner, we had the doom merchants saying we needed to stop developing AI because it was inevitably going to go all Skynet and kill us all. (Many people in this corner, I am pretty sure, were simply using the doom scenario to hype the idea that AI was actually about to be intelligent, so it wasn’t really the opposing corner at all, but that’s another blog…)

A few voices tried to explain how the tech actually worked, and why Large Language Models (LLMs) weren’t anything like intelligent, or even a step on the road to intelligence, but, in fact, an evolutionary dead end. We didn’t get much airtime, but received a startling amount of backlash, even so. My friend Lilly Ryan sometimes calls herself Cassandra in the coal mine, and flying in the face of the relentlessly positive narrative around AI often feels extremely Cassandraic. Doomed to speak the truth and never be believed.

Rather naively, I assumed that once it became obvious that LLMs like ChatGPT, Claude, and others couldn’t even reliably give correct answers, much less solve novel problems, the excitement would die down, and the companies developing the systems would probably die, too. Sadly, the cat of AI inaccuracy is both out of the bag and firmly still in the bag, thus creating an unexpected new version of Schrödinger’s cat. We know that LLMs generate fake news, and we use them in search of truth, both at the same time.

It’s a rather startling level of cognitive dissonance. And trying to call it out is exhausting. It really helps when you hear other voices singing the same song, so I was heartened on Tuesday to hear Professor Emily Bender speak about her book, co-authored with Alex Hanna: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Professor Bender is a linguist, and has a real knack for fighting hype with eloquent names (the term “stochastic parrot” is one of hers). Among other things, she hates anthropomorphism of LLMs, and to combat it, she calls them “conversation simulators”, “synthetic text extruding machines”, and my personal favourite: “racist piles of linear algebra”.

Professor Bender’s approach is that by countering the hype, calling out the issues, and refusing the narrative of inevitability around AI, we can create the future we want, rather than the future that OpenAI, Anthropic, Meta, and their many frenemies are trying to create for us. Because, let’s be real, the future they want is not a utopia where AI solves all our problems. It’s a future where they are even more unimaginably rich and powerful than they are now. That’s it. That’s the dream.

Creating the future we want is, of course, ADSEI’s whole mission, and what gets me out of bed in the morning – as well as, on occasion, sending me back to bed to hide under the doona on the tough days. Although it was written before the LLM frenzy, Chapter 2 of my book, Raising Heretics: Teaching Kids to Change the World, contrasts the world we have with the world we could have, if only we went for evidence-based policy and rationally critical evaluation of our systems.

It’s the world we’re working towards, here at ADSEI, but some days it feels a long way off. It’s hard to see the progress when you look out over the world and find the air thick with hype and lies.

And yet. Not everyone is breathing it in. Some people, like Professor Bender before going on stage to speak, are still masking against covid, and some people are also masking against the hype and the lies. Refusing to drink the Kool-Aid, if you will. My “There’s no such thing as AI” piece is the most popular piece of writing I’ve ever done. Over two years later, it still gets many new hits every day. And maybe it’s preaching to the converted, but maybe it’s also a useful piece for the converted to share with the undecided.

I asked Professor Bender how we reach the folks who aren’t reading her book or listening to her podcast, and her response was “We will never reach everyone. I’m not trying to convince Sam Altman. But every person you do reach matters.”

I probably need to frame that and put it on my office wall. Every person you do reach matters. So share the posts that resonate. Argue the case. Ask difficult questions (my favourite kind), like “where’s the evidence for that?” and “Have you fact checked that?” and “How are we testing for bias in the system?” and “Who has evaluated the quality of the output? How are we measuring it?”

The more we push back, the more we make space for others to push back, ask difficult questions, and resist the hype. And just because we can’t always see the impact of our efforts, doesn’t mean it doesn’t exist.

5 thoughts on “Schrödinger’s AI”

  1. I love this so much. I’ve just come from a conference where far too many people kept saying “We have to learn to live with AI” in one form or another and it was like accepting the inevitability of losing fingers to an unguarded bandsaw.

  2. I really love that Emily explicitly calls out the inevitability narrative. We really do not have to accept a racist pile of stolen linear algebra as inevitable. We can, in fact, say no!

  3. Brilliant! Not only do we now have another item in our Schrödinger collection, but you’ve so coherently pointed to the vast BS and its foundational reasons.
