ADSEI – Teaching Kids to Change the World

Lies, Damned Lies, and AI

There’s a new kid on the “outrageously marketed trash generator” block that houses generative AI and Large Language Models, and it’s Meta AI. Never mind that it appears to be at least partially powered by Microsoft’s Bing, it’s a new chance to interact with AI that many people are leaping on with wildly incautious enthusiasm. In an attempt to keep my blood pressure at a reasonable level, I have been avoiding it until today, but since I am also avoiding almost every news source on the internet, my rage-o-meter was a little low, so I figured I’d see what this latest dumpster fire of a technology would give me.

I keep seeing people recommending this chatbot as a useful source of all kinds of info, which is understandable, since, when I asked Meta AI what Meta AI was good for (getting meta with meta, as it were), it said this:

Meta AI is good for a wide range of tasks, such as:
Answering questions

I can process natural language queries and provide relevant answers. I have been trained on a massive dataset of text from the internet and can knowledgeably discuss topics ranging from science and history to entertainment and culture.

If you use it on the web, you get this lovely front screen, offering a host of chatty goodies.

It sounds fabulous, doesn’t it? After all, who is likely to notice the tiny pale grey fine print underneath the “Ask Meta AI anything” text box that says “Messages are generated by AI and may be inaccurate or inappropriate”?

We know, sadly, that people are inclined to believe computers even when they are explicitly told the computers are wrong. Even when that wrongness has serious consequences. Look up Biometric Mirror, by Niels Wouters, or Automating Inequality by Virginia Eubanks for some really disturbing examples. (You can also see some in my book, Raising Heretics.) Chances are no disclaimer would really work, but you can hardly get a more subtle, discreet, intended-never-to-be-seen disclaimer than the one on the Meta AI page.

People keep telling me this tech can only improve, so I gave it the benefit of the doubt, and threw it one of the tests that often cause me grief in my attempts to dine out or at people’s houses. Is this product gluten free?

First, oats. Yes, Meta AI said, oats are naturally gluten free BUT they often contain gluten due to contamination during the processing phase. They also contain avenin, which many people who react to gluten will also react to. Excellent. A+. Well done.

Second, rice bubbles. Here, we struck a problem.

Meta AI very confidently asserts that Kellogg’s rice bubbles, which I know to contain gluten, are gluten free. Again, giving it the benefit of the doubt, I check with Kellogg’s themselves.

In case you are not as obsessed with gluten as I am, due mostly to its catastrophic effects on my digestive system, let me explain that barley malt does, in fact, contain gluten. Kellogg’s rice bubbles are not even remotely gluten free. Note that there is no disclaimer in the chat where Meta AI asserts that they are gluten free (the disclaimer is only on the website; I was using it through Messenger), AND it goes on to claim that these rice bubbles, which we know to be chock full of gluten, are certified gluten free by Coeliac Australia. Now, I reckon Coeliac Australia, were it a large company with a fistful of lawyers, might want to take Meta to court for defaming it in this manner, but, of course, it is a charitable organisation largely run by volunteers, so we know who’s likely to come out on top of that fight, don’t we? I’ll give you a hint: it’s not the good guys!

So next time someone is baking something for me that contains rice bubbles, they might well ask Meta AI whether rice bubbles are gluten free, and things would… well… let’s just say deteriorate rapidly from that point onwards. I don’t want to draw you a picture. You wouldn’t enjoy it.

Now, the fact that generative AI systems are not truthful, do not even try to be truthful, but are actually simply plausible-sounding trash generators is not news. What makes me really stabby is that companies like Meta are still marketing them as useful sources of information. They are lying to us, and we are letting them.

I’m sure there used to be truth-in-advertising laws. Maybe they don’t exist anymore – a relic of a more honest time – or maybe they are simply too easy to weasel your way around if you are a large tech company with an army of in-house lawyers and PR people. But I keep seeing people who should know better asserting that these chatbots are amazing sources of useful information. They are not. They are sources of incredible bias and dangerous untruths. I can’t really call them lies, because chatbots can’t lie; they have no agency. They can’t think. They can’t be rational. All they can do is sound plausible.

The companies that are trying to sell them, though? They are absolutely lying. They are constantly encouraging us to use these systems in ways that will ultimately do harm. And it’s time we put a stop to it.
