We are currently drowning in tech industry marketing hype. We make purchases, develop curriculum, and even create government policy on the basis of what the tech industry tells us its products can do.
This smart watch will change your life and make you healthier.
This chatbot will save you time and make your writing better.
This app will make you more mindful/teach you German in 30 days/help you lose weight/keep your children safe.
This AI chatbot can do your research for you! And it’s better than the last one. (The next one will be better still.)
This system will keep your data secure.
It’s easy to get caught up in the hype – to buy the product, buy into the idea, plan classroom activities around it, or invest in the company that makes the most attractive claims. But there’s a two-word sentence that could protect us, if only we wielded it more often.
Prove it.
If your smart watch actually tracks sleep and activity accurately, show us the testing that you’ve done. Tell us how you calculate your sleep score, or your stress score, or your “readiness/body battery/other made-up metric” score. Show us how your smart watch’s sleep score compares with in-lab sleep testing. Tell us where it’s less accurate, and where it’s more accurate. (Most step counters, for example, are great with brisk walking, but hopelessly inaccurate with casual ambling, or even riding in a car!)
Prove it.
If your AI is truly better than the last version, or your competitors’ products, prove it. Open up the premium version to rigorous, objective testing, and publish the results. (For an excellent discussion of the issues with AI benchmarking, check out this episode of Mystery AI Hype Theatre 3000.) Be open about what it can do, and what it can’t. Don’t advertise it as a search engine, when that’s literally not what it does.
It is important to keep in mind that chatbots are not intelligent. They don’t understand the content they are fed, nor the content they produce. Nor are they search engines, returning facts out of a database of stored information. They are statistical models that use the vast amounts of training data they have been fed to predict the most likely next word, next sentence, or next paragraph, using your initial prompt as a starting point.
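To make the point concrete, here is a toy sketch of next-word prediction (my own illustration, not any real chatbot’s code). It uses simple bigram counts rather than a neural network, but it shows the core mechanism: the model outputs whatever word most often followed the current word in its training text, with no understanding of meaning and no database of facts.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" for illustration only.
corpus = (
    "the watch tracks your sleep . the watch tracks your steps . "
    "the app tracks your sleep ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))     # "watch" – it followed "the" most often
print(predict_next("tracks"))  # "your" – the only word ever seen after it
```

Real large language models do this with billions of parameters and far richer context, but the principle is the same: statistical continuation of text, not comprehension or retrieval.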
Prove it.
If your app successfully improves mental health, or helps people lose weight, or teaches them a language, prove it. Once again, open it up to rigorous testing, and publish the results. Show where it works, and where it doesn’t. Who it helps, and who it harms.
Prove it.
If your system is completely secure (and it’s very likely that there is no such thing as “completely secure” – security is an ongoing arms race between industry and hackers), then prove it. Hire penetration testers to test the heck out of it, and publish the results.
You might detect a theme. It’s all about rigorous, systematic, objective testing. Not testing done by the company selling the product. Not testing done using a test that the system knows about in advance and has been optimised to pass (like Volkswagen’s emissions testing). Testing done under real-world conditions, by independent third parties without conflicts of interest. Testing according to criteria that change, like the real world does, and that are not predictable, just like the real world. Testing whose details the companies don’t know in advance.
This is basic science, really. Test the theory, figure out where it hits the mark and where it falls short. For any solution, technological or otherwise, figure out who it helps and who it harms.
But somehow we have built an economic system where this kind of testing, this kind of openness, is “not possible” because that kind of information is “commercially sensitive.” We prioritise companies’ commercial advantage over consumer wellbeing. Over knowing whether these expensive devices actually work, or whether they are just snake oil.
There’s nothing to stop us from demanding proof. From legislating transparency. Particularly in areas like health and education, which are too important to risk prioritising commercial interests over consumer safety. I don’t mind you selling products into the health and education fields, but I want to be sure they are safe, effective, and do what they claim. That seems like a low bar, but it’s one that technology rarely clears. And it’s incredibly unlikely that it ever will, unless we make it mandatory.
Company rights long ago overtook human rights and societal well-being. Perhaps it’s time to rethink that race. For human rights to overtake company rights. For the environment to overtake profit. And for social good to be a more important measure than the economy.
