ADSEI – Teaching Kids to Change the World

Informed Consent

I was in a medical appointment today where the doctor announced at the start that she uses AI to transcribe the appointment, and also to draft the letter afterwards. She said the patient could opt out if they wanted. After a moment’s thought, the patient said they weren’t comfortable with it, though I noticed the doctor did not switch anything off. You have to hope she was waiting for consent before turning it on.

I really feel this information, particularly the name of the software, should have been provided in advance of the appointment, if there was to be any chance of informed consent. For informed consent to be possible, the person asked to consent should know exactly what they are consenting to. In the case of using AI to transcribe a medical appointment, informed consent should surely require, at a minimum, the answers to these questions:

What software is being used, and who makes it?

Where will the recording and the transcript be stored, and for how long?

Who will have access to them, including any third parties?

Will the conversation be used to train the AI system?

But, of course, we did not have those questions prepared, because the patient had no idea that AI might be used, and that they were going to be asked to consent. They had no opportunity to think about it, reflect on it, and form an informed opinion.

Many of those questions are relevant to other systems used by medical practices as well, but the really significant change here is the training question. Does the private conversation between you and your doctor get fed into an AI system and used to train its behaviour? If so, how much of that private conversation remains in the system in some form?

I looked up the company to see what their privacy statement said, and found an opening you could drive the Titanic through:

“We may disclose personal information to third parties that help us deliver our services or improve Heidi (including to software developers, information technology and communication suppliers and our business partners) or as required or permitted by law.”

Needless to say, this information was not disclosed to the patient at the time of the requested “consent”. To be honest, I’d be surprised if the doctor herself was aware of it.

This idea that it’s ok if it’s “to improve the system” covers an incredibly broad range of potential misdeeds.

This prompts a conversation that the AI industry would much rather did not take place. What does informed consent look like when it comes to AI? When companies such as Anthropic, Microsoft, Google, and OpenAI scrape the web and use the contents of hundreds of millions of web pages to train their systems (including this one), should they get consent first? All of the material on this website is clearly labelled as shared under a Creative Commons Attribution-NonCommercial license, which prohibits its use for commercial purposes, but that didn’t stop them scraping it.

For the personal information in a private consultation with a doctor, surely the bar should be far, far higher than for information publicly available on the web!

Should companies using AI for support chatbots, or transcription purposes, be obtaining informed consent from their customers before doing so?

Is informed consent for AI even possible for most users, given that most folks have no idea what AI is, how it works, or how it is trained?

Should companies that build AI systems be expected to be transparent about what they do with our data, and how our data is stored in their systems? Who is held accountable if some of our personal data leaks out in future versions of the system?

Many of these questions don’t have easy answers, but that makes it even more important that we should be asking them, opening the debate, and figuring out what we, as a society, feel are acceptable, ethical, and humane answers, as opposed to commercially attractive ones. And then we must hold the companies to a far higher standard than they are currently choosing for themselves.

At the moment, the AI industry is operating at the wild frontier of both our laws and our ethics. We let them continue out there, unchecked, at our own peril.
