I was in a medical appointment today where the doctor said at the start of the appointment that she uses AI to transcribe the appointment. She also uses it to draft the letter after the appointment. She said that the patient could opt out if they wanted. After a moment’s thought, the patient said they weren’t comfortable with it, though I notice the doctor did not switch anything off. You have to hope she was waiting for consent to turn it on.
I really feel this information, particularly the name of the software, should have been provided in advance of the appointment, if there was to be any chance of informed consent. For informed consent to be possible, the person asked to consent should know exactly what they are consenting to. In the case of using AI to transcribe a medical appointment, informed consent should surely require, at a minimum, answers to these questions:
- What software are they using?
- What is the software’s privacy policy?
- Where will the transcription be stored? How secure is that storage?
- How long will it be stored for? (The doctor said “around two weeks, then I delete it”)
- Will the transcription be used for training the AI? (in which case the term “delete” is arguably misleading, because that information remains in the system in some form)
- Who has access to the transcriptions? Does the company providing the AI have the ability to access them?
But, of course, we did not have those questions prepared, because the patient had no idea that AI might be used, and that they were going to be asked to consent. They had no opportunity to think about it, reflect on it, and form an informed opinion.
Many of those questions are relevant to other systems used by medical practices as well, but the really significant change here is the training question. Does your private conversation between you and your doctor get fed into an AI system and used to train its behaviour? If so, how much of that private conversation remains in the system in some form?
I looked up the company to see what their privacy statement said, and found an opening you could drive the Titanic through:
“We may disclose personal information to third parties that help us deliver our services or improve Heidi (including to software developers, information technology and communication suppliers and our business partners) or as required or permitted by law.”
Needless to say, this information was not disclosed to the patient at the time of the requested “consent”. To be honest, I’d be surprised if the doctor herself were aware of it.
This idea that it’s ok if it’s “to improve the system” covers an incredibly broad range of potential misdeeds.
This prompts a conversation that the AI industry would really much rather did not take place. What does informed consent look like, when it comes to AI? When companies such as Anthropic, Microsoft, Google, and OpenAI scrape the web and use the contents of hundreds of millions of web pages to train their systems (including this one), should they get consent first? All of the material on this website is clearly labelled as shared under a Creative Commons Attribution Non Commercial license, which prohibits its use for commercial purposes, but that didn’t stop them scraping it.
For the personal information in a private consultation with a doctor, surely the bar should be far, far higher than for information publicly available on the web!
Should companies using AI for support chatbots, or transcription purposes, be obtaining informed consent from their customers before doing so?
Is informed consent for AI even possible for most users, given that most folks have no idea what AI is, how it works, or how it is trained?
Should companies who build AI systems be expected to be transparent about what they do with our data, and how our data is stored in their systems? Who is held accountable if some of our personal data leaks out in future versions of the system?
Many of these questions don’t have easy answers, but that makes it even more important that we should be asking them, opening the debate, and figuring out what we, as a society, feel are acceptable, ethical, and humane answers, as opposed to commercially attractive ones. And then we must hold the companies to a far higher standard than they are currently choosing for themselves.
At the moment, the AI industry is operating at the wild frontier of both our laws and our ethics. We let them continue out there, unchecked, at our own peril.

Dear Dr. McIver,
One of the flaws of our society (and it has always been there) is our own lack of interest. In short: we do not really care. AI is the new religion, and we are not aware of its weight. Most of us are unaware of the interests of the companies that develop AI. Unfortunately, the same goes for the organisations that use that AI. They do not know, and they do not seem to care. It is there, so we will use it, regardless of the ethics.
Not just AI but the whole of ICT and the internet is a black box for most people. We fail to see that the interests of companies such as Google, Meta, etc. are not always our interests. Maybe they were, in the beginning, but as soon as their impact on us reached the point where we cannot survive without their systems, they put the weight on THEIR interests. We fail to see this (or do not want to see it).
I have worked in ICT, and I have seen this development take place. I have also seen how ICT has become chaos. We keep building and building and connecting all sorts of systems, until at some point (if not already) nobody knows how the systems work anymore. If something fails, it can end in a domino effect (we had some events like that here in The Netherlands recently).
The future…? Just as deaths in traffic are considered normal, we will have to get used to ‘accidents’ in ICT and on the internet. I am glad I no longer work in ICT. I do not want to be guilty of building more chaos.
Cheers, Cor Faber