AI bot sends confidential info to Ontario hospital patients after recording doctors’ meeting: IPC – Canadian HR Reporter


Privacy watchdog makes several recommendations after data breach involving Otter.ai
An Ontario hospital's privacy breach involving an AI transcription tool reveals how organizational oversights can undermine even the strongest data protection intentions.
According to an investigation into the incident, the breach resulted from "two critical security gaps."
First, a former physician used his personal email address in a meeting group, contrary to hospital policy. Second, the meeting organizer did not remove the physician from the meeting invite following his departure in June 2023, according to the Information and Privacy Commissioner of Ontario.
As a result, when the physician installed Otter.ai on a personal device in September 2024, the transcription tool was able to access the rounds meeting invite via the physician's personal digital calendar.
On Sept. 23, 2024, the AI tool automatically joined a virtual hepatology rounds meeting attended by hospital physicians, according to the IPC. The breach went undetected until a meeting summary and a link to a transcript of the recording were automatically emailed to participants after the meeting.
Otter.ai — which uses artificial intelligence to transcribe spoken words into text and is designed to allow users to obtain detailed meeting notes and summaries — distributed the sensitive material automatically.
The exposure of protected health information was substantial. The IPC reported that during the meeting, the personal health information of seven patients was discussed. The compromised information included patient names, sex, physician's name, diagnoses, medical notes, and treatment information, the watchdog confirmed.
Distribution of the breach extended beyond current staff. Of the 65 users on the recipient list, the IPC noted, 12 were no longer employed by the hospital. While 53 recipients confirmed they either deleted the email or never received it, the 12 departed employees could not confirm deletion, leaving questions about the final disposition of the sensitive data.
Teresa Scassa, Canada research chair in information law and policy at the University of Ottawa, explained to the Globe & Mail that the rise of agentic AI tools – which can act independently – poses an even greater threat.
She noted that in this case, it wasn't clear whether the physician even realized the Otter.ai bot had attended the meeting on his behalf and recorded it – or whether it had done so for other meetings as well.
“That’s a whole level of autonomy and independence that we haven’t been prepared for but need to start thinking about,” she said.
The law firm Gowlings notes that Canadian employers navigating the use of AI must do so within a growing patchwork of legal and regulatory frameworks. As of Jan. 1, 2026, certain Ontario employers will be required to disclose the use of AI systems during the hiring process in publicly advertised job postings.
But broader protections remain limited, the firm said: there is no federal legislation regulating the use of AI in commercial or employment contexts, and the European Union has moved faster than North America.
In response, the hospital implemented multiple protective measures. It blocked users from using AI scribe tools such as Otter.ai and deepseek.com while on-site via a firewall configuration.
The institution also updated its training to emphasize the consequences of unauthorized tool use, including specific language that a privacy breach may result in disciplinary action, reporting to the regulatory college and the Information and Privacy Commissioner, and personal fines of up to $200,000 and organizational fines of up to $1,000,000.
The hospital's revised policies now require staff to review meeting participant lists for any unapproved AI tools or automated agents, and to remove them from meetings before proceeding or before discussing any PHI/PI/CCI, according to the IPC letter.
The IPC made several recommendations beyond the hospital's immediate responses. It urged the hospital to submit a formal request to Otter.ai to delete any personal health information (PHI) of hospital patients retained from the Sept. 23, 2024 meeting, noting that the hospital should not have relied solely on the former physician to request deletion.
The IPC also recommended the hospital update its privacy breach protocol to require the Privacy Office to directly and immediately contact third-party organizations to request the deletion of any PHI collected without authorization.
The commission further advised the hospital to audit its employee and physician offboarding process to verify that proper procedures are in place to ensure all access to hospital information systems, including access to calendar invites, is immediately revoked upon departure. Additionally, the IPC recommended that the hospital technically enforce the use of a "lobby" for all virtual meetings in which PHI is discussed by requiring the host to manually approve each participant.