Silent cyber threats: How shadow AI could undermine Canada's digital health defences
University of Toronto provides funding as a member of The Conversation CA-FR.

Across Canada, doctors and nurses are quietly using public artificial-intelligence (AI) tools like ChatGPT, Claude, Copilot and Gemini to write clinical notes, translate discharge summaries or summarize patient data. But even though these services offer speed and convenience, they also pose unseen cyber-risks when sensitive health information is no longer controlled by the hospital.

Emerging evidence suggests this behaviour is becoming more common. A recent ICT & Health Global article cited a BMJ Health & Care Informatics study showing that roughly one in five general practitioners in the United Kingdom reported using generative-AI tools such as ChatGPT to help draft clinical correspondence or notes. While Canadian-specific data remain limited, anecdotal reports suggest that similar informal uses may be starting to appear in hospitals and clinics across the country.

This phenomenon, known as shadow AI, refers to the use of AI systems without formal institutional approval or oversight. In health-care settings, it typically involves well-intentioned clinicians entering patient details into public chatbots that process information on foreign servers. Once that data leaves a secure network, there is no guarantee where it goes, how long it is stored, or whether it may be reused to train commercial models.

A growing blind spot

Shadow AI has quickly become one of the most overlooked threats in digital health. A 2024 IBM Security report found that the global average cost of a data breach has climbed to nearly US$4.9 million, the highest on record. While most attention goes to ransomware or phishing, experts warn that insider and accidental leaks now account for a growing share of total breaches. In Canada, the Insurance Bureau of Canada and the Canadian Centre for Cyber Security have both highlighted the rise of internal data exposure, where employees unintentionally release protected information.
When those employees use unapproved AI systems, the line between human error and system vulnerability blurs.

Are any of these cases documented in health settings? While experts point to internal data exposure as a growing risk in health-care organizations, publicly documented cases where the root cause is shadow AI use remain rare. The risks, however, are real. Unlike malicious attacks, these leaks happen silently, when patient data is simply copied and pasted into a generative AI. No alarms sound, no firewalls are tripped, and no one realizes that confidential data has crossed national borders. This is how shadow AI can bypass every safeguard built into an organization's network.

Why anonymization isn't enough

Even if names and hospital numbers are removed, health information is rarely truly anonymous. Combining clinical details, timestamps and geographic clues can often allow re-identification. A study in Nature Communications showed that even large "de-identified" datasets can be matched to individuals with surprising accuracy when cross-referenced with other public information.

Public AI models further complicate the issue. Tools such as ChatGPT or Claude process inputs through cloud-based systems that may store or cache data temporarily. While providers claim to remove sensitive content, each has its own data-retention policy and few disclose where those servers are physically located. For Canadian hospitals subject to the Personal Information Protection and Electronic Documents Act (PIPEDA) and provincial privacy laws, this creates a legal grey zone.

Everyday examples hiding in plain sight

Consider a nurse using an online translator powered by generative AI to help a patient who speaks another language. The translation appears instant and accurate, yet the input text, which may include the patient's diagnosis or test results, is sent to servers outside Canada.
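To make the re-identification risk concrete, here is a minimal sketch using invented toy data (all names, postal codes and dates below are hypothetical, not drawn from any real dataset). It shows how a "de-identified" clinical record can be linked back to a named individual by joining on quasi-identifiers that were never removed:

```python
# Toy "de-identified" clinical record: name and hospital number stripped,
# but quasi-identifiers (postal prefix, birth year, visit date) remain.
deidentified_notes = [
    {"postal_prefix": "M5G", "birth_year": 1978, "visit_date": "2024-03-14",
     "diagnosis": "type 2 diabetes"},
]

# Publicly available information, e.g. a social-media post about a hospital visit.
public_records = [
    {"name": "Jane Doe", "postal_prefix": "M5G", "birth_year": 1978,
     "visit_date": "2024-03-14"},
]

def reidentify(notes, public):
    """Match 'anonymous' records to named people on shared quasi-identifiers."""
    keys = ("postal_prefix", "birth_year", "visit_date")
    matches = []
    for note in notes:
        for person in public:
            if all(note[k] == person[k] for k in keys):
                matches.append((person["name"], note["diagnosis"]))
    return matches

print(reidentify(deidentified_notes, public_records))
# A single join on three mundane attributes links the diagnosis to a named person.
```

This is exactly the cross-referencing mechanism the Nature Communications study describes: no single field is identifying on its own, but their combination often is.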
Another example involves physicians using AI tools to draft patient follow-up letters or summarize clinical notes, unknowingly exposing confidential information in the process. A recent Insurance Business Canada report warned that shadow AI could become "the next major blind spot" for insurers. Because the practice is internal and voluntary, most organizations have no metrics to measure its scope. Hospitals that do not log AI usage cannot audit what data has left their systems or who sent it.

Bridging the gap between policy and practice

Canada's health-care privacy framework was designed long before the arrival of generative AI. Laws like PIPEDA and provincial health-information acts regulate how data is collected and stored but rarely mention machine-learning models or large-scale text generation. As a result, hospitals are forced to interpret existing rules in a rapidly evolving technological environment.

Cybersecurity specialists argue that health organizations need three layers of response:

1. AI-use disclosure in cybersecurity audits: Routine security assessments should include an inventory of all AI tools being used, sanctioned or otherwise. Treat generative-AI usage the same way organizations handle "bring-your-own-device" risks.

2. Certified "safe AI for health" gateways: Hospitals can offer approved, privacy-compliant AI systems that keep all processing within Canadian data centres. Centralizing access allows oversight without discouraging innovation.

3. Data-handling literacy for staff: Training should make clear what happens when data is entered into a public model and how even small fragments can compromise privacy. Awareness remains the strongest line of defence.

These steps won't eliminate every risk, but they begin to align front-line practice with regulatory intent, protecting both patients and professionals.
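The first layer, an AI-use inventory, can start from data most hospitals already collect. As a minimal sketch, the snippet below scans web-proxy logs for traffic to known public generative-AI endpoints; the log format, usernames and domain list are illustrative assumptions, and a real deployment would use the organization's actual proxy schema and a maintained domain list:

```python
# Illustrative list of public generative-AI endpoints to flag (assumption:
# a real inventory would use a curated, regularly updated domain list).
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_ai_requests(log_lines):
    """Return (user, domain) pairs for proxy-log lines hitting an AI service.

    Assumed whitespace-separated log format: 'timestamp user domain path'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

# Hypothetical sample log: two staff members, one sanctioned intranet request.
sample_log = [
    "2025-01-10T09:12:03 nurse42 chat.openai.com /chat",
    "2025-01-10T09:12:09 dr_lee intranet.hospital.ca /ehr",
    "2025-01-10T09:13:44 dr_lee claude.ai /new",
]

print(flag_ai_requests(sample_log))
# [('nurse42', 'chat.openai.com'), ('dr_lee', 'claude.ai')]
```

Even this crude pass turns "we have no metrics" into a first baseline: which tools are in use, by whom, and how often, without inspecting the content of what was sent.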
The road ahead

The Canadian health-care sector is already under pressure from staffing shortages, cyberattacks and growing digital complexity. Generative AI offers welcome relief by automating documentation and translation, yet its unchecked use could erode public trust in medical data protection.

Policymakers now face a choice: either proactively govern AI use within health institutions or wait for the first major privacy scandal to force reform. The solution is not to ban these tools but to integrate them safely. Building national standards for "AI-safe" data handling, similar to food-safety or infection-control protocols, would help ensure innovation doesn't come at the expense of patient confidentiality.

Shadow AI isn't a futuristic concept; it's already embedded in daily clinical routines. Addressing it requires a co-ordinated effort across technology, policy and training, before Canada's health-care system learns the hard way that the most dangerous cyber threats may come from within.
This article is republished from The Conversation under a Creative Commons license.