The increasing adoption of artificial intelligence (AI) tools in healthcare has brought to light a significant risk: shadow AI. Shadow AI refers to AI solutions that employees use without the knowledge or approval of their organization’s IT department. This lack of oversight can create vulnerabilities, governance gaps, and potential legal consequences, ultimately heightening the risk of data breaches.

“What makes shadow AI particularly dangerous is its invisibility and autonomy,” said Vishal Kamat, vice president of data security at IBM. He emphasized that these tools can learn and generate outputs without clear traceability, making it difficult for security teams to manage them effectively. The challenge lies not only in identifying unauthorized tools but also in understanding how they interact with sensitive workflows and data.

Impact of Shadow AI on Data Security

The rise of shadow AI is a growing concern in the healthcare sector. According to a 2025 survey conducted by enterprise healthcare operations software company symplr, 86% of IT executives reported instances of shadow IT within their health systems, an increase from 81% in 2024. Kamat noted that common forms of shadow IT often arise from employees seeking to enhance efficiency, such as using personal cloud storage or unapproved messaging apps that handle sensitive information.

While the motivation behind shadow IT may be to improve workflows, it frequently results in disruptions. Shadow AI compounds these issues by adding another layer of risk: employees might deploy open-source large language models (LLMs) in enterprise environments or upload confidential patient data to public generative AI platforms without oversight, opening the door to data leakage and regulatory violations.
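To illustrate one way an organization might surface this kind of activity, here is a minimal Python sketch that scans proxy or firewall egress logs for traffic to public generative AI endpoints. The domain list and log format are hypothetical placeholders rather than a vetted inventory; real detection would draw on an organization’s own proxy data and a maintained catalog of AI services.

```python
import csv

# Hypothetical sample of public generative AI domains; a real deployment
# would maintain a vetted, regularly updated catalog of AI services.
GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "huggingface.co",
}

def flag_shadow_ai(egress_log_path: str) -> list[dict]:
    """Scan a CSV egress log (user, destination_host, bytes_sent) and
    flag requests to known generative AI endpoints."""
    findings = []
    with open(egress_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            # Match the domain itself or any subdomain.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in flag_shadow_ai("egress.csv"):
        print(f"{hit['user']} sent {hit['bytes_sent']} bytes to {hit['destination_host']}")
```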

IBM’s 2025 “Cost of a Data Breach” report highlighted the serious implications of shadow AI, revealing that 20% of surveyed organizations experienced a breach linked to shadow AI, a rate significantly higher than for incidents involving sanctioned AI. Organizations grappling with high levels of shadow AI reported breach costs that were, on average, $200,000 higher, an increase that outweighed many of the other cost factors the report tracked.

Kamat pointed out that even well-meaning experimentation with unsanctioned tools can introduce serious security and compliance risks. In healthcare, where breaches of patient data and algorithmic bias can have dire consequences, the stakes are especially high. The report found that the data most commonly compromised in shadow AI incidents was customers’ personally identifiable information, while intellectual property was affected in 40% of these cases.

Strategies for Mitigating Shadow AI Risks

For all the potential benefits AI offers healthcare, including streamlined workflows and faster revenue cycles, successful implementation hinges on proper oversight. Kamat stressed that healthcare organizations must prioritize visibility into unauthorized applications and AI usage, and he advocated for tools that continuously monitor data flows, especially those involving sensitive patient information.
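As a rough illustration of what such monitoring might look like in practice, the sketch below applies simple pattern matching for identifiers that often accompany patient data, in the spirit of a data loss prevention (DLP) check. The regular expressions are simplified examples; production PHI detection typically combines far richer rule sets with contextual and machine-learning classifiers.

```python
import re

# Simplified, illustrative patterns; real PHI detection uses many more
# rules plus contextual analysis.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def scan_payload(payload: str) -> list[str]:
    """Return the names of PHI-like patterns found in an outbound payload."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(payload)]

# Example: inspect text before it leaves the network perimeter.
outbound = "Summarize this chart for MRN: 00482913, DOB 04/12/1968."
matches = scan_payload(outbound)
if matches:
    print(f"Blocked: payload contains PHI-like data ({', '.join(matches)})")
```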

Once unauthorized tools are identified, organizations should assess their risks and integrate them into a formal review process. Communication plays a vital role in this process, as employees need to be aware of approved AI tools and understand responsible usage. Without ongoing communication, even well-structured policies may be overlooked or disregarded.
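One lightweight way to operationalize that review step is a registry that records each AI tool’s governance status, so newly discovered tools are routed into review rather than silently tolerated. The tool names, statuses, and registry below are invented for illustration; in practice this data would live in a governance, risk, and compliance (GRC) or asset-management system.

```python
from enum import Enum

class Status(Enum):
    APPROVED = "approved"
    UNDER_REVIEW = "under review"
    PROHIBITED = "prohibited"

# Hypothetical registry of AI tools and their governance status.
AI_TOOL_REGISTRY = {
    "enterprise-scribe": Status.APPROVED,
    "public-chatbot": Status.PROHIBITED,
}

def triage_discovered_tool(tool_name: str) -> str:
    """Route a newly discovered AI tool based on its registry status."""
    status = AI_TOOL_REGISTRY.get(tool_name)
    if status is Status.APPROVED:
        return f"{tool_name}: approved for use"
    if status is Status.PROHIBITED:
        return f"{tool_name}: blocked; point the user to approved alternatives"
    # Unknown tools enter the formal review queue rather than being ignored.
    return f"{tool_name}: queued for risk review"

print(triage_discovered_tool("open-source-llm"))
```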

More than 60% of organizations included in IBM’s report lacked governance policies to manage AI or detect shadow AI. Implementing stringent access controls and conducting regular audits can significantly reduce the risk of data breaches while ensuring compliance with privacy regulations.
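One concrete form such an audit could take is a periodic review of access logs against role-based entitlements. The roles, permissions, and log schema below are invented for illustration; an actual audit would work from the organization’s identity provider and EHR audit trails.

```python
# Hypothetical role-based entitlements; a real audit would pull these
# from the identity provider rather than hard-coding them.
ROLE_PERMISSIONS = {
    "clinician": {"read_chart", "write_chart"},
    "billing": {"read_billing"},
    "analyst": {"read_deidentified"},
}

# Simplified access-log records: (user, role, action).
access_log = [
    ("dr_lee", "clinician", "write_chart"),
    ("j_smith", "billing", "read_chart"),      # outside role entitlements
    ("a_wong", "analyst", "read_deidentified"),
]

def audit(log):
    """Flag actions that fall outside a user's role entitlements."""
    return [
        (user, role, action)
        for user, role, action in log
        if action not in ROLE_PERMISSIONS.get(role, set())
    ]

for user, role, action in audit(access_log):
    print(f"Audit finding: {user} ({role}) performed unauthorized action '{action}'")
```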

“As healthcare evolves, shadow AI isn’t just a technical risk; it endangers compliance and patient safety,” Kamat noted. With stringent mandates like HIPAA and increasing public scrutiny, proactive governance is essential not only for meeting regulatory standards but also for maintaining patient trust. Organizations must navigate these challenges carefully to harness the potential of AI while safeguarding sensitive data.