Risk Management Tools & Resources

 


Artificial Intelligence Risks: Data Privacy and Security

Laura M. Cascella, MA

As with other health information technologies, such as electronic health records and patient portals, artificial intelligence (AI) raises concerns about data privacy and security — particularly in an era where cyberattacks are rampant and patients' protected health information (PHI) is highly valuable to identity thieves and cyber criminals.

With the digitalization of health information, the healthcare industry has faced growing challenges in securing ever-larger amounts of sensitive and confidential information while adhering to federal and state privacy and security regulations. AI presents similar challenges because of its dichotomous nature: it requires massive quantities and diverse types of digital data, yet that very data is vulnerable to breaches.

The momentum of AI development further complicates matters because current privacy and security practices and standards might not account for AI capabilities. For example, an article in AMA Journal of Ethics explains that current methods for de-identifying data are ineffective "in the context of large, complex data sets when machine learning algorithms can re-identify a record from as few as 3 data points."1 The authors also note that AI algorithms can be susceptible to cyberattacks, which could pose threats to patient safety and data integrity.
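The re-identification risk the authors describe can be illustrated with a minimal record-linkage sketch. All data below is fabricated, and the three quasi-identifiers used (ZIP code, birth date, and sex) follow a well-known linkage pattern rather than the specific method the article references:

```python
# Illustrative linkage attack: re-identifying "de-identified" records
# by joining on a few quasi-identifiers (ZIP code, birth date, sex).
# All data is fabricated for demonstration purposes.

# A "de-identified" clinical dataset: names removed, but
# quasi-identifiers retained.
clinical = [
    {"zip": "37830", "dob": "1961-07-04", "sex": "F", "diagnosis": "type 2 diabetes"},
    {"zip": "37831", "dob": "1985-02-11", "sex": "M", "diagnosis": "hypertension"},
]

# A public auxiliary dataset (voter-roll style) containing names
# alongside the same quasi-identifiers.
public = [
    {"name": "Jane Doe", "zip": "37830", "dob": "1961-07-04", "sex": "F"},
    {"name": "John Roe", "zip": "37831", "dob": "1985-02-11", "sex": "M"},
]

def reidentify(clinical_records, public_records):
    """Match records on the (zip, dob, sex) quasi-identifier tuple."""
    index = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in public_records}
    matches = []
    for rec in clinical_records:
        key = (rec["zip"], rec["dob"], rec["sex"])
        if key in index:  # a unique match links a name to a diagnosis
            matches.append((index[key], rec["diagnosis"]))
    return matches

print(reidentify(clinical, public))
# prints [('Jane Doe', 'type 2 diabetes'), ('John Roe', 'hypertension')]
```

When the quasi-identifier combination is unique in the population, a single join like this is enough to attach a name to a supposedly anonymous medical record, which is why stripping direct identifiers alone is often insufficient.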

Dr. Georgia Tourassi, distinguished scientist and director of the Health Data Sciences Institute at the Department of Energy's Oak Ridge National Laboratory, discussed the privacy and security challenges associated with AI in testimony to the U.S. House of Representatives' Committee on Science, Space, and Technology, noting that:

Access to large amounts of data is fundamental to AI . . . However, liberating and providing access to this data is both a technological and a policy challenge . . . medical data cannot be siloed and must be combined with other data points, such as those providing context on a patient's living conditions . . . At the same time, the sheer volume, variability, and sensitive nature of the personal data being collected require newer, extensive, secure, and sustainable computational infrastructure and algorithms.2

As part of her statement, Dr. Tourassi also posed ethical questions that need to be considered in relation to privacy and security in an AI-enabled healthcare environment, such as how to define boundaries between research and commercial use of patient data and how to determine who owns the intellectual property of data-driven AI algorithms.3

Protecting patient privacy and securing digital data will continue to be a fundamental risk issue as AI becomes more mainstream in healthcare, raising numerous legal and ethical questions. Thus, it will be incumbent on healthcare leaders, AI developers, policymakers, data scientists, and other experts to identify vulnerabilities and consider innovative and proactive strategies to address them.

To learn more about other challenges and risks associated with AI, see Waiting for Watson: Challenges and Risks in Using Artificial Intelligence in Healthcare.

Endnotes



1 Crigger, E., & Khoury, C. (2019, February). Making policy on augmented intelligence in health care. AMA Journal of Ethics, 21(2), E188-191. doi: 10.1001/amajethics.2019.188

2 Hearing on Artificial Intelligence: Societal and Ethical Implications: Hearings Before the Committee on Science, Space, and Technology, House of Representatives (2019) (Statement of Georgia Tourassi). Retrieved from https://science.house.gov/imo/media/doc/Tourassi%20Testimony.pdf

3 Ibid.
