Risk Management Tools & Resources

Artificial Intelligence Risks: Training and Education

Laura M. Cascella, MA

Training and education are imperative in many facets of healthcare, from mastering clinical systems to improving technical skills to understanding regulations and professional standards. Technology often presents unique training challenges because of the ways in which it disrupts existing workflows, alters clinical practice, and creates both predictable and unforeseen problems.

The emergence of artificial intelligence (AI), its anticipated expansion in healthcare, and its sheer scope point to significant training and educational needs for medical students and practicing healthcare providers. These needs go far beyond developing technical skills with AI programs and systems; rather, they call for a shift in the paradigm of medical learning.

An AMA Journal of Ethics article, titled “Reimagining Medical Education in the Age of AI,” discusses how traditional medical education, with its focus on information acquisition, retention, and application, is insufficient, counterproductive, and potentially harmful in the era of digital medicine. The volume of medical information and health data is growing exponentially and exceeds human cognitive capacity, and technology’s ability to process big data and derive algorithms points to shifting educational priorities for practitioners.1

The authors of the article suggest refocusing medical education and training on knowledge management rather than information acquisition. Knowledge management would encompass strategies for collaborating with AI, improving communication with patients, and cultivating empathy in an increasingly automated environment. As AI becomes more ingrained in the delivery of healthcare, providers will need to have an understanding of how these systems operate, the mathematical probabilities associated with their outputs, and their potential limitations — as well as how best to communicate this information to patients and families.
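To make the point about output probabilities concrete, consider a brief illustration. It is not drawn from the article, and the model characteristics and numbers are invented: a hypothetical AI screening algorithm with 90% sensitivity and 90% specificity produces a "positive" flag whose real-world meaning depends heavily on how common the condition is in the population being screened. The short Python sketch below applies Bayes' rule to show how the same algorithm yields very different positive predictive values at different prevalence levels.

```python
# Hypothetical illustration (not from the article): why the same AI output
# can mean different things for different patient populations.
# Sensitivity, specificity, and prevalence values are invented for the example.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: probability of disease given a positive AI flag."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same hypothetical algorithm (90% sensitivity, 90% specificity) applied to
# populations with different disease prevalence:
for prevalence in (0.01, 0.10, 0.50):
    ppv = positive_predictive_value(0.90, 0.90, prevalence)
    print(f"Prevalence {prevalence:.0%}: a positive result implies "
          f"roughly a {ppv:.0%} chance of disease")
# Prints approximately 8%, 50%, and 90% -- context changes what the number means.
```

Being able to walk a patient through this kind of context, rather than quoting a raw algorithmic output alone, is the type of communication skill the article's authors describe.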

AI-enabled precision medicine will result in more personalized diagnostic and treatment capabilities; yet true personalization will remain rooted in the provider–patient interaction. “The ability to interpret [AI] probabilities clearly and sensitively to patients and their families represents an additional—and essential—educational demand that speaks to a vital human, clinical, and ethical need that no amount of computing power can meet.”2

Indeed, although machines might make predictions with great accuracy, human decision-making is complex and can draw on individual experience, social factors, cultural and/or religious beliefs, financial considerations, personal preferences, and more. Healthcare providers will need strategies for balancing data analytics with human factors to help patients and families navigate digital medicine and make informed care decisions.

To learn more about other challenges and risks associated with AI, see Waiting for Watson: Challenges and Risks in Using Artificial Intelligence in Healthcare.

Endnotes

1 Wartman, S. A., & Combs, S. D. (2019, February). Reimagining medical education in the age of AI. AMA Journal of Ethics, 21(2), E146-152. doi: 10.1001/amajethics.2019.146

2 Ibid.
