Risk Management Tools & Resources

 

Waiting for Watson: Challenges and Risks in Using Artificial Intelligence in Healthcare

Artificial intelligence, or AI, is a burgeoning field in health information technology and a key element in envisioning the future of healthcare. Stories about AI applications and their widespread potential to revolutionize medical practice and patient care trend daily in the media. Yet, akin to the promises of electronic health records (EHRs) in the early 21st century, the excitement surrounding AI has no doubt led to a sensationalized view of its capabilities while marginalizing technological and operational challenges as well as safety and ethical concerns.2

Artificial Intelligence Risks: Biased Data and Functional Issues

One of the major red flags raised about artificial intelligence (AI) is the potential for bias in the data on which machines are trained and — as a result — bias in their algorithms. Bias can occur for various reasons. For example, the data itself might be biased; research has shown racial, gender, socioeconomic, and age-related disparities in medical studies. If machines are trained on data from these studies, their algorithms will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1

Artificial Intelligence Risks: Black-Box Reasoning

Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system's conclusions or recommendations by examining its programming and coding.

Artificial Intelligence Risks: Automation Bias


Biased data and algorithms have been identified as significant ethical and safety concerns with artificial intelligence (AI); however, another type of bias also raises concern — automation bias. Humans, by nature, are vulnerable to cognitive errors resulting from knowledge deficits, faulty heuristics, and affective influences. In healthcare, these cognitive missteps are known to contribute to medical errors and patient harm, particularly in relation to delayed and incorrect diagnoses.


Artificial Intelligence Risks: Data Privacy and Security

As with other health information technologies, such as electronic health records and patient portals, artificial intelligence (AI) raises concerns about data privacy and security — particularly in an era where cyberattacks are rampant and patients' protected health information (PHI) is highly valuable to identity thieves and cyber criminals.

With the digitalization of health information, the healthcare industry has faced growing challenges in securing increasing amounts of sensitive and confidential information while adhering to federal and state privacy and security regulations. AI presents similar challenges because of an inherent tension: it requires massive quantities and diverse types of digital data, yet that very data is vulnerable to breaches.


Artificial Intelligence Risks: Patient Expectations

Patients — and the search for ways to improve the quality of their care and experience — are at the heart of many innovations in healthcare. This is perhaps nowhere more true than in the case of artificial intelligence (AI), which offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies and how these advances will change the provider–patient relationship.

A look at other technologies reveals some insights and possible concerns. Electronic health records, for example, have been known to produce issues with communication. When clinicians focus on inputting data into the computer and looking at the screen, patients can feel ignored, dismissed, or disrespected. These issues can depersonalize the patient experience and erode the provider–patient relationship — a concern as well for AI as automation takes on more roles and responsibilities.


Artificial Intelligence Risks: Training and Education

Training and education are imperative in many facets of healthcare, from mastering clinical systems to improving technical skills to understanding regulations and professional standards. Technology often presents unique training challenges because of the ways in which it disrupts existing workflow patterns, alters clinical practice, and creates both predictable and unforeseen obstacles.

The emergence of artificial intelligence (AI), its anticipated expansion in healthcare, and its sheer scope point to significant training and educational needs for medical students and practicing healthcare providers. These needs go far beyond developing technical skills with AI programs and systems; rather, they call for a shift in the paradigm of medical learning.

The Corrosive Effect of Disruptive Behavior on Staff Morale and Patient Care

In any workplace, disruptive and negative behaviors can chip away at workers' confidence, erode trust in leadership, and generally sour the working environment. Healthcare is no exception, and disruptive behavior among healthcare providers and staff is a well-documented problem in various practice settings.

The damage from disruptive behavior takes many forms, but one of the most pernicious consequences is its negative impact on employee morale and job turnover. A 2019 article in the Journal of Nursing Management notes that "Disruptive behaviour within the health care setting is concomitant with decreased productivity, absenteeism, turnover, and decreased patient safety."1


