Risk Management Tools & Resources

Artificial Intelligence Risks: Automation Bias

Laura M. Cascella, MA, CPHRM

Biased data and algorithms have been identified as significant ethical and safety concerns with artificial intelligence (AI); however, another type of bias also raises concern: automation bias. Humans, by nature, are vulnerable to cognitive errors resulting from knowledge deficits, faulty heuristics, affective influences, and situativity. In healthcare, these cognitive missteps are known to contribute to medical errors and patient harm, particularly in the form of delayed and incorrect diagnoses.

When AI is incorporated into clinical practice, healthcare providers might be susceptible to automation bias, which occurs when “clinicians accept the guidance of an automated system and cease searching for confirmatory evidence . . . perhaps transferring responsibility for decision-making onto the machine . . .”1 Similarly, clinicians who use generally reliable technology systems might become complacent and miss potential errors, particularly if they are pressed for time or carrying a heavy workload.

Automation bias might occur as a result of risk homeostasis, a theory suggesting that individuals adjust their behavior based on their perceived level of risk: the less safe an activity seems, the more careful a person will be, and vice versa. If AI creates a perception of accuracy or infallibility, clinicians might be more likely to accept incorrect or suboptimal recommendations (errors of commission) or to fail to act at all without automated guidance (errors of omission).2

In light of the black-box issues associated with AI, automation bias is particularly concerning when providers rely heavily on machine learning technology without a clear understanding of how it works, how it produces its results, and how likely those results are to be accurate. Further, as disease patterns and standards of care change over time, AI can become less effective if it does not receive and adapt to updated data, posing patient safety threats when providers fail to recognize these discrepancies.

Failure to identify and address the errors of commission and omission that result from automation bias and complacency can perpetuate these problems, lead to patient harm, and erode clinicians’ clinical judgment and decision-making skills. Unfortunately, automation bias, much like other cognitive biases, does not have a simple, universal solution. Addressing it will likely require a combination of strategies, such as providing ongoing education to raise awareness, using team-based approaches to care, finding novel ways to engage patients and families, experimenting with debiasing techniques, and implementing other best practices as they are identified.

To learn more about other challenges and risks associated with AI, see MedPro’s article Using Artificial Intelligence in Healthcare: Challenges and Risks.

Endnotes

1 Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019, March). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237. doi: 10.1136/bmjqs-2018-008370

2 Gretton, C. (2017, June 24). The dangers of AI in health care: risk homeostasis and automation bias. Towards Data Science. Retrieved from https://towardsdatascience.com/the-dangers-of-ai-in-health-care-risk-homeostasis-and-automation-bias-148477a9080f

