Artificial Intelligence Risks: Biased Data and Functional Issues

Laura M. Cascella, MA

One of the major red flags raised about artificial intelligence (AI) is the potential for bias in the data on which machines are trained and — as a result — bias in their algorithms. Bias can occur for various reasons. For example, the data itself might be biased; research has shown racial, gender, socioeconomic, and age-related disparities in medical studies. If machines are trained on data from these studies, their algorithms will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1

In some cases, bias might occur because of a mismatch between the data or environment on which an AI program or tool is trained and the real-life setting in which it is applied. A recent study in BMJ Quality & Safety refers to this as “distributional shift” and notes that the mismatch can occur because of the following (a brief illustrative sketch appears after the list):

  • Bias in the data training set (e.g., data represent outlying rather than typical cases)
  • Changes in disease patterns over time that are not introduced to the AI system (e.g., data are not updated, so the program continues to rely on the initial data training set)
  • Inappropriate application of an AI system to an unanticipated patient context (e.g., a different population than originally intended)2
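
To make distributional shift concrete, the following minimal sketch (hypothetical Python; the cohorts, age ranges, and age-risk relationships are invented for illustration, not drawn from any real model) trains a simple classifier on a younger population and then applies it to an older one whose risk profile differs:

    # Minimal sketch of distributional shift. All cohorts, features, and
    # numbers below are simulated assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def simulate_cohort(n, mean_age, risk_slope):
        # Hypothetical cohort in which disease risk rises with age.
        age = rng.normal(mean_age, 10, n)
        risk = 1 / (1 + np.exp(-(age - mean_age) * risk_slope))
        labels = (rng.random(n) < risk).astype(int)
        return age.reshape(-1, 1), labels

    # Train on a younger cohort ...
    X_train, y_train = simulate_cohort(5000, mean_age=45, risk_slope=0.08)
    model = LogisticRegression().fit(X_train, y_train)

    # ... then deploy to an older cohort whose age-risk relationship differs.
    X_shift, y_shift = simulate_cohort(5000, mean_age=75, risk_slope=0.20)

    print("Accuracy on training population:",
          accuracy_score(y_train, model.predict(X_train)))
    print("Accuracy on shifted population: ",
          accuracy_score(y_shift, model.predict(X_shift)))

Notably, nothing in this sketch “breaks” on the shifted cohort; the model simply applies the decision boundary it learned from the younger population, so its accuracy on the older one falls toward chance.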

An example of the latter was noted in a recent Health Data Management article that discussed AI-enabled facial analysis systems used to detect pain and monitor disease. An investigation of algorithmic bias showed that these systems did not perform well when used with older adults who had dementia.3

Another important consideration with AI is that machine learning is literal and results-oriented: it relies on the data it receives to run algorithms that generate outputs, whereas humans have the ability to see “bigger picture” influences. As a result, AI systems might be rigid in recognizing and adapting to nuances, changes in context, and idiosyncrasies.

This “insensitivity to impact” can prevent AI from factoring in the consequences of false positives and false negatives. The aforementioned BMJ Quality & Safety article notes that although humans’ tendency to err on the side of caution might result in a higher number of false positives and apparent decreases in accuracy, “this behaviour alteration in the face of a potentially serious outcome is critical for safety . . .”4
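
One way to picture this trade-off is with a decision threshold. A system tuned purely for accuracy flags a case only when the predicted risk exceeds 0.5, whereas cautious behavior can be approximated by weighting a missed diagnosis more heavily than a false alarm. The sketch below is a minimal illustration, and the 10:1 cost ratio is an assumption rather than a clinically validated value:

    # Hedged sketch: encoding "err on the side of caution" as an asymmetric
    # decision threshold. The 10:1 cost ratio is an illustrative assumption.
    import numpy as np

    def cautious_threshold(cost_fn=10.0, cost_fp=1.0):
        # Flag a case as positive when p * cost_fn > (1 - p) * cost_fp,
        # i.e., when p exceeds cost_fp / (cost_fp + cost_fn).
        return cost_fp / (cost_fp + cost_fn)

    p_disease = np.array([0.05, 0.20, 0.45, 0.70])  # hypothetical predicted risks

    # An accuracy-driven system flags only cases with p > 0.5 ...
    print("accuracy-optimal calls:", p_disease > 0.5)
    # ... while a cautious system accepts more false positives (cutoff
    # ~0.09 here) to avoid missing a potentially serious outcome.
    print("cautious calls:        ", p_disease > cautious_threshold())

At a 10:1 cost ratio the cutoff drops to roughly 0.09, so borderline cases that a purely accuracy-driven system would dismiss are instead flagged for review.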

Other examples of how AI functioning might lead to unintended consequences include the following (a brief sketch contrasting the first failure mode with a safer alternative appears after the list):

  • Unsafe failure mode. A program or system continues to make predictions even when it has low confidence or insufficient information, rather than failing safely (e.g., by deferring to a human).
  • Negative side effects. A program or system performs a narrowly defined function and cannot take the broader context into account.
  • Reward hacking. A program or system finds shortcuts that satisfy its specific short-term objectives without achieving the intended long-term goals.
  • Unsafe exploration. A program or system pushes safety boundaries in an attempt to learn new strategies or methods.5
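
As a contrast to the first item, a safe failure mode can be as simple as abstaining and deferring to a clinician when the system is unsure or the patient falls outside the population on which the model was validated. The sketch below is illustrative only; the confidence threshold and age range are assumptions, not validated values:

    # Hedged sketch of failing safely: defer to a clinician instead of
    # guessing. The threshold and range below are illustrative assumptions.
    TRAINED_AGE_RANGE = (18, 65)   # population the model was validated on
    MIN_CONFIDENCE = 0.80          # below this, abstain rather than predict

    def triage(predicted_risk: float, confidence: float, age: int) -> str:
        if not (TRAINED_AGE_RANGE[0] <= age <= TRAINED_AGE_RANGE[1]):
            return "DEFER: patient outside validated population"
        if confidence < MIN_CONFIDENCE:
            return "DEFER: insufficient confidence for an automated call"
        return "high risk" if predicted_risk >= 0.5 else "low risk"

    print(triage(0.7, confidence=0.95, age=40))  # -> "high risk"
    print(triage(0.7, confidence=0.55, age=40))  # -> "DEFER: insufficient confidence ..."
    print(triage(0.7, confidence=0.95, age=82))  # -> "DEFER: patient outside ..."

The key design choice is that deferral, not a best guess, is the default behavior whenever the system's assumptions no longer hold.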

Acknowledgment of issues related to biased data and problems with AI functioning has elevated concerns about the overall safety and reliability of AI technologies. As the authors of the BMJ Quality & Safety study observe, “The rapid pace of change, diversity of different techniques and multiplicity of tuning parameters make it difficult to get a clear picture of how accurate these systems might be in clinical practice or how reproducible they are in different clinical contexts.”6

Thus, amid growing enthusiasm for AI, it is imperative that researchers, AI developers, public health experts, clinicians, and others recognize how AI might reinforce existing problems and generate new dilemmas. Failure to identify these issues and work toward viable solutions will have implications for patient safety and quality of care and, ultimately, will undermine the proposed benefits of AI.

To learn more about other challenges and risks associated with AI, see Waiting for Watson: Challenges and Risks in Using Artificial Intelligence in Healthcare.

Endnotes

1 Slabodkin, G. (2019, August 13). AI, machine learning algorithms are susceptible to biased data. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-machine-learning-algorithms-are-susceptible-to-biased-data

2 Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019, March). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237. doi: 10.1136/bmjqs-2018-008370

3 Slabodkin, G. (2019, July 27). AI presents host of ethical challenges for healthcare. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-presents-host-of-ethical-challenges-for-healthcare

4 Challen et al., Artificial intelligence, bias and clinical safety.

5 Ibid.

6 Challen et al., Artificial intelligence, bias and clinical safety.
