Risk Management Tools & Resources

Artificial Intelligence Risks: Black-Box Reasoning

Laura M. Cascella, MA, CPHRM

Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system’s conclusions or recommendations by examining its programming and coding.

However, many of today’s cutting-edge AI technologies — particularly machine learning systems that offer great promise for transforming healthcare — have more opaque reasoning, making it difficult or impossible to determine how they produce results. This unknown functioning is referred to as “black-box reasoning” or “black-box decision-making.” Rather than being programmed to follow commands, black-box programs learn through observation and experience and then create their own algorithms based on training data and desired outputs.1
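To make the contrast with rule-based systems concrete, the brief sketch below (written in Python with entirely hypothetical data, feature names, and thresholds) compares a hand-written rule, whose logic can be read directly from the code, with a small neural network trained using scikit-learn, whose learned "logic" exists only as numeric weights rather than readable rules.

    # Minimal illustrative sketch: hypothetical data and thresholds, not a clinical tool.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Rule-based system: the reasoning is explicit and can be audited line by line.
    def rule_based_flag(tumor_size_mm, marker_level):
        return tumor_size_mm > 10.0 or marker_level > 4.0  # thresholds invented for illustration

    # Learned model: the "reasoning" is inferred from training data and stored as weights.
    X = np.array([[2.0, 1.1], [4.5, 0.8], [12.0, 3.9], [15.0, 5.2], [3.0, 4.6], [9.0, 2.0]])
    y = np.array([0, 0, 1, 1, 1, 0])  # 0 = no flag, 1 = flag for review (hypothetical labels)
    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

    print(rule_based_flag(11.0, 3.0))    # True, and the exact rule that fired is visible
    print(model.predict([[11.0, 3.0]]))  # a prediction, but no human-readable rule behind it
    print(model.coefs_[0])               # the learned weights: numbers, not an explanation

The hand-written rule can be explained to a clinician directly; the trained model's behavior can only be characterized indirectly (for example, by its accuracy on test data), which is the essence of the black-box problem described here.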

Over time, if more data are introduced, AI can continue to adjust its reasoning and decision-making. The benefit of evolving AI is increased accuracy; however, by “becoming more autonomous with each improvement, the algorithms by which the technology operates become less intelligible to users and even the developers who originally programmed the technology.”2

Although black-box AI offers compelling capabilities and the potential for significant advances in a number of areas, including disease detection and precision treatment, it also presents serious concerns. Lack of transparency about “how output is derived from input”3 can erode healthcare providers’ confidence in AI systems and create barriers to acceptance and adoption.

Notably, opaque reasoning might not always raise these concerns. In some situations, how an AI program produces results might be perplexing but not troubling. For example, an article in the AMA Journal of Ethics notes that if an image analysis system can detect cancer with 100 percent accuracy, knowing how it does so is not critical because it is solving a problem with a "black or white" answer (either cancer is detected or it is not), and it is doing so with greater accuracy than a human.4

However, not all decision-making results in an indisputable conclusion. When AI produces results with less than 100 percent accuracy or based on unknown and potentially biased algorithms, or when various factors must be weighed to determine an optimal course of care, providers need an understanding of how the technology works and the level of confidence with which it draws its conclusions. Lack of such information might make it difficult for providers to judge the quality and reliability of AI algorithms and might undermine their own clinical reasoning and decision-making.

Black-box AI also creates concerns related to liability. In many cases, the technology is emerging and evolving more quickly than standards of care and best practices, leaving healthcare providers with a level of uncertainty about using AI in clinical practice. Additionally, if AI’s functioning is unknown and unpredictable, questions arise about who can and should be held responsible when errors occur that result in patient harm.

The authors of an article about tort liability doctrines and AI also point to the technology’s increasing autonomy as an impending legal challenge. As machines continue to learn and adapt, “fewer parties (i.e., clinicians, health care organizations, and AI designers) actually have control over it, and legal standards founded on agency, control, and foreseeability collapse . . .”5 The authors explain that the number of people involved in the development of AI systems and programs also can make it difficult to assign responsibility for malfunctions or errors, particularly when these technologies are built over many years with input from various experts.

These concerns highlight the obstacles presented by black-box AI systems as well as the shortcomings of current legal principles in addressing potential AI liability. Much work remains in defining and implementing transparency requirements, determining system reliability, and building confidence in AI technology. Further, the complexity of AI points to the need for evolving standards that address malpractice negligence, vicarious liability, and product liability in AI-enabled healthcare delivery.

To learn more about other challenges and risks associated with AI, see MedPro’s article Using Artificial Intelligence in Healthcare: Challenges and Risks.

Endnotes


1 Knight, W. (2017, April 11). The dark secret at the heart of AI. MIT Technology Review. Retrieved from www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

2 Sullivan, H. R., & Schweikart, S. J. (2019, February). Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics, 21(2), E160-166. doi: 10.1001/amajethics.2019.160

3 Anderson, M., & Anderson, S. L. (2019, February). How should AI be developed, validated, and implemented in patient care? AMA Journal of Ethics, 21(2), E125-130. doi: 10.1001/amajethics.2019.125

4 Ibid.

5 Sullivan & Schweikart, Are current tort liability doctrines adequate for addressing injury caused by AI?
