Risk Management Tools & Resources

Artificial Intelligence and Informed Consent

Laura M. Cascella, MA

Informed consent, in its basic sense, seems like a fairly straightforward concept. A patient is informed about a proposed test, treatment, or procedure; its benefits and risks; and any alternative options. With this knowledge, the patient decides to either consent or not consent to the recommended plan. In reality, though, informed consent is a more complex process that involves nondelegable duties and varies in scope based on the type of test, treatment, or procedure involved.

When technology is introduced into the mix — particularly advanced technology — the informed consent process can become even more complicated because of additional information that the provider must convey to the patient and that the patient must weigh in his/her decision-making process.

Waiting for Watson: Challenges and Risks in Using Artificial Intelligence in Healthcare

Laura M. Cascella, MA

Artificial intelligence, or AI, is a burgeoning field in health information technology and a key element in envisioning the future of healthcare. Daily stories trend in the media related to AI applications and their widespread potential for revolutionizing medical practice and patient care. Yet, akin to the promises of electronic health records (EHRs) in the early 21st century, the excitement surrounding AI has no doubt led to a sensationalized view of its capabilities while marginalizing technological and operational challenges as well as safety and ethical concerns.2

Artificial Intelligence Risks: Biased Data and Functional Issues

Laura M. Cascella, MA

One of the major red flags raised about artificial intelligence (AI) is the potential for bias in the data on which machines are trained and — as a result — bias in their algorithms. Bias can occur for various reasons. For example, the data itself might be biased; research has shown racial, gender, socioeconomic, and age-related disparities in medical studies. If machines are trained on data from these studies, their algorithms will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1

In some cases, bias might occur because of a mismatch between the data or environment in which an AI program or tool was trained and the real-life setting in which it is applied. A recent study in BMJ Quality & Safety refers to this as “distributional shift” and notes that this mismatch can occur because of:

Artificial Intelligence Risks: Black-Box Reasoning

Artificial intelligence (AI) systems and programs use data analytics and algorithms to perform functions that typically would require human intelligence and reasoning. Some types of AI are programmed to follow specific rules and logic to produce targeted outputs. In these cases, individuals can understand the reasoning behind a system’s conclusions or recommendations by examining its programming and coding.

However, many of today’s cutting-edge AI technologies — particularly machine learning systems that offer great promise for transforming healthcare — have more opaque reasoning, making it difficult or impossible to determine how they produce results. This unknown functioning is referred to as “black-box reasoning” or “black-box decision-making.” Rather than being programmed to follow commands, black-box programs learn through observation and experience and then create their own algorithms based on training data and desired outputs.1

Artificial Intelligence Risks: Automation Bias

Biased data and algorithms have been identified as significant ethical and safety concerns with artificial intelligence (AI); however, another type of bias also raises concern — automation bias. Humans, by nature, are vulnerable to cognitive errors resulting from knowledge deficits, faulty heuristics, and affective influences. In healthcare, these cognitive missteps are known to contribute to medical errors and patient harm, particularly in relation to delayed and incorrect diagnoses.

When AI is incorporated into clinical practice, healthcare providers might be susceptible to automation bias, which occurs when “clinicians accept the guidance of an automated system and cease searching for confirmatory evidence . . . perhaps transferring responsibility for decision-making onto the machine . . ."1 Similarly, clinicians who use generally reliable technology systems might become complacent and miss potential errors, particularly if they are pressed for time or carrying a heavy workload.

Artificial Intelligence Risks: Data Privacy and Security

As with other health information technologies, such as electronic health records and patient portals, artificial intelligence (AI) raises concerns about data privacy and security — particularly in an era where cyberattacks are rampant and patients’ protected health information (PHI) is highly valuable to identity thieves and cyber criminals.

With the digitalization of health information, the healthcare industry has faced growing challenges with securing increasing amounts of sensitive and confidential information while adhering to federal and state privacy and security regulations. AI presents similar challenges because of its dichotomous nature — it requires massive quantities and diverse types of digital data but is vulnerable to data breaches.

Artificial Intelligence Risks: Patient Expectations

At the heart of many innovations in healthcare are patients and finding ways to improve the quality of their care and experience. This is perhaps nowhere more true than in the case of artificial intelligence (AI), which offers vast potential for improving patient outcomes through advances in population health management, risk identification and stratification, diagnosis, and treatment. Yet even with this promise, questions arise about how patients will interact with and react to these new technologies and how these advances will change the provider–patient relationship.

A look at other technologies reveals some insights and possible concerns. Electronic health records, for example, have been known to produce issues with communication. When clinicians focus on inputting data into the computer and looking at the screen, patients can feel ignored, dismissed, or disrespected. These issues can depersonalize the patient experience and erode the provider–patient relationship — a concern as well for AI as automation takes on more roles and responsibilities.

Artificial Intelligence Risks: Training and Education

Training and education are imperative in many facets of healthcare, from understanding clinical systems to improving technical skills to complying with regulations and professional standards. Technology often presents unique training challenges because of the ways in which it disrupts existing workflow patterns, alters clinical practice, and creates both predictable and unforeseen problems.

The emergence of artificial intelligence (AI), its anticipated expansion in healthcare, and its sheer scope point to significant training and educational needs for medical students and practicing healthcare providers. These needs go far beyond developing technical skills with AI programs and systems; rather, they call for a shift in the paradigm of medical learning.
