Artificial intelligence (AI) is transforming the medical landscape, and with it the calculus of legal liability. A recent study shows that how AI is integrated into clinical workflows can significantly alter perceptions of fault in malpractice cases. Researchers from **Penn State College of Medicine**, **Brown University**, and **Seton Hall University School of Law** found that the way clinicians use AI affects jurors' decisions about liability in cases of patient harm.
Published on **March 10, 2024**, in the journal **Nature Health**, the study examined a hypothetical malpractice case involving a radiologist who failed to detect a brain bleed on a computed tomography (CT) scan. Although the AI correctly flagged the abnormality, the radiologist's missed diagnosis left the patient with irreversible brain damage. In scenarios where the radiologist reviewed the scan only once, after receiving AI feedback, mock jurors sided with the plaintiff nearly **50%** more often than when the radiologist reviewed the scan twice: first independently, then after AI input.
Michael Bruno, a professor of radiology and medicine at **Penn State College of Medicine**, emphasized the dual promise and challenge of AI in healthcare. “AI holds promise to improve the quality and safety of health care and to reduce errors and patient harm, but the risk of legal liability is a potential barrier for investment and development of this technology,” he stated.
Implications of AI in Radiology Workflows
The research team, led by Bruno, convened a summit on “Human Factors and Artificial Intelligence in Healthcare” to discuss the future of AI in medical settings. Brian Sheppard, a law professor at **Seton Hall University**, noted the significance of the findings for stakeholders in the healthcare sector. “This kind of information is vital because you can weigh the cost versus the benefits in a far more informed way,” he explained.
The choice to focus on a radiology case was strategic: AI integration in this field is more advanced than in other medical specialties, making it a relevant context for investigating physician-AI interaction. And because many medical malpractice cases settle out of court, the hypothetical scenario allowed researchers to gain insights into liability perceptions that would otherwise be difficult to obtain.
In total, **282 participants** were recruited, each assessing one of two scenarios. In the first, the radiologist reviewed the CT scan once, after AI flagged it as abnormal, and concluded that there was no evidence of bleeding. In the second, the radiologist reviewed the scan twice: once before and once after receiving AI feedback. Participants were asked whether the radiologist met the duty of care. Nearly **75%** of mock jurors believed the radiologist had failed in that duty when the scan was reviewed only once; the figure dropped to **53%** when the scan was reviewed twice.
Challenges and Future Considerations
The researchers suggested that modifying how radiologists interact with AI—specifically, the number of times they review imaging tests—could mitigate legal risks. However, such changes could also introduce new challenges. Grayson Baird, an associate professor of radiology at **Brown University**, pointed out that biases might discourage radiologists from questioning AI outputs. “If you disagree with AI and you’re wrong, this will be used against you,” Baird noted, stressing the repercussions for both practitioners and patients.
The study also raises concerns about the broader implications of AI for healthcare costs. "The cost is then passed on to the patient who now has to deal with the anxiety and discomfort from follow-up care, imaging, or tests," Baird added.
While the study did not examine why AI use shapes liability perceptions, it indicates that context plays a crucial role. Previous research by the team showed that jurors were less likely to find a radiologist liable when the radiologist agreed with AI interpretations, and that perceptions of legal liability diminished when AI error rates were disclosed to jurors.
Michael Bernstein, an associate professor of radiology at **Brown University**, concluded that understanding how AI shapes liability perceptions is critical as the technology continues to evolve. “How people perceive AI, and how their perception impacts human liability, is evolving quickly along with the technology. It’s something that we need to pay close attention to,” he said.
This research underscores the importance of continued dialogue and investigation into the intersection of AI technology and medical practice, especially as it relates to legal accountability and patient care.