How Artificial Intelligence Manufacturers Can Protect Themselves Against Future Negligence Claims
Innovative medical devices have changed the healthcare landscape and will continue making dramatic improvements in patient care. Nevertheless, the growth of such devices will inevitably lead to increased litigation over their alleged failures. All companies developing healthcare tech therefore need to consider measures to protect themselves against potential claims.
Any litigation arising from medtech that uses AI – especially AI used as part of a diagnosis or intervention – is likely to be complicated. Medtech often involves a complex chain of actions among a number of different parties, from medical device manufacturers to programmers to physicians. If an AI is blamed for misdiagnosing a patient, the error may be attributable to a series of connected events rather than to a single failure. In such circumstances, personal injury plaintiffs may seek remedies against everyone involved in their care.
This could include the manufacturer that developed and marketed the AI, but it might also include the doctor who input data into the AI or interpreted the data coming out of it. Plaintiffs may also sue a local doctor to defeat removal to federal court, keeping the case in what they perceive to be a more favorable forum.
Added to this complexity is the so-called ‘black box’ challenge posed by the AI itself. Even when the data input into the AI and its final output are known, the exact steps the algorithm took to reach its decision may not be fully retraceable. You cannot always ask an AI to explain its output the way you can ask a doctor. In some circumstances the model’s parameters can be retraced, but it may still be difficult to determine the basis for the alleged error or to resolve ambiguity about the outcome.
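One way to mitigate the retraceability problem, at least in part, is to record an audit trail for every inference: the inputs, the output, and the exact model version that produced them. The following is a minimal illustrative sketch in Python; the function and field names are hypothetical rather than drawn from any particular product:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference_audit")

def log_inference(model_version: str, inputs: dict, output: dict) -> str:
    """Record everything needed to reconstruct an inference later.

    Returns a record ID that can be attached to the patient's chart.
    (Hypothetical sketch; field names are illustrative only.)
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced this output
        "inputs": inputs,                # the exact data the model saw
        "output": output,                # the recommendation it produced
    }
    # A content hash makes later tampering with the record detectable.
    payload = json.dumps(record, sort_keys=True)
    record_id = hashlib.sha256(payload.encode()).hexdigest()[:16]
    logger.info("inference %s: %s", record_id, payload)
    return record_id
```

A record like this does not open the black box, but it preserves the evidence needed to show what the system received and produced at the time of the disputed decision.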
There are, however, steps that those developing AI-based medtech can take to minimize risk. First, stay up to date on regulatory guidance specific to AI as it emerges and, where applicable – particularly when your device is used for diagnosis – work with regulatory authorities to seek appropriate approvals and input relating to your device. A significant threshold question will be whether the software at issue is regulated by the FDA at all, or whether it falls within a safe harbor.
Second, support and substantiate the appropriate role of the treating physician’s medical judgment in patient care. Develop the AI so that its outputs are explainable, provide documentation or training to users on how it works, and preserve the role of doctors in exercising their own medical judgment rather than relying solely on the AI’s recommendation. Communicate clearly that any final decision on patient care rests with the patient’s treating physician.
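As one illustration of how that delineation can be built into the product itself, an AI recommendation can be structured so that it is explicitly advisory and carries a physician sign-off field that downstream systems check before acting. The sketch below is a hypothetical example in Python, not a reference implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    """An AI output framed as decision support, not a decision."""
    finding: str                        # e.g. "possible malignancy, right upper lobe"
    confidence: float                   # the model's own confidence estimate
    rationale: list[str] = field(default_factory=list)  # human-readable factors
    confirmed_by: Optional[str] = None  # set only by the treating physician

    @property
    def actionable(self) -> bool:
        """The output stays advisory until a physician signs off."""
        return self.confirmed_by is not None

rec = Recommendation(
    finding="possible malignancy, right upper lobe",
    confidence=0.87,
    rationale=["lesion diameter above threshold", "irregular border"],
)
assert not rec.actionable      # no downstream action until physician review
rec.confirmed_by = "dr_jones"  # recorded after the physician reviews the case
assert rec.actionable
```

Designing the data model this way also creates a contemporaneous record that the physician, not the software, made the final call.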
Third, seek and use advice on minimizing the security risks that AI introduces. Patient data demands heightened attention to privacy. For example, if a hacked digital health product injures a patient, product liability may hinge on whether the manufacturer or the software designer could reasonably have designed a more secure system, and on the extent to which such a defect was reasonably foreseeable given the general public’s awareness of cybersecurity issues.
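No system can be made immune to attack, but basic hardening measures are feasible and likely to matter in foreseeability arguments. One common measure is verifying the integrity of the deployed model artifact before loading it, so that a tampered model is refused. The following Python sketch illustrates the idea; the hash value and file name are placeholders, not real values:

```python
import hashlib
import hmac

# Hash of the approved model artifact, recorded at release time.
# (Placeholder value for illustration only.)
APPROVED_SHA256 = "0" * 64

def verify_model_artifact(path: str) -> bool:
    """Refuse to load a model file whose contents have been altered."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(h.hexdigest(), APPROVED_SHA256)

# Example usage: abort startup rather than run a tampered model.
if not verify_model_artifact("model.bin"):
    raise RuntimeError("Model artifact failed integrity check; aborting.")
```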
While no use of AI is risk-free, a manufacturer that considers and mitigates risks at the earliest stages will be best positioned to defend itself with minimal impact on its business.