WHAT IF ARTIFICIAL INTELLIGENCE & MACHINE LEARNING MEAN THAT SKYNET IS YOUR HOSPITAL’S NEXT ELECTRONIC HEALTH RECORD

Imagine a hospital emergency room ten years from now. Sam Brown, a car-crash victim, has come in with severe, life-threatening injuries. ER admissions staff enter Sam’s information into the hospital’s artificial intelligence (AI)-enhanced electronic health record (EHR) system. From this point forward, every medical device that interacts with the patient (e.g., vital signs monitors, surgical instruments, MRIs) records and uploads data to Sam’s visit account in the EHR. This visit data is then synced with Sam’s broader EHR file, which includes all of her digitally available medical history up to that moment. Pre-formulated and highly complex algorithms, the backbone of the system’s AI, compile this data and compare it to information gathered from millions of patients and hundreds of thousands of similar incidents. Dr. John Doe, an ER physician battling to keep pace with the patient’s ever-worsening condition while also working through the effects of chronic sleep deprivation, stress, and general burnout, sees an alert on a screen in front of him advising him on suggested next steps for treating the patient’s injuries and associated complications. Exhausted and short on time, Dr. Doe chooses to follow the system’s suggestions instead of further evaluating the circumstances himself. Sam Brown dies soon after.

Several weeks later, the medical software vendor notifies all of its customer healthcare networks that the vendor’s programmers have identified an unanticipated outcome that can occur under certain conditions. As it turns out, when the AI’s algorithms interact with certain combinations of data from a patient’s current visit, medical history, and the data drawn from millions of other visits (which drive the AI’s machine learning), the algorithm produces an unexpected output that provides faulty notifications to medical staff about suggested next steps in treating patients. The vendor’s programmers provide Dr. Doe’s hospital with detailed steps to reproduce the issue in its own simulation EHR (a mirror image of the real EHR used for patients) while the programmers work on coding a standard solution. Concerned, Dr. Doe asks the hospital’s healthcare IT staff to work through the steps to reproduce the issue, using a simulated version of Sam Brown’s medical record and account from the night she died. The IT staff determine that the AI’s unanticipated output did indeed produce the flawed medical suggestion Dr. Doe acted upon, one that may have contributed to Sam Brown’s death.
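
To make the vendor’s “steps to reproduce” concrete, here is a minimal sketch of the kind of replay the hospital’s IT staff might run against the simulation EHR. Everything in it (the class, the function names, the clinical thresholds, and the faulty rule ordering) is hypothetical and invented for illustration; no real EHR system or vendor API exposes logic this simple.

    # Purely illustrative: replaying a recorded visit through a simulation copy
    # of a decision-support engine to reproduce a vendor-reported flaw. Every
    # name, threshold, and rule is hypothetical; no real vendor API is implied.
    from dataclasses import dataclass

    @dataclass
    class VisitSnapshot:
        heart_rate: int          # beats per minute at the moment the alert fired
        systolic_bp: int         # mmHg
        on_anticoagulants: bool  # drawn from the patient's medication history

    def suggest_treatment(v: VisitSnapshot) -> str:
        """Toy stand-in for the AI's suggestion logic, seeded with the kind of
        unanticipated interaction the vendor described: a rare combination of
        inputs falls through to the wrong branch."""
        if v.systolic_bp < 90 and not v.on_anticoagulants:
            return "administer fluids, prepare transfusion"
        if v.heart_rate > 120:  # flaw: evaluated before the bleeding-risk branch below
            return "administer beta blockers"
        if v.systolic_bp < 90 and v.on_anticoagulants:
            return "reverse anticoagulation, prepare transfusion"
        return "continue monitoring"

    def reproduce_vendor_issue() -> None:
        # The vendor's "steps to reproduce": a snapshot mirroring the patient's
        # simulated record from the night in question.
        snapshot = VisitSnapshot(heart_rate=135, systolic_bp=82, on_anticoagulants=True)
        suggestion = suggest_treatment(snapshot)
        expected = "reverse anticoagulation, prepare transfusion"
        print(f"suggested: {suggestion!r}")
        print(f"expected:  {expected!r}")
        print("flaw reproduced" if suggestion != expected else "could not reproduce")

    if __name__ == "__main__":
        reproduce_vendor_issue()

In this toy version, the misordered rule routes a hypotensive, anticoagulated patient to the tachycardia branch, so the replay surfaces the same faulty suggestion the clinician saw.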

Who, if anyone, should be legally responsible for the death of a patient like Sam Brown? This is a question lawyers and lawmakers should begin thinking about sooner rather than later. It is unlikely that Dr. Doe in this case will admit he relied solely on the system’s analysis. Because the EHR cannot track events that occur outside it, no audit log will exist to show whether Dr. Doe independently considered Sam’s medical needs after seeing the system-suggested treatment. So, if relying on the AI’s diagnosis or proposed treatment plan becomes the de facto new standard of care for medical professionals, who (if anyone) is to blame when the system fails? Even if a human makes the ultimate treatment decision, AI may eventually become the effective decision-maker in many medical contexts, which raises the further question of whether medical professionals will understand (or even be able to find out) how the software generates a given conclusion about how to treat a patient. Both technical and human problems arising from AI may result in liability for various actors involved in the production and use of EHRs.

TECHNICAL FLAWS

As the medical software industry gears up to implement AI and machine learning in ever more EHR applications, [1] judges, lawyers, and policymakers would be wise to learn from the growing pains that accompanied the rapid growth and adoption of EHRs in the United States over the last decade. Hospitals have been fundamentally transformed by the ever-growing use and complexity of EHRs that the Health Information Technology for Economic and Clinical Health (HITECH) Act stimulated beginning in 2009. [2] Companies like Meditech, Epic, Cerner, and Allscripts were forced to play catch-up for years as hospitals across the United States scrambled to comply with HITECH’s Meaningful Use requirements so they could qualify for government reimbursement. [3] At the same time, these companies also had to develop and implement new products to keep pace with both regulatory requirements [4] and customer demands for labor-saving technology. While the industry played this game of catch-up, physicians, nurses, technicians, and healthcare IT staff struggled with an incredibly steep learning curve, as many needed to learn new systems (sometimes several within a short period of time) [5] while still performing their normal functions in a high-stakes, high-stress environment. [6] This also meant that backlogs of hardcopy medical records often had to be scanned into EHRs manually, a laborious process. [7] To further complicate matters, integration of electronic medical record systems between vendors was and remains a significant problem because of proprietary coding languages, competition, customized systems, and related factors. [8]

At the March 2018 Healthcare Information and Management Systems Society (HIMSS) Global Conference & Exhibition in Las Vegas, Eric Schmidt, the Technical Advisor and former Executive Chairman of Alphabet, Inc. (Google’s parent company), [9] warned that “for the discussions of AI and machine learning, the decision-maker should not be the computer because it makes mistakes. One of the problems we have with respect to AI right now, is not only do they make a small percentage of errors, but we as an industry cannot explain those errors.” [10] As Schmidt explained, even the developers who train an AI system do not inherently know why it makes errors in judgment. This is often referred to as the “black box” problem, and it can result from (among other things) insufficient filtration of, or control over, the datasets used in machine learning to calibrate the AI; the additional complexity of deep neural networks (which rely on multiple layers of filtration and calibration); [11] or simply the limits of the human mind in envisioning how an exceptionally complex system will behave or what results it could generate. For example, even though a medical software AI may have been trained on a worldwide medical knowledge database, it can still reach an incorrect conclusion without any discernible justification or reason. Because AI comes with a built-in error rate yet remains a major advance in medical efficiency, software service providers need clarity regarding liability. And because AI systems make inexplicable mistakes, they should not be the ultimate decision-maker. Just as physicians confer with other medical staff about a patient’s condition or a suggested treatment plan, medical staff should treat an AI as an additional tool or resource that they must control and monitor. As such, it may be unwise to hold software companies responsible when a random mistake occurs.
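
By way of illustration only, the “tool, not decision-maker” posture described above might look something like the following sketch, in which a hospital-set confidence floor and a missing rationale both downgrade the AI’s output to an advisory requiring independent clinical review. The class, field names, and threshold are all assumptions invented for this example, not features of any real system.

    # Illustrative only: surfacing an AI suggestion as a monitored tool rather
    # than a decision-maker. All names, fields, and thresholds are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Suggestion:
        treatment: str
        confidence: float          # model-reported probability, 0.0 to 1.0
        rationale: Optional[str]   # None models the "black box" case

    CONFIDENCE_FLOOR = 0.90        # policy threshold set by the hospital, not the vendor

    def present_to_clinician(s: Suggestion) -> str:
        """Decide how a suggestion is displayed; it is never applied automatically."""
        if s.rationale is None or s.confidence < CONFIDENCE_FLOOR:
            return f"ADVISORY ONLY (independent evaluation required): {s.treatment}"
        return f"Suggested ({s.confidence:.0%} confidence): {s.treatment} [{s.rationale}]"

    # An unexplained suggestion is flagged even when the model is confident;
    # an explained, high-confidence one is presented, but still only suggested.
    print(present_to_clinician(Suggestion("administer beta blockers", 0.97, None)))
    print(present_to_clinician(Suggestion("reverse anticoagulation", 0.95,
                                          "hypotension with anticoagulant history")))

The design point is that the gate lives in hospital policy, not in the vendor’s model: the clinician always remains the final decision-maker.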

One solution to the question of whether software companies should be liable when an AI errs may come from a products-liability approach. The learned intermediary rule would function well in this context, as exemplified in Taylor v. Intuitive Surgical, Inc., where the court explained that “under the learned intermediary doctrine, the manufacturer satisfies its duty to warn the patient of the risks of its product where it properly warns the prescribing physician.” [12] This places an additional duty on the doctor in relation to the product, as the court noted:

Where a product is available only on prescription or through the services of a physician, the physician acts as a “learned intermediary” between the manufacturer or seller and the patient. It is his duty to inform himself of the qualities and characteristics of those products which he prescribes for or administers to or uses on his patients, and to exercise an independent judgment, taking into account his knowledge of the patient as well as the product. [13]

The learned intermediary doctrine is worth particular consideration in the case of AI-enhanced EHRs (far more so than in the case of physical medical devices) because the doctor is unlikely to be able to fully explain to the patient the potential risks associated with using an AI-driven EHR. Indeed, the closest analogy, from the patient’s point of view, would be the presence of a medical assistant in the room who read the patient’s medical record and made suggestions to the physician. Although the patient might ask such a medical assistant where they went to school, what they specialized in, how many years of experience they have, or whether they have seen similar cases before, the patient can ask none of these questions of the AI, nor is the physician likely to be able to speak to the qualifications of the programming team who designed it (much less the integrity of the datasets that the AI’s machine learning relied upon). Nonetheless, manufacturers would still have a duty to inform and warn hospitals and/or physicians of any risks associated with their AI-enhanced products so they can make informed decisions about when and how to use those products, and can potentially advise patients as well. [14]

Although medical software in general is not yet classified as a medical device, regulation is moving in that direction. In December 2017, the FDA issued its Guidance for Industry and Food and Drug Administration Staff, which endorses such a position. [15] While this guidance is not binding on the industry, nor does it create any rights for the public (including patients), [16] the idea that medical software should be viewed as a device will become ever more relevant as medical software moves from merely collecting and displaying readings from instruments and past medical documentation to actively suggesting courses of care for the patient based on the same. In such a world, manufacturers could escape strict liability so long as they provide adequate warnings regarding the “inherently dangerous” nature of the software. [17] In the meantime, however, courts may continue to operate under current liability regimes, relying heavily on a fact-based approach to determine whether software is a “good” or a “service” for purposes of (1) establishing liability under Article 2 of the Uniform Commercial Code; [18] (2) evaluating whether there was a breach of implied warranty; [19] or (3) determining whether there was a manufacturing defect in a particular copy of the software. [20]

THE HUMAN FACTOR

In the medical professions, where burnout is a chronic problem, companies are marketing AI-powered EHRs as a way to alleviate physician workload. [21] However, this is also precisely why such systems are a potential stimulant for medical malpractice suits: time-starved and sleep-deprived medical staff will be increasingly likely to rely on the systems’ suggestions and projections, especially as they become a normal part of clinical workflows. Just as it would now seem redundant for a medical professional to take a patient’s blood pressure with a manual cuff and stethoscope immediately after doing so with a digital monitor, and just as many physicians have come to rely on EHRs rather than paper charts to view lab test results, medication administration records (MARs), and physician documentation, medical staff will inevitably come to rely on machine learning-informed, AI-driven medical suggestions.

Proving a malpractice claim in this scenario would be difficult. Absent proof to the contrary, practitioners will likely disclaim complete reliance on diagnostic software, making it potentially impossible to prove a deviation from the standard of care. But if software companies successfully shift all liability risk onto physicians, then overreliance on the system by the doctor may define the future of medical liability. One potential fix may be to require medical professionals to document the reasoning behind their decisions in the patient’s record. This could alleviate the proof problem and protect both doctors and patients from false assertions. If, however, a medical professional simply denies relying solely on the AI system, and a plaintiff cannot find proof to the contrary, the plaintiff will have suffered a wrongful injury without a means to recover from any defendant. This approach also comes with additional policy concerns: time spent charting is time not spent with patients, a problem that has become so acute that vendors are directing new efforts specifically at assisting with charting. [22]
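
As a rough sketch of how such a charting requirement might be enforced in software, consider the following; the entry format and field names are hypothetical, and a real EHR would tie this check to order signing rather than a standalone function.

    # Illustrative only: an audit-log entry pairing the AI's suggestion with the
    # clinician's own documented reasoning, so a later reviewer can distinguish
    # reliance from independent judgment. All field names are hypothetical.
    import json
    from datetime import datetime, timezone

    def chart_decision(patient_id: str, ai_suggestion: str,
                       action_taken: str, clinician_rationale: str) -> str:
        """Build a signed-order log entry; an empty rationale is rejected up front."""
        if not clinician_rationale.strip():
            raise ValueError("clinician rationale is required before the order is signed")
        entry = {
            "patient_id": patient_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "ai_suggestion": ai_suggestion,
            "action_taken": action_taken,
            "clinician_rationale": clinician_rationale,
            "followed_ai": ai_suggestion == action_taken,
        }
        return json.dumps(entry, indent=2)

    print(chart_decision(
        patient_id="SB-0001",
        ai_suggestion="administer beta blockers",
        action_taken="reverse anticoagulation",
        clinician_rationale="BP trending down with anticoagulant history; "
                            "suggestion inconsistent with suspected internal bleed.",
    ))

An entry like this would create exactly the evidence the paragraph above finds missing: a contemporaneous record of whether the clinician followed, or departed from, the AI’s suggestion and why.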

WHERE FROM HERE?

As the medical profession embraces tools using AI, lawyers should begin thinking about how medical malpractice claims may change. In a scenario like the one described above, where an actionable harm is caused in part by a faulty AI system, there should be a means for plaintiffs to recover. The use of AI to analyze patient health records, however, presents a potential gap in the fault system, in which legitimate claims may go uncompensated or additional pressure may fall on already overworked physicians. In the case of predictive analysis, the AI advising the doctor may become akin to a physician’s assistant, though without the accountability of a human professional. As AI becomes more proficient and widely used in predicting illness and suggesting treatment plans to medical staff, it may force the creation of new rules and precedents for dealing with a system that mimics, or takes the place of, human judgment and experience. Under current law, it may be impossible for plaintiffs to recover. If the medical software industry continues down the path of treating medical software as a “device,” then the learned intermediary rule would shield the software developer and instead fix liability on the doctor and hospital, so long as the developer provided adequate warning. Nonetheless, this is but one possible solution, and we as legal professionals may find that still further legal innovation is required to keep pace with ongoing technological innovation.

Though the old stereotype that the medical and legal professions are both slow to change their ways may often be true, the world is nonetheless racing toward an AI-driven future. Whether we like it or not, we have to try to catch up.

————————————————————————

This DarrowEverett Insight should not be construed as legal advice or a legal opinion on any specific facts or circumstances. This Insight is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. The contents are intended for general informational purposes only, and you are urged to consult your attorney concerning any particular situation and any specific legal question you may have. We are working diligently to remain well informed and up to date on information and advisements as they become available. As such, please reach out to us if you need help addressing any of the issues discussed in this Insight, or any other issues or concerns you may have relating to your business. We are ready to help guide you through these challenging times.

Unless expressly provided, this Insight does not constitute written tax advice as described in 31 C.F.R. §10, et seq. and is not intended or written by us to be used and/or relied on as written tax advice for any purpose including, without limitation, the marketing of any transaction addressed herein. Any U.S. federal tax advice rendered by DarrowEverett LLP shall be conspicuously labeled as such, shall include a discussion of all relevant facts and circumstances, as well as of any representations, statements, findings, or agreements (including projections, financial forecasts, or appraisals) upon which we rely, applicable to transactions discussed therein in compliance with 31 C.F.R. §10.37, shall relate the applicable law and authorities to the facts, and shall set forth any applicable limits on the use of such advice.

1 See Tom Sullivan, Next Up For EHRs: Vendors Adding Artificial Intelligence Into The Workflow, HEALTHCARE IT NEWS (Mar. 13, 2018, 12:03 PM), https://www.healthcareitnews.com/news/next-ehrs-vendors-adding-artificial-intelligence-workflow (stating that “[a]rtificial intelligence and machine learning permeated HIMSS18 such that the dynamic duo was just about everywhere in Las Vegas” and that major EHR vendors such as Allscripts, Athenahealth, Cerner, eClinicalWorks and Epic intend to integrate AI into their products).

2 See generally Julia Adler-Milstein & Ashish K. Jha, HITECH Act Drove Large Gains In Hospital Electronic Health Record Adoption, 36 HEALTH AFFAIRS 1416, 1418-19 (2017), available at https://www.healthaffairs.org/doi/pdf/10.1377/hlthaff.2016.1651 (stating that EHR adoption rates amongst eligible hospitals increased from 3.2 percent annually before the HITECH Act, to a rate of annual increase of 14.2 percent, and that “[w]hen we compared only eligible and ineligible hospitals that did not have an EHR before implementation of the meaningful-use incentives, we found that the EHR adoption rate for eligible hospitals increased by 16.5 percent per year, on average, compared to 5.5 percent for ineligible hospitals”).

3 See generally Kenneth Corbin, EHR Vendors Slammed for Interoperability Struggles: Healthcare Providers Cite Vendors for Failing to Deliver Promised Data Portability in Electronic Health Records, CIO (July 24, 2015, 5:49 AM), https://www.cio.com/article/2952402/ehr/ehr-vendors-slammed-for-interoperability-struggles.html (noting that major EHR vendors have struggled to make their products interoperable, inhibiting hospital networks’ attempts to successfully use EHRs); Brian Eastwood, Does Meaningful Use Need an Overhaul?, CIO (June 30, 2014, 9:00 AM), https://www.cio.com/article/2369036/healthcare/does-meaningful-use-need-an-overhaul.html (pointing to EHR vendors as unprepared ahead of Meaningful Use Stage 2’s September 30, 2014 deadline); Letter from James L. Madara, MD, American Medical Association, to Marilyn B. Tavenner, Administrator, Centers for Medicare & Medicaid Services, & Karen B. DeSalvo, MD, National Coordinator for Health Information Technology, Office of the National Coordinator for Health Information Technology (May 8, 2014), https://assets.fiercemarkets.net/public/newsletter/fiercehealthcare/amamuletter.pdf (“Unfortunately, the existing [Meaningful Use] program and many of the EHRs certified for use in meeting the [Electronic Health Records Meaningful Use] program’s requirements stand in the way of” meeting the American Medical Association’s goals of “ensuring physician access to and use of well-developed electronic health records (EHRs)[;]” improving patient care; and driving “practice efficiencies.”). Meaningful Use provides a set of requirements that hospitals have to meet in order to be reimbursed for their new EHRs. See Meaningful Use Introduction, CTRS. FOR DISEASE CONTROL & PREVENTION (last updated Jan. 18, 2017), https://www.cdc.gov/ehrmeaningfuluse/introduction.html [hereinafter CDC Meaningful Use]. These requirements included certain minimum levels of user participation in the EHR system, data capture and sharing requirements for patient and other relevant information, interoperability between different systems including transmissibility of patient records, and electronic prescription-writing, amongst others. Id.

4 See CDC Meaningful Use, supra note 3 (“Specific to the Stage 2 MU Public Health objectives, the capability to submit electronic data for Immunizations is in the core set for EPs, and the capability to submit electronic data for Immunizations, Reportable Laboratory Results and Syndromic Surveillance are all in the core set for EHRs.”). See generally Corbin, supra note 3 (paraphrasing Senate Health, Education, Labor and Pensions Committee chairman Lamar Alexander as suggesting to Health and Human Services Secretary Sylvia Mathews Burwell that the government should “hold off on issuing any new mandates on EHRs to allow the industry [to] catch up to the current iteration of the meaningful use standard for Medicare reimbursements”).

5 One of the authors of this article worked in healthcare information technology for nearly five years and observed that many HIT staff were forced to learn multiple EHR systems post-2008 because of changing professional roles (e.g., varying the number of applications they supported or shifting from clinical to IT roles), transitions between hospitals and job positions, industry consolidation, and the like. For more on physician dissatisfaction with EHRs and the burnout caused by learning curves, see, e.g., Roger Collier, Electronic Health Records Contributing to Physician Burnout, 189 CMAJ E1405-1406 (Nov. 13, 2017), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5687935/; Kevin B. O’Reilly, EHR Switch Poses Learning Curve in the Surgical Suite, AMA WIRE (Feb. 20, 2017), https://wire.ama-assn.org/practice-management/ehr-switch-poses-learning-curve-surgical-suite.

6 The medical industry is known for having very high rates of burnout. Medscape reported that nearly 40% of physician respondents felt burned out in 2013, 46% in 2015, and 42% in 2018. Carol Peckham, Medscape National Physician Burnout & Depression Report 2018, MEDSCAPE (Jan. 17, 2018), https://www.medscape.com/slideshow/2018-lifestyle-burnout-depression-6009235#1; Carol Peckham, Physician Burnout: It Just Keeps Getting Worse, MEDSCAPE (Jan. 26, 2015), https://www.medscape.com/viewarticle/838437.

7 The author noted this as a common customer issue when he worked in the medical software industry. Additionally, the RAND Corporation included similar findings in its research report on physician satisfaction. See Mark W. Friedberg et al., Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems, and Health Policy, 3 RAND HEALTH Q. 1, 39 (“We still get things faxed, and so we get the paper. Then the paper we have to … scan it into the system. So it hasn’t really saved us completely from paper. I’ve been in the system now two years, about, and still we have papers. We still have to scan, every day we have to do this.” (alteration in original) (quoting an unnamed physician on the EHR process)) (available at https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR439/RAND_RR439.pdf).

8 See, e.g., Tom Sullivan, eClinicalWorks Clients ‘Left Out in the Cold’ as EHR Vendor Not Complying with DOJ Settlement: U.S. Government Got Approximately $125 Million Out of the False Claims Case But What About eClinicalWorks Customers?, HEALTHCARE IT NEWS (May 7, 2018, 11:33 AM), https://www.healthcareitnews.com/news/eclinicalworks-clients-left-out-cold-ehr-vendor-not-complying-doj-settlement (citing and quoting a DOJ settlement corporate integrity agreement, stating that eClinicalWorks is not complying with a DOJ settlement in which it agreed to, amongst other things, transfer customer data to other EHR systems “without penalties or service charges”); Rajiv Leventhal, BREAKING: CMS to Rebrand Meaningful Use Program with New Emphasis on Interoperability, Burden Reduction, HEALTHCARE INFORMATICS (Apr. 24, 2018), https://www.healthcare-informatics.com/article/payment/breaking-cms-overhaul-meaningful-use-program-new-emphasis-interoperability (reaffirming the continuing need for better integration amongst EHR vendors’ systems); Miriam Reisman, EHRs: The Challenge of Making Electronic Data Usable and Interoperable, 42(9) PHARMACY & THERAPEUTICS 572, 572-73 (Sept. 2017), available at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5565131/pdf/ptj4209572.pdf (“Hundreds of government-certified EHR products are in use across the country, each with different clinical terminologies, technical specifications, and functional capabilities. These differences make it difficult to create one standard interoperability format for sharing data. In fact, not even those EHR systems built on the same platform are necessarily interoperable because they are often highly customized to an organization’s unique workflow and preferences.”); Joseph Conn, Q&A: Meditech Founder Pappalardo Says Invention was ‘My Overall Destiny’, MODERN HEALTHCARE (May 23, 2016), https://www.modernhealthcare.com/article/20160523/NEWS/305239999 (quoting the designer of MUMPS and founder of Meditech, Neil Pappalardo, regarding why he created MUMPS and why it is still used); Q&A | Epic CEO Faulkner Tells Why She Wants to Keep Her Company Private, MODERN HEALTHCARE (Mar. 14, 2015), https://www.modernhealthcare.com/article/20150314/MAGAZINE/303149952 (quoting Epic Systems CEO Judy Faulkner during an interview in which she explained why Epic has and will continue to rely on a derivation of the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) language, called “Cache”); The Office of the National Coordinator for Health Information Technology, Moving Forward Towards an Interoperable Learning Health System: Improving Flexibility, Simplicity, Interoperability and Outcomes to Achieve a Better, Smarter and Healthier System (Mar. 2015), https://www.healthit.gov/sites/default/files/CMS-Stage-3-Meaningful-Use-proposed-rule%20_FactSheet.pdf (noting proposed rules for Meaningful Use Stage 3, to stimulate interoperability within the medical software industry). The author also worked in the Magic and MagicCS programming languages (which are derived from MUMPS) when he worked at Meditech.

9 Press Release, Alphabet Investor Relations, Eric Schmidt to Become Technical Advisor to Alphabet (Dec. 21, 2017), https://abc.xyz/investor/news/releases/2017/1221.html.

10 HIMSS TV, Videotape: HIMSS18 Opening Keynote, Eric Schmidt, YOUTUBE (circa minutes 37:00-38:00) (Mar. 16, 2018), https://www.youtube.com/watch?v=ACQes9erfsw.

11 See generally Brett K. Beaulieu-Jones et al., Semi-supervised Learning of the Electronic Health Record for Phenotype Stratification, 64 J. OF BIOMEDICAL INFORMATICS 168, fig.1 (Dec. 2016), https://ac.els-cdn.com/S153204641630140X/1-s2.0-S153204641630140X-main.pdf?_tid=69b54df9-ce85-4a81-804e-d9abaf59f32d&acdnat=1550198940_e9ff78a26db8d5732664142c07dbaff9 (displaying a diagram of the process used for unsupervised and supervised data training); Riccardo Miotto et al., Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records, 6 SCIENTIFIC REP. 1, fig.1 (May 17, 2016), https://www.nature.com/articles/srep26094.pdf (displaying the “[c]onceptual framework used to derive the deep patient representation through unsupervised deep learning of a large EHR data warehouse.”).

12 389 P.3d 517, 524 (Wash. 2017).

13 Id. at 525 (quoting Terhune v. A.H. Robins Co., 577 P.2d 975, 978 (Wash. 1978)).

14 Id.

15 CTR. FOR DEVICES AND RADIOLOGICAL HEALTH, SOFTWARE AS A MED. DEVICE (SAMD): CLINICAL EVALUATION, FDA at 8 (Dec. 8, 2017), https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM524904.pdf [hereinafter SAMD].

16 SAMD, supra note 15, at 2.

17 Taylor, 389 P.3d at 526-27 (citing RESTATEMENT (SECOND) OF TORTS § 402A CMT. K (AM. LAW INST. 1965)).

18 See, e.g., Winter v. G.P. Putnam’s Sons, 938 F.2d 1033, 1036 (9th Cir. 1991) (stating in dictum that software may be considered a “product” in matters of products liability); Lori A. Weber, Bad Bytes: The Application of Strict Products Liability to Computer Software, 66 ST. JOHN’S L. REV. 469, 469-75 (1992) (commenting on the analysis that courts must work through in order to determine whether software is a “good” or a “service” under the Uniform Commercial Code (UCC)).

19 See U.C.C. §§ 2-314 to 2-315.

20 RESTATEMENT (THIRD) OF TORTS: PRODUCTS LIABILITY, § 2 (AM. LAW INST. 1998) (“A product is defective when, at the time of sale or distribution, it contains a manufacturing defect, is defective in design, or is defective because of inadequate instructions or warnings. A product: (a) contains a manufacturing defect when the product departs from its intended design even though all possible care was exercised in the preparation and marketing of the product; (b) is defective in design when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design by the seller or other distributor, or a predecessor in the commercial chain of distribution, and the omission of the alternative design renders the product not reasonably safe; (c) is defective because of inadequate instructions or warnings when the foreseeable risks of harm posed by the product could have been reduced or avoided by the provision of reasonable instructions or warnings by the seller or other distributor, or a predecessor in the commercial chain of distribution, and the omission of the instructions or warnings renders the product not reasonably safe.”).

21 See Mike Miliard, Athenahealth Partners With NoteSwift On AI-powered EHR Documentation, HEALTHCARE IT NEWS (Jan. 31, 2018, 02:22 PM), https://www.healthcareitnews.com/news/athenahealth-partners-noteswift-ai-powered-ehr-documentation (“Wayne Crandall, president and CEO of NoteSwift, said the [AI-powered clinical documentation tool, called ‘Samantha,’] is a way to help alleviate physician burnout and increase face time with patients.”).

22 See Miliard, supra note 21 (“Physicians have a new virtual assistant for their EHR, and her name is Samantha. The technology was launched this past year by Boston-based NoteSwift, and now the AI-powered clinical documentation tool is joining with athenahealth to help docs with their charting.”).