What Are the Ethical and Legal Challenges of AI Use in Medicine?


AI tools can assist with various processes, including medical records analysis, but their use involves ethical and legal risks.

Artificial intelligence (AI) can make exciting contributions to the medical field, whether in medical education, biomedical research, or healthcare delivery. AI comprises machine learning, natural language processing, and robotics. With the introduction of the electronic health record (EHR), the amount of clinical data available has exploded and is often too voluminous for humans to analyze and use. Sophisticated AI tools are now available to assist with medical records analysis and other processes. These tools can identify the core medical facts and problems contained in the records and provide a perceptive summary. Because AI can identify meaningful relationships in raw data, it can support diagnosis as well as treatment and outcome forecasting in many medical situations.

• AI-based diagnostic algorithms are now helping with the diagnosis of breast cancer, supporting radiologists by providing a second opinion.

• Virtual AI assistants can engage in meaningful conversations with patients.

• AI is used in telemedicine delivery, as well as in robotic prostheses and physical task support systems.

The downside is that AI is accompanied by ethical and legal concerns. So, what are some of the major challenges to look out for?

Possibility of discrimination: AI tools analyze voluminous data to find patterns, and the inferences made are used to predict future events and outcomes. In healthcare, this data comes from EHRs, medical insurance claims, and other sources. AI tools use it to predict medical conditions such as diabetes, dementia, stroke, heart disease, and the possibility of opioid abuse. The concern is that such predictions could become part of the patient's EHR. If these predictions are seen by anyone accessing the medical record, they could generate negative outcomes for the patient. Clinicians and administrators who need to access the EHR are bound by HIPAA regulations and cannot disclose sensitive healthcare data to another entity. However, data brokers who mine personal data using AI tools could sell such predictions to third parties, such as employers, lenders, and insurers, that are not bound by HIPAA. These parties can disclose the information freely without the patient's permission. This raises the concern of discrimination based on health: employers looking for healthy employees, moneylenders, landlords, and health insurers could all make adverse decisions. The ADA (Americans with Disabilities Act) applies only to present and past health conditions and does not prohibit discrimination based on medical problems that may arise in the future. And although the Genetic Information Nondiscrimination Act addresses genetic testing, no comparable law prohibits employers or insurers from considering non-genetic AI predictions when determining a candidate's eligibility for employment or health insurance.

Negative impact on the patient: A patient who is predicted to develop cognitive decline in later life, and who learns of this through details in his or her medical record, could develop psychological problems.

AI tools may overpower physicians: Many people fear an AI monopoly, concerned that in the near future computers would make the diagnoses, treatment suggestions, and health predictions, while doctors would become mere implementers. Moreover, AI predictions could be wrong if there are errors in the medical record, and lead to patient harm.

Apart from these ethical issues, AI use in medicine also raises some legal concerns.

• Does the software qualify as a medical device? Regulatory classification is an important consideration when software products are developed, and the key question is whether the software is a medical device. Medical devices must carry a CE marking before they can be marketed, and this marking is granted only after a conformity assessment procedure has been carried out. A medical device without CE marking risks being withdrawn from the market; marketing it would also constitute an administrative offence and could have consequences under criminal law. Software designed to diagnose or treat diseases could be classified as a medical device, while software designed only to provide knowledge or store data may not qualify. When considering regulatory issues, important points include the regulations governing the healthcare profession, where the demarcation between medical and non-medical work is very important, as well as pharmacovigilance aspects and questions of medical reimbursement.

• The question of potential liability is an important one.

o When AI-enabled software systems make diagnoses and treatment suggestions and an adverse event occurs, the question arises as to who is liable for the injury. Typically, doctors can delegate patient care to a colleague who is appropriately qualified and has the experience and skills to provide safe care; the responsibility for overall care remains with the delegating doctor. As of now, it is not clear whether the use of AI tools would be viewed as delegation or as a referral to a specialist or senior physician.

o Suppliers of AI tools could also be found liable if they do not exercise reasonable skill and care in performing their responsibilities. Defects in the technology arising from faulty software engineering design or faulty implementation could prove costly. Liability could also arise under product liability legislation, where it would be based on whether the product is defective.

Issues caused by human error are also a major consideration. Errors in data analysis, accessing and overwriting the wrong patient record, poor-quality data capture, inaccuracy, misinterpretation of data by clinical staff, and user alert fatigue all pose considerable legal risks.

AI solutions are not yet designed to be deployed on an entirely autonomous basis. If an AI solution fails and causes patient harm, the provider, as the person responsible for providing care, may still be at risk of a negligence claim.



The ethical and legal challenges associated with the use of artificial intelligence tools in medicine must be addressed effectively. Experts in the field recommend expanding the HIPAA Privacy Rule to cover anyone who handles patient health information for business purposes. The ADA could likewise be extended to prohibit discrimination on the grounds of predicted future diseases. Physicians who provide patients with AI predictions have the onus of making sure those patients are well informed about the pros and cons of such predictions. Employed correctly, AI can prove useful in many areas, such as comprehensive chart review and improving patient welfare and outcomes. The medical community must be well informed about all the ethical and legal complexities associated with AI use.

www.mosmedicalrecordreview.com

918-221-7791

