As AI continues to sweep the healthcare industry, developers and providers will need to focus on patient safety if AI is to become part of everyday clinical practice.
Artificial intelligence (AI) has numerous applications for the healthcare industry. Machine learning, natural language processing, and robotics can predict an individual’s risk of contracting HIV, assess a patient’s risk of inpatient violence, and assist in surgeries.
Proponents of the technology are optimistic about its potential to impact clinical care through early and accurate identification of disease and reduction of administrative burdens for providers.
Opponents, however, remain skeptical and, with an eye trained on patient safety, warn that AI is not a panacea to cure all that ails the healthcare industry.
Without careful forethought, AI could do more harm than good for healthcare.
This first piece in a two-part series on the implications of integrating artificial intelligence into routine clinical care examines its impact on patient safety.
An AI system is only as good as the data used to create it. Many AI technologies rely on machine learning, a process wherein the system learns to predict outcomes based on training datasets. An algorithm combs through large amounts of data and generates a result, such as a diagnosis. The more robust and varied the input data, the more accurate the output.
The quality of data inputted determines the reliability of outputted information. Flawed or biased underlying data can result in faulty learning and generate erroneous outputs. Minority populations are often underrepresented in data, making them particularly vulnerable to over- or under-diagnosis when AI strategies are used to inform decision-making.
Homogeneous data forms the basis of many AI tools, which means that their outputs do not apply to a diverse population. For example, if a deep learning tool that predicts an individual’s risk of contracting HIV learned from a dataset that only included males, the tool would not have the same power to reliably and accurately predict the risk in female patients. Using the tool as-is would result in ultimately harmful or unnecessary interventions.
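To make that failure mode concrete, the following is a minimal, purely illustrative sketch in Python, using synthetic data and scikit-learn (neither of which appears in the original reporting). It assumes a hypothetical risk model trained on only one group, whose outcomes happen to be driven by different factors than a second, unseen group.

```python
# Hypothetical sketch: a classifier trained on one subgroup only can look
# accurate on that subgroup while degrading on a subgroup it never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Generate synthetic 'risk factor' features and binary outcomes.
    The true feature-outcome relationship (weights) differs per group."""
    X = rng.normal(size=(n, 3))
    logits = X @ weights
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A: the only group represented in the training data.
X_a, y_a = make_group(5000, weights=np.array([2.0, 0.5, 0.0]))
# Group B: absent from training; outcome driven by a different factor.
X_b, y_b = make_group(5000, weights=np.array([0.0, 0.5, 2.0]))

model = LogisticRegression().fit(X_a, y_a)

print("Accuracy on group A:", accuracy_score(y_a, model.predict(X_a)))
print("Accuracy on group B:", accuracy_score(y_b, model.predict(X_b)))
# The second number is typically much lower, mirroring the HIV-risk example:
# predictions for the unrepresented group are unreliable.
```

The specific numbers depend on the synthetic data, but the pattern is the point: nothing in the training process warns the model that group B exists, so its apparent accuracy says little about how it will perform there.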
“What these systems are doing is learning from ordinary people’s behavior. If you were to take a poll among a certain group of people as to what is ethically best, it depends on that group of people. Different groups are going to come up with different things,” said Susan Anderson, PhD, professor emerita of philosophy at the University of Connecticut.
She and her husband, Michael Anderson, PhD, published a response on AI in the American Medical Association's Journal of Ethics and are currently developing an elder care robot that uses ethical principles to assist with daily activities such as medication reminders.
An algorithm can be adjusted if the developers see this problem and account for potentially biased input, Michael Anderson said. “Computers are not able to make a differentiation unless you foresee that problem and make sure there’s enough data to cover it.”
Further data can be integrated into a system to overcome the potential bias, but if that data is not available, the output of the system risks being inherently biased and potentially unrepresentative of a particular patient population. Downstream, this can make AI less effective for that patient population.
Michael Anderson noted, though, that eliminating bias in the dataset can be challenging.
“You have to clean the data of bias. You’d have to go through each person’s piece of data and look to find the bias. That seems crazy,” he said.
While using robust and high-quality data helps ensure the accuracy of AI, respect for patients' confidentiality and privacy must remain paramount. Some patients may not feel comfortable with their data being shared and used to develop AI tools, especially if it is unclear how and where their data is stored.
Patients have the right to decide if and how their data is shared. To make informed decisions, they must fully understand AI and its potential vulnerability to hacking and data breaches.
“One of the basic biomedical ethical principles is respecting the autonomy of the patient,” explained Susan Anderson.
Patients should also be informed of the potential for inaccurate diagnosis, whether over-diagnosis or misdiagnosis, as a result of AI technologies. Some research shows that machine learning can diagnose disease more accurately than trained clinicians. Other studies demonstrate the shortcomings of the same machine learning tools, which missed more malignant cases than a human physician. These findings indicate a need to weigh the severity of outcomes alongside the potential harm inaccurate outputs pose to patients.
The monetary, clinical, and emotional costs to the patient of an over-diagnosis or misdiagnosis should be taken into consideration for AI implementation. In either case, if the false findings pose too high a cost burden, using AI output for decision-making is not the best strategy and could potentially harm the patient.
Human-centric thinking must remain at the heart of AI technologies if AI is to become part of clinical practice.
“Robots need to be programmed to do the right thing. That’s the heart of the concept. You need to understand someone’s suffering, but the robot itself does not have to suffer in order to make sure it does the right thing,” Michael Anderson stated.
The purpose of AI is to help the provider make the best decision, one that ultimately benefits the patient. Often, how the system arrives at its predictions (which variables are most influential and how the information is combined) is unclear to the end user, making clinical decisions difficult. This "black box" problem means the provider cannot assess the accuracy of the methods that lead the system to its output.
“Black boxes are when it’s hard to describe what’s going on, if at all possible. They have no problem with making decisions that nobody can understand or can explain,” Michael Anderson elaborated. “People are now seeing that maybe this is not something that they were hoping it would be.”
Given the novelty of AI, informed consent becomes more complicated. Not only does the provider need to explain to a patient what AI is trying to accomplish, but they must also ensure the patient understands the concepts of AI and its associated risks. Providers must understand the process AI is going to undertake to generate its outcome, which is impossible if the black-box problem is present. Patients cannot make informed decisions about their care if they do not understand the tools used in their treatment.
Tensions arising from this lack of understanding could alter the doctor-patient relationship, as a patient may no longer fully trust their physician to make the best decisions about their care.
No AI tool is completely accurate yet. If one were, the black box problem would not be an issue: there would be no need to question the methodology because the output would always be correct.
Until AI reaches the point where it does not pose a threat to patient care, its methods must undergo continuous scrutiny. Human-centric and ethical decision-making practices need to remain at the core of AI development.
“There always has to be an ethical judgment that is quite independent of facts. And I just don’t see how you’re going to get that from a bunch of data,” Susan Anderson concluded.
Date: September 04, 2019
Source: HealthITAnalytics