AMIA is encouraging the FDA to refine its AI and machine learning regulatory framework across several areas, including bias and cybersecurity.
AMIA is encouraging FDA to modify its regulatory framework for Artificial Intelligence (AI)/Machine Learning (ML)-based Software as a Medical Device (SaMD), particularly regarding potential bias and cybersecurity risks.
In April 2019, FDA announced that it would develop a framework for regulating AI products that self-update based on new data. Although FDA has authorized other AI products, these products typically use “locked” algorithms that don’t continually adapt or learn each time the algorithm is used.
In response to FDA’s request for feedback, AMIA offered comments on the draft framework, and outlined areas that may need to be refined.
“Properly regulating AI and Machine Learning-based SaMD will require ongoing dialogue between FDA and stakeholders,” said AMIA President and CEO Douglas B. Fridsma, MD, PhD, FACP, FACMI. “This draft Framework is only the beginning of a vital conversation to improve both patient safety and innovation. We certainly look forward to continuing it.”
AMIA commended the FDA for publishing the draft framework, and for offering ideas such as SaMD Pre-Specifications (SPS), Algorithm Change Protocol (ACP), and Good Machine Learning Practices (GMLP), all of which will guide new regulatory standards for AI and machine learning.
However, AMIA also offered several recommendations for improving the framework, including a stronger acknowledgement that continuously learning algorithms must be treated differently from “locked” algorithms.
“While the Framework acknowledges the two different kinds of algorithms, we are concerned that the Modifications Framework is rooted in a concept that both locked and continuously learning SaMD provides opportunity for periodic, intentional updates,” AMIA wrote.
“In particular, the ACP section assumes that periodic re-training of SaMD will occur, and that this re-training will do so under controlled circumstances where opportunities to evaluate / retest the impact of changes will occur.”
AMIA advised FDA to include periodic evaluation requirements in the new framework, regardless of planned updates or re-training. The organization also suggested that FDA seek additional feedback to determine when those periodic evaluations should occur.
AMIA also pointed out that modern AI can be susceptible to learning from poor or biased data, and may be unable to explain the decisions it offers. To address this problem, AMIA recommended that FDA require a review of AI technology when it learns from populations that differ from its training population.
“There should be strong requirements regarding transparency and availability of the original and update training data set’s characteristics. Further, the FDA should develop an exhaustive list of data characteristics, such as training set population, to enumerate the dimensions for intended use,” AMIA wrote.
“Especially when continuously learning algorithms are applied to different populations or rely on different types of data inputs (e.g. manual v. automated) from those [on which] they were originally trained, there is a need for users to understand the potential impacts of new inputs or impacts to the SaMD’s intended use.”
In addition to these recommendations, AMIA encouraged FDA to consider how security risks could impact AI.
“We encourage FDA to consider how cybersecurity risks, such as hacking or data manipulation that may influence the algorithm’s output, may be addressed in a future version of the Framework,” AMIA said.
“For example, we could envision a need for specific types of error detection geared towards preventing a system adaptation to an erroneous signal. Detection of data that may have either been corrupted or manipulated should be a priority.”
AMIA also made suggestions on how FDA could reduce potential biases in AI and machine learning algorithms. The draft framework considers algorithms only in the context for which they were designed, AMIA noted. However, even when discrimination isn’t intended, bias against people of certain ethnicities, genders, ages, socioeconomic backgrounds, and other characteristics can occur.
“We recommend that FDA develop guidance about how and how often developers of SaMD-based products test their products for such biases and adjust algorithms to eliminate identified biases,” AMIA said.
With these recommendations, AMIA expects to continue the conversation around regulating AI and machine learning SaMD and improving patient care.
“Together, further inquiry will help improve FDA’s ability to regulate SaMD and help potential users understand the intentions/limitations of SaMD,” AMIA concluded.
“As the FDA endeavors to better understand this space, AMIA offers its support and the support of its members to help regulators achieve the dual goal of patient safety and innovation.”
Date: June 12, 2019
Source: Health IT Analytics