
DistilINFO GovHealth Advisory

AMIA Urges FDA to Modify AI, Machine Learning Regulatory Framework


June 11, 2019

AMIA is encouraging the FDA to refine its AI and machine learning regulatory framework across several areas, including bias and cybersecurity.

AMIA is encouraging FDA to modify its regulatory framework for Artificial Intelligence (AI)/Machine Learning (ML)-based software as a medical device (SaMD), particularly in areas of potential bias and cybersecurity risks.

In April 2019, FDA announced that it would develop a framework for regulating AI products that self-update based on new data. Although FDA has authorized other AI products, these products typically use “locked” algorithms that don’t continually adapt or learn each time the algorithm is used.
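
To make the distinction concrete, the following sketch (illustrative only, not drawn from FDA's framework or any authorized device; every name in it is hypothetical) contrasts a locked model, whose parameters never change after authorization, with a continuously learning one that also adjusts its parameters after each new case:

    # Illustrative Python sketch only; all names are hypothetical.
    class LockedModel:
        """Parameters are frozen after authorization; inference never changes them."""
        def __init__(self, weights):
            self.weights = weights

        def predict(self, features):
            # Simple linear score; the weights stay fixed in the field.
            return sum(w * x for w, x in zip(self.weights, features))

    class ContinuouslyLearningModel(LockedModel):
        """Also nudges its parameters toward each newly observed outcome."""
        def update(self, features, observed_outcome, learning_rate=0.01):
            error = observed_outcome - self.predict(features)
            self.weights = [w + learning_rate * error * x
                            for w, x in zip(self.weights, features)]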

In response to FDA’s request for feedback, AMIA offered comments on the draft framework, and outlined areas that may need to be refined.

“Properly regulating AI and Machine Learning-based SaMD will require ongoing dialogue between FDA and stakeholders,” said AMIA President and CEO Douglas B. Fridsma, MD, PhD, FACP, FACMI. “This draft Framework is only the beginning of a vital conversation to improve both patient safety and innovation. We certainly look forward to continuing it.”

AMIA commended the FDA for publishing the draft framework, and for offering ideas such as SaMD Pre-Specifications (SPS), Algorithm Change Protocol (ACP), and Good Machine Learning Practices (GMLP), all of which will guide new regulatory standards for AI and machine learning.

However, AMIA also offered several recommendations for improving the framework, including a stronger acknowledgement that continuously learning algorithms must be treated differently from “locked” algorithms.

“While the Framework acknowledges the two different kinds of algorithms, we are concerned that the Modifications Framework is rooted in a concept that both locked and continuously learning SaMD provides opportunity for periodic, intentional updates,” AMIA wrote.

“In particular, the ACP section assumes that periodic re-training of SaMD will occur, and that this re-training will do so under controlled circumstances where opportunities to evaluate / retest the impact of changes will occur.”

AMIA advised FDA to include periodic evaluation requirements in the new framework, regardless of planned updates or re-training. The organization also suggested that FDA get additional feedback to determine when periodic evaluations should happen.
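
As a rough illustration of what such a periodic evaluation might look like in practice, the sketch below re-scores a model against a fixed reference set and compares the result to a stored baseline, whether or not the model has been retrained; the function name, threshold, and data format are invented for the example:

    # Hypothetical periodic check, run on a schedule regardless of retraining.
    def periodic_evaluation(model, reference_cases, baseline_accuracy, tolerance=0.02):
        """Return True if performance stays within tolerance of the recorded baseline."""
        correct = sum(
            1 for features, label in reference_cases
            if (model.predict(features) >= 0.5) == bool(label)
        )
        accuracy = correct / len(reference_cases)
        return accuracy >= baseline_accuracy - tolerance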

AMIA also pointed out that modern AI can learn from poor or biased data, and it may not be able to explain the decisions it offers. To address this problem, AMIA recommended that FDA require a review of AI technology whenever it learns on populations that differ from its original training population.

“There should be strong requirements regarding transparency and availability of the original and update training data set’s characteristics. Further, the FDA should develop an exhaustive list of data characteristics, such as training set population, to enumerate the dimensions for intended use,” AMIA wrote.

“Especially when continuously learning algorithms are applied to different populations or rely on different types of data inputs (e.g. manual v. automated) from those inputs they were originally trained, there is a need for users to understand the potential impacts of new inputs or impacts to the SaMD’s intended use.”
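
One way to picture the transparency AMIA describes is a direct comparison between the documented characteristics of the training set and the population a device actually encounters in the field; the profile fields and tolerance in this sketch are hypothetical:

    # Hypothetical comparison against a documented training-set profile.
    TRAINING_SET_PROFILE = {"mean_age": 54.0, "pct_female": 0.52, "pct_manual_input": 0.0}

    def population_shift(deployment_profile, tolerance=0.10):
        """Flag characteristics that drift more than the tolerance (relative to the training value)."""
        flagged = {}
        for key, trained_value in TRAINING_SET_PROFILE.items():
            observed = deployment_profile.get(key)
            if observed is None:
                flagged[key] = "not reported"
            elif abs(observed - trained_value) > tolerance * max(abs(trained_value), 1e-9):
                flagged[key] = (trained_value, observed)
        return flagged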

In addition to these recommendations, AMIA encouraged FDA to consider how security risks could impact AI.

“We encourage FDA to consider how cybersecurity risks, such as hacking or data manipulation that may influence the algorithm’s output, may be addressed in a future version of the Framework,” AMIA said.

“For example, we could envision a need for specific types of error detection geared towards preventing a system adaptation to an erroneous signal. Detection of data that may have either been corrupted or manipulated should be a priority.”
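
A minimal sketch of that idea, assuming the continuously learning model from the earlier example and an invented plausibility test, gates every adaptation behind a corruption check so that suspect inputs never reach the update step:

    # Hypothetical gate on adaptation: suspect signals are rejected before
    # they can influence the model's parameters.
    def looks_corrupted(features, expected_range=(-10.0, 10.0)):
        """Crude plausibility check; a real device would rely on validated detectors."""
        low, high = expected_range
        return any(not (low <= x <= high) for x in features)

    def guarded_update(model, features, observed_outcome):
        if looks_corrupted(features):
            return False  # signal rejected; no adaptation occurs
        model.update(features, observed_outcome)
        return True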

AMIA also made suggestions on how FDA could reduce potential biases in AI and machine learning algorithms. The draft framework considers algorithms in the context in which they will be designed, AMIA noted. However, even when discrimination is not intended, algorithms can exhibit bias against people of certain ethnicities, genders, ages, socioeconomic backgrounds, and other characteristics.

“We recommend that FDA develop guidance about how and how often developers of SaMD-based products test their products for such biases and adjust algorithms to eliminate identified biases,” AMIA said.
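
As one hypothetical way to operationalize such testing, a developer could compare a performance metric across demographic groups and flag any group that trails the best-performing one by more than a chosen margin; the metric, grouping, and threshold below are illustrative assumptions, not FDA requirements:

    # Hypothetical bias check across demographic groups.
    def group_accuracy_gaps(results_by_group, max_gap=0.05):
        """results_by_group maps a group label to a list of (prediction, label) pairs."""
        accuracy = {
            group: sum(1 for p, y in pairs if p == y) / len(pairs)
            for group, pairs in results_by_group.items()
        }
        best = max(accuracy.values())
        return {g: a for g, a in accuracy.items() if best - a > max_gap}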

With these recommendations, AMIA expects to continue the conversation around regulating AI and machine learning SaMD and improving patient care.

“Together, further inquiry will help improve FDA’s ability to regulate SaMD and help potential users understand the intentions/limitations of SaMD,” AMIA concluded.

“As the FDA endeavors to better understand this space, AMIA offers its support and the support of its members to help regulators achieve the dual goal of patient safety and innovation.”

Date: June 12, 2019

Source: Health IT Analytics



© DistilINFO Publications