The rise of artificial intelligence over the last decade has had profound implications for the health care industry. From IBM’s Watson to lesser-known innovations that have flown under the radar, such as clinical decision support software and predictive analytics, these changes have infiltrated the field’s daily functions. Congress generally views AI with a mix of trepidation and fascination, and we expect it to keep the subject at arm’s length until provoked to action.
Congress held its first hearing on artificial intelligence in November 2016 but has yet to introduce substantial, targeted legislation directly aimed at regulating the influence of artificial intelligence in the health care sector. The bills currently pending in Congress on this issue are, for the most part, exploratory rather than regulatory in nature: they call for reports, studies, and investments in artificial intelligence. Two companion bills, H.R. 5356 and S. 2806, for example, call for the establishment of a National Security Commission on Artificial Intelligence. Similarly, S. 2217 and its companion bill, H.R. 4625, direct the Department of Commerce to establish a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence. These two bills constitute the FUTURE of Artificial Intelligence Act of 2017, and they are the only pending bills that call for a study that includes a health care component. Additionally, S. 3502 would authorize an emerging technology policy lab within the General Services Administration. Although these bills are relatively unobtrusive, some have parallel bills in both chambers, some are sponsored by a mix of Democrats and Republicans, and all are still sitting in committee.
Some previously enacted legislation, however, has played a minor introductory role in regulating the advancing field; the 21st Century Cures Act is one example. The Act does not directly address artificial intelligence or machine learning, but it does require that, in order to be excluded from regulation as a “device,” software that interprets or analyzes patient records for the purpose of diagnosis or treatment must allow the professional to independently review the basis for the recommendation. Effectively, this limits what clinical decision support software can “do.”
Experience tells us that Congress will likely intervene when something “bad” happens. In almost any health care sector, if something goes awry that harms patients or puts personal information at risk, Congress quickly snaps to attention. We would expect the same with AI.
Congress may also be prompted to act if health care industry players point to legislative or regulatory hurdles that limit the utility of AI technology; should Congress feel sufficient pressure from these stakeholders, it could be motivated to intervene. Finally, what if providers reach a point where they feel pressured to accept a diagnosis or treatment suggestion generated by a device that employs artificial intelligence? If providers begin to raise questions about the use of AI in practice, Congress could be drawn into the conversation.
Currently, there is no sense that Congress will actively engage in legislative oversight of AI in the immediate future; the lack of movement on the existing bills, combined with the overall absence of more targeted legislation, is evidence of this. Even so, potential circumstances such as those described above could spur the House and the Senate to get involved.
Date: November 19, 2018
Source: The National Law Review