Commercially available smartphones and smart speakers could be trained to recognize breathing sounds indicative of cardiac arrest, then call for help, according to a proof-of-concept study published June 19 in npj Digital Medicine.
In the study, researchers from the University of Washington in Seattle used recordings from 911 calls to train an algorithm to recognize audible signs of agonal breathing, a symptom of cardiac arrest in which an individual gasps for air or stops breathing. They also used recordings from sleep studies to teach the algorithm to distinguish benign sounds that interrupt normal breathing patterns, such as snoring and obstructive sleep apnea.
When an Amazon Echo, an iPhone 5s and a Samsung Galaxy S4 were each equipped with the algorithm and placed several feet away from a speaker playing breathing sounds, the artificial intelligence detected agonal breathing with 97 percent accuracy and regular breathing with over 99 percent accuracy. With further testing and development, the researchers suggest, the algorithm could serve as a contactless method for detecting cardiac arrest and calling emergency services.
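The study's core task is binary audio classification: label a clip as agonal breathing or benign breathing sounds. As a minimal sketch of that idea only, the toy example below trains a nearest-centroid classifier on synthetic feature vectors standing in for audio clips. All names, feature dimensions, and the classifier choice are hypothetical; the study's actual model and features are not described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors standing in for audio clips. In the real
# study, clips came from 911-call and sleep-study recordings; here we
# just sample two synthetic clusters.
def make_clips(center, n=50, dim=8):
    return center + 0.3 * rng.standard_normal((n, dim))

agonal_center = np.full(8, 1.0)   # assumed cluster for agonal breathing
benign_center = np.full(8, -1.0)  # assumed cluster for snores/apnea

X_train = np.vstack([make_clips(benign_center), make_clips(agonal_center)])
y_train = np.array([0] * 50 + [1] * 50)  # 0 = benign, 1 = agonal

# Nearest-centroid classifier: a deliberately simple stand-in for the
# study's trained model.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(clips):
    # Distance from each clip to each class centroid; pick the nearest.
    d = np.linalg.norm(clips[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_test = np.vstack([make_clips(benign_center, n=20),
                    make_clips(agonal_center, n=20)])
y_test = np.array([0] * 20 + [1] * 20)
accuracy = (predict(X_test) == y_test).mean()
```

On such well-separated synthetic clusters the classifier is near-perfect; the study's reported 97 percent accuracy on real recordings is a much harder result, since genuine agonal breathing overlaps acoustically with snoring and apnea.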
Next, the researchers will train the algorithm on even more 911 calls, then commercialize the technology through their UW spinout startup Sound Life Sciences. Further development will include devising a way for the devices to listen to breathing sounds without requiring activation phrases like “Hey, Siri” and “Alexa,” while still protecting users’ privacy.
Date: June 26, 2019
Source: Becker’s Health IT & CIO Report