Google’s DeepMind AI outperformed radiologists in detecting breast cancer, according to a retrospective study published in Nature on Wednesday. After being trained on thousands of mammograms, the system was able to identify breast cancer cases more accurately than the radiologists who had made the initial assessments.
Researchers from Google, Northwestern University, the NHS and Imperial College London tested the system on thousands of mammograms from previous cases in the U.S. and the U.K. Patients in these cases later had a biopsy proving the breast cancer diagnosis, or follow-up imaging that showed no cancer.
The study showed a 5.7 percent reduction in false positives, or diagnoses of breast cancer when there was none, and a 9.4 percent reduction in false negatives when analyzing images from 18,000 U.S. patients. For U.K. patients, the system reduced false positives by 1.2 percent and false negatives by 2.7 percent. It’s worth noting that researchers had access to a much larger dataset in the U.K., with images from more than 100,000 patients.
Researchers also pitted the system against a team of six radiologists in interpreting 500 randomly selected cases. The AI outperformed these radiologists too, though it also missed some cases that all six of them had flagged as cancer.
The system is “…capable of surpassing human experts in breast cancer prediction,” according to the study’s abstract. But the tech is unlikely to replace radiologists anytime soon.
While machine learning algorithms are well suited to repetitive tasks, such as combing through hundreds of x-rays, there’s still lots of work to be done before these systems begin appearing in clinics.
“I think AI is perfect to support breast cancer screening. Machines are perfect for doing mundane, high-volume tasks,” said Dr. Hugh Harvey, managing director of digital health consulting firm Hardian Health and a former radiologist. “All of this early research is all well and good. To actually get it deployed into the clinic is a whole lot of work.”
Any AI system would need to get approval from the Food and Drug Administration. It would also need to be able to connect with a hospital’s health record systems, and would need some sort of surveillance tool in place to make sure it’s not drifting in accuracy.
The Nature study also has some limitations. Although researchers had access to a robust dataset, most of the images came from just one manufacturer’s mammography system. Additionally, a broader spread of population demographics would have been useful in making more real-world predictions. Most of all, the technology will eventually need to be tested in a clinical trial setting.
“The real world is more complicated and potentially more diverse than the type of controlled research environment reported in this study,” Dr. Etta Pisano, chief research officer for the American College of Radiology, wrote in an analysis of the study.
She noted that earlier technologies, such as computer-aided detection of breast cancer, showed promising results in experimental testing. But when it came to real-world applications, the systems fell short.
In the long run, these systems will also have to prove some sort of concrete benefit to be adopted, Harvey said. For example, can they reduce patient mortality? In Europe, where two radiologists must review the same case, can they reduce costs? For radiologists who process 500 images in a session, will they improve efficiency?
“Those are the papers we’ll see in the next couple of years,” Harvey said. “The only industry we can look at to give us an estimate of how long (approval) takes is the pharmaceutical industry. People predict the digital sector will go faster, but we still have to do these prospective studies.”
Source: MedCity News