Google AI system could aid breast cancer detection: New study
US and UK researchers found artificial intelligence can improve the accuracy of breast cancer screenings.
Researchers in the United States and the United Kingdom have reported that an artificial intelligence (AI) system from Google proved as good as expert radiologists at detecting which women had breast cancer based on screening mammograms – and that the system showed promise at reducing errors.
The study, published in the journal Nature on Wednesday, is the latest to show that AI has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally.
According to the American Cancer Society, radiologists miss about 20 percent of breast cancers in mammograms, and half of all women screened over a 10-year period receive at least one false-positive result.
The findings of the study – developed with Alphabet’s DeepMind AI unit, which merged with Google Health in September – represent a major advance in the potential for the early detection of breast cancer, said study co-author Mozziyar Etemadi of Northwestern Medicine in Chicago in the US.
The team, which included researchers at Imperial College London and the UK’s National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.
They then compared the system’s predictions to the actual results from a set of 25,856 mammograms in the UK and 3,097 from the US.
The study showed the AI system could identify cancers with a similar degree of accuracy to expert radiologists while reducing the number of false-positive results by 5.7 percent in the US-based group and by 1.2 percent in the UK-based group.
It also cut the number of false negatives, where tests are wrongly classified as normal, by 9.4 percent in the US group, and by 2.7 percent in the UK group.
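The two error types above can be made concrete with a short sketch. The counts below are purely illustrative (they are not figures from the Nature study); the sketch only shows how false-positive and false-negative rates, and the relative reductions the study reports, are computed from screening outcomes.

```python
# Minimal sketch of the two error rates discussed above.
# All counts are hypothetical, NOT data from the Nature study.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)  # healthy women wrongly flagged for follow-up
    fnr = fn / (fn + tp)  # cancers wrongly classified as normal
    return fpr, fnr

# Hypothetical human-reader vs. AI outcomes over the same 1,000 screens
reader_fpr, reader_fnr = error_rates(tp=45, fp=100, tn=850, fn=5)
ai_fpr, ai_fnr = error_rates(tp=46, fp=94, tn=856, fn=4)

# Relative reductions, matching how such results are usually reported
fp_reduction = (reader_fpr - ai_fpr) / reader_fpr * 100
fn_reduction = (reader_fnr - ai_fnr) / reader_fnr * 100
print(f"False positives reduced by {fp_reduction:.1f}%")  # 6.0% here
print(f"False negatives reduced by {fn_reduction:.1f}%")  # 20.0% here
```

Note that a "5.7 percent reduction in false positives" is a relative figure: the AI flags 5.7 percent fewer healthy women than the human readers did, not 5.7 percentage points fewer.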
These differences reflect how mammograms are read. In the US, only one radiologist reads the results and the tests are done every one to two years. In the UK, the tests are done every three years, and each is read by two radiologists. When they disagree, a third radiologist is consulted.
In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately predicting breast cancers.
Connie Lehman, chief of the breast imaging department at Massachusetts General Hospital in Boston in the US, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.
The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics. Yet CAD programmes have not significantly improved performance in clinical practice.
The issue, Lehman said, is that current CAD programmes were trained to identify the features human radiologists can already see, whereas newer AI systems learn to spot cancers directly from the actual outcomes of thousands of mammograms.
This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” Lehman said.
Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can make a very well-informed decision,” Etemadi said.
The Nature study has some limitations. Most of the tests were done using the same type of imaging equipment, and the US group contained many patients with confirmed breast cancers.
Crucially, the team has yet to show the tool improves patient care, said Lisa Watanabe, chief medical officer of CureMetrix, whose AI mammogram programme won US approval last year.
“AI software is only helpful if it actually moves the dial for the radiologist,” she said.
Etemadi agreed that those studies are needed – as is regulatory approval, a process that could take several years.