Doctors could soon spend less time looking at mammograms, thanks to artificial intelligence

As computers get better at spotting cancer, doctors will have more time to focus on treating patients.
15th March 2018 in Health


In the US alone, tens of millions of mammograms are performed each year. Analyzing these images takes up a lot of doctors' time. The use of computer assistance to help read mammograms is becoming widespread, but doubts persist about whether the practice is helpful enough to justify its steep price tag. Lower-cost deep learning systems, which train themselves to recognize cancer, could help. Thanks to deep-learning methods like those more commonly used to spot everyday objects in photographs, a new system identified cancer’s precise location more than 90 percent of the time in tests. We spoke with the study’s author, Dezso Ribli of Hungary’s Eötvös Loránd University, to learn more.

ResearchGate: What role does computer assisted detection already play in breast cancer detection?

Dezso Ribli: Computer assisted detection (CAD) is meant to help doctors spot lesions that could otherwise be overlooked. In the United States, where mammograms are evaluated by a single radiologist, CAD usage is widespread. In Europe, where mammograms are reviewed by two radiologists, CAD is hardly used in practice.

While initial studies showed promising increases in cancer detection rates with CAD, recently a large study by Lehman et al. found no positive impact of CAD usage on radiologist performance. The benefits of the current technology are therefore questionable, while more than $400 million is spent on it yearly in the US.

RG: How is your new system different?

Ribli: Routinely used CAD solutions are based on methods that predate the deep learning revolution in computer vision. With deep learning, carefully designed neural networks with many, many layers are trained to recognize visual patterns. Unlike previous methodologies, these neural networks learn meaningful patterns and representations from the data itself. In a nutshell, deep learning models can recognize objects in images if you show them enough labeled examples, and they “learn” by refining the parameters of successive filtering steps.
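To make the “learning from labeled examples” idea concrete, here is a minimal, purely illustrative sketch (not the study’s code) of a tiny convolutional network in PyTorch whose stacked filtering layers are refined step by step against labeled image patches. The patch size, layer sizes, and random stand-in data are arbitrary choices made only for the example.

```python
# Illustrative sketch only: a tiny convolutional classifier that "learns" by
# adjusting the parameters of its successive filtering (convolution) layers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),          # two classes, e.g. lesion vs. background
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for real labeled 64x64 grayscale patches.
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()                        # gradients indicate how to adjust each filter weight
    optimizer.step()
```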

In some visual tasks, deep learning has reduced error rates tenfold compared to previous technologies. We think that the old methods in CAD could be replaced by deep learning, and the accuracy of CAD could be drastically increased. Our system applies one of the best deep learning frameworks for object detection to mammography analysis.
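The interview does not name the specific detection framework the group built on, so the sketch below simply shows what applying an off-the-shelf object detector to a single image looks like, using torchvision’s Faster R-CNN purely as an example; the random tensor stands in for a real mammogram, which would normally be loaded from DICOM and scaled to floating point.

```python
# Illustrative only: running a generic, pre-trained object detector on one image.
# This is not the authors' framework, just a readily available stand-in.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 1024, 1024)          # placeholder for a real mammogram

with torch.no_grad():
    prediction = model([image])[0]          # dict with "boxes", "labels", "scores"

# Each predicted box localizes a candidate finding; scores can be thresholded.
for box, score in zip(prediction["boxes"], prediction["scores"]):
    if score > 0.5:
        print(box.tolist(), float(score))
```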


 “In some visual tasks, deep learning has reduced error rates tenfold compared to previous technologies.”


RG: How well did your system perform when you tested it?

Ribli: The system secured second place in the Digital Mammography DREAM Challenge, a prestigious data science competition with more than 1,200 registered participants. Our model was the only one among the best performing solutions able to accurately localize cancers, which is essential for a CAD system to be practically usable. Since then, the model's performance has improved significantly. Details will be shared in another paper about the second phase of the challenge.

On the most recent public digital mammography dataset (INbreast), the model is able to detect more than 90 percent of the cancers at their precise positions, while producing only 0.3 false positive detections per image.
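The interview does not spell out how hits and false positives are counted, so the following is only a hypothetical sketch of how a “sensitivity at N false positives per image” figure could be computed from a detector’s output, under the assumed rule that a detection counts as a hit when its center falls inside the ground-truth box.

```python
# Hypothetical scoring sketch; the study's exact evaluation protocol is not
# described in the interview, and the hit criterion below is an assumption.

def center_inside(box, gt_box):
    """Count a detection (x1, y1, x2, y2) as a hit if its center lies in the ground-truth box."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3]

def score(cases):
    """cases: list of (detections, ground_truth_boxes), one pair per image."""
    found, total_cancers, false_positives = 0, 0, 0
    for detections, gt_boxes in cases:
        total_cancers += len(gt_boxes)
        found += sum(any(center_inside(d, gt) for d in detections) for gt in gt_boxes)
        false_positives += sum(not any(center_inside(d, gt) for gt in gt_boxes) for d in detections)
    sensitivity = found / total_cancers if total_cancers else 0.0
    fp_per_image = false_positives / len(cases)
    return sensitivity, fp_per_image

# Example: one image where the cancer is hit and one spurious box is produced.
print(score([([(10, 10, 50, 50), (200, 200, 220, 220)], [(5, 5, 60, 60)])]))
```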

RG: How does this model compare to standard mammography analysis methods?

Ribli: Unfortunately, direct comparison with currently used solutions is not possible, because those methods were never evaluated on the same datasets. But the results suggest that our model performs better than commercially available solutions.

RG: What were the limitations of your study?

Ribli: Publicly available mammogram datasets are rather small. Our model was mostly trained on around 2,000 scanned cancer images from the 1990s and a small number of additional cancers on digital images. To put that in context, deep learning models for recognizing everyday objects are usually trained on datasets containing hundreds of thousands or even millions of images.

We are sure that, with larger training datasets, the performance of deep learning CAD models will improve. Creating a huge training dataset is not an impossible task: In the US alone, tens of millions of screening mammography exams are performed annually.

RG: Could your approach be adapted to work with other kinds of imaging and other kinds of cancer?

Ribli: Yes. Deep learning models, and object detection models in particular, have the potential to help with any kind of cancer imaging, or with medical imaging more generally.


 “Doctors will be able to concentrate their efforts on the hardest and most complicated cases, which are more suitable for a human mind.”


RG: Do you think there will be a point in the future where AI takes over breast cancer detection altogether, or will there always be a role for human radiologists in analyzing mammograms?

Ribli: Breast cancer screening is performed on the scale of a hundred million exams a year. I think that in the next few decades AI is going to assist radiologists, and progressively take over the easier tasks, which are monotonous and tiring for humans. Doctors will be able to concentrate their efforts on the hardest and most complicated cases, which are more suitable for a human mind and less suitable for a machine. AI will also relieve the pressure on doctors caused by a lack of specialized radiologists.

But keep in mind that, in the end, the detected cancers have to be validated with a biopsy, and surgeries are often performed. I think these tasks will be performed by human doctors for a long time. The potential role of AI in screening mammography is to handle the millions of routine imaging exams, presenting the potential cancers to the doctors who perform follow-up procedures.

RG: What’s next for this research?

Ribli: Larger training and testing datasets need to be collected to enable further improvements. One interesting direction is breast tomosynthesis, an imaging technique proven to be superior to standard mammography. Tomosynthesis analysis takes even more time for a radiologist, and CAD has a potential role in that modality too. Another very important next step is to close the gap between research and practice. We would like to test the system in a clinical setup and eventually introduce it to routine care.

A demonstration version of the CAD model is available here.

Featured image by Margo Wright.

https://www.researchgate.net/blog/post/artificial-intelligence-is-getting-even-better-at-reading-mammograms