My Work Explained:

Deep Learning for Medical Imaging

November 2018, by Dr. James Condon | Semi-technical | Medical Imaging | Deep Learning

This blog post is part of the AI Collaborative Network "My Work Explained" series and is featured in Issue 1 of the #AICollaborative Network Newsletter.

I’m part of a group of researchers at the University of Adelaide who are using a form of AI called deep learning to help interpret medical images, including mammograms. For some background, about 1 in 8 women will develop breast cancer by age 85. Mammography screening of women without any symptoms helps pick up breast cancer earlier. Early detection gives a better chance of treating cancer before it spreads around the body. Screening mammography from age 40 has been shown to reduce the number of women who die from breast cancer.

Deep learning involves taking an input and, over many stages or layers, applying various filters and equations to produce some output. In deep learning, all of these filters, equations and layers are built with computer programming into an ‘architecture’ referred to as a neural network, reflecting its loose, original inspiration from animal neurology.
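To make the ‘layers of filters and equations’ idea concrete, here is a minimal sketch in Python with NumPy. It is not our actual code; the layer sizes and random weights are purely illustrative.

import numpy as np

# Each 'layer' multiplies its input by a set of adjustable numbers
# (weights) and passes the result through a simple non-linear step.
def layer(x, weights):
    return np.maximum(0, x @ weights)  # keep positives, zero out negatives

rng = np.random.default_rng(0)
x = rng.random(4)            # a toy input of 4 numbers
w1 = rng.random((4, 3))      # weights ('filters') for layer 1
w2 = rng.random((3, 1))      # weights for layer 2

output = layer(layer(x, w1), w2)   # input -> layer 1 -> layer 2 -> output
print(output)                      # in a trained network, this would be the prediction

A real network simply stacks many more of these layers, and the ‘learning’ described below is the process of finding good values for all of those weights.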

Let’s take facial recognition as an example. With black and white images, every pixel is given a numerical value based on its intensity (say 0 for black, 255 for white and somewhere in between for shades of grey). The image starts as a matrix of numbers (see Figure 1).
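As a toy illustration of that matrix-of-numbers idea, here is what a tiny grey-scale image looks like to a computer, sketched in Python with NumPy (the pixel values below are made up):

import numpy as np

# A tiny 5x5 grey-scale 'image': each entry is a pixel intensity
# between 0 (black) and 255 (white).
face_patch = np.array([
    [ 12,  40,  80,  40,  12],
    [ 40, 180, 220, 180,  40],
    [ 80, 220, 255, 220,  80],
    [ 40, 180, 220, 180,  40],
    [ 12,  40,  80,  40,  12],
], dtype=np.uint8)

print(face_patch.shape)  # (5, 5): just a matrix of integers
print(face_patch[2, 2])  # 255: the brightest pixel, in the centre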

Figure 1 (left to right): an image of a face, grey-scale and down-sampled for simplicity; pixel values corresponding to pixel intensity; a matrix (i.e. a ‘table’ or ‘spreadsheet’) of integers.

Serena Yeung and Alex Alahi, Stanford University - Available at http://ai.stanford.edu/~syyeung/cvweb/tutorial1.html

At each layer of the network, different functions are applied to this matrix, creating a map of which features are present in various locations. For example, one part of the network might detect straight lines, another curved lines, another circles, another a certain pattern of dark spots, and so on (a small code sketch follows Figure 2).

Figure 2. Adit Deshpande, available at https://adeshpande3.github.io/
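To give a flavour of what one of these feature-detecting filters does, here is a minimal sketch in NumPy. The vertical-edge filter below is a classic hand-made example; in a real network the filter values are learned rather than chosen by hand.

import numpy as np

def feature_map(image, kernel):
    # Slide a small filter ('kernel') over the image and record how
    # strongly each neighbourhood matches it: a 'feature map'.
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A hand-made filter that responds to vertical edges
# (dark on the left, bright on the right).
vertical_edge = np.array([
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
])

image = np.random.randint(0, 256, size=(8, 8)).astype(float)  # stand-in image
print(feature_map(image, vertical_edge))  # where do vertical edges appear?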

These layers, and what each of them detects, are self-adjusted and tuned over many, many correctly labelled training examples, according to whether each adjustment moves the output closer to or further from the correct answer (1 for ‘Jane’, 0 for ‘not Jane’). This training stage, in which the network gradually gets better at arriving at the correct output for known examples, is the ‘learning’. In effect, we instruct the network: “Here is a big bunch of numbers. Some of them need to equal 1 (‘Jane’), some need to equal 0 (‘not Jane’). Come up with a set of computations that can separate the bunches of numbers that equal Jane from those that do not.”
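Here is roughly what that training process looks like in code: a minimal sketch, assuming the PyTorch library and random stand-in data rather than real labelled photos of Jane. The layer sizes, learning rate and number of passes are illustrative only.

import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                           # one output: a 'Jane-ness' score
)
loss_fn = nn.BCEWithLogitsLoss()                # how wrong was each answer?
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.rand(100, 1, 28, 28)             # 100 fake 28x28 'photos'
labels = torch.randint(0, 2, (100, 1)).float()  # 1 = Jane, 0 = not Jane

for epoch in range(5):                 # many passes over the labelled examples
    scores = model(images)
    loss = loss_fn(scores, labels)     # distance from the correct outputs
    optimiser.zero_grad()
    loss.backward()                    # work out how to nudge each filter
    optimiser.step()                   # nudge the filters slightly
    print(epoch, loss.item())          # the loss should drift downwards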

As we move further down the layers of calculations, the network detects more abstract and sophisticated things. In the earlier layers, ‘low-level’ features like lines and blotches of intensity are detected; then ‘high-level’ features like eyes and hair; and eventually even features specific to Jane: Jane’s eyes, Jane’s mouth, Jane’s hair and so on.

Once we have a trained network, one that has established the best combination of filters and calculations, we can give it a never-before-seen image and it will return a probability of the image being ‘Jane’s face’ (or not).
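A minimal sketch of that prediction step, again assuming PyTorch, with an untrained stand-in model and a hypothetical saved-weights file in place of a genuinely trained network:

import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
# In practice the weights learned during training would be loaded here,
# e.g. model.load_state_dict(torch.load("jane_classifier.pt"))  # hypothetical file
model.eval()                              # switch off training-only behaviour

new_image = torch.rand(1, 1, 28, 28)      # stand-in for a never-before-seen photo
with torch.no_grad():                     # just a prediction, no learning
    probability = torch.sigmoid(model(new_image)).item()

print(f"Probability this is Jane's face: {probability:.2f}")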

In the same way, we can train a network to tell the difference between a mammogram of a normal breast and a mammogram that has signs of cancer. This has been done by many groups around the world, several of which have recently reported performance above that of a human radiologist. One area of active research is using deep learning to differentiate between particular subtypes of breast cancer. Determining a subtype would normally require a biopsy or surgical specimen of tissue, a microscope, special dyes and several specialists, and takes at least several days to a week.

So far, deep learning has only been applied in hindsight, to retrospective databases of mammograms. Our ultimate goal is to get this technology safely and securely into local breast screening clinics to assist human radiologists, get results to women more quickly, reduce the number of false positives (where women are recalled to the clinic and have a biopsy that does not show cancer) and false negatives (missed, subtle cancers), and maybe even accurately predict aspects of cancer aggressiveness that will better guide treatment decisions.

If you’re interested or want more information, check out Welch Labs’ Neural Networks Demystified series on YouTube (https://www.youtube.com/watch?v=bxe2T-V8XRs) and Adit Deshpande’s blog post ‘A Beginner’s Guide to Understanding Convolutional Neural Networks’ (https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/).

Dr. James Condon, MBBS, UoA HDR candidate

James has been a member of the Artificial Intelligence Collaborative Network since September 2018.

James completed a Bachelor of Medicine with Bachelor of Surgery at the University of Adelaide in 2014. Having completed an internship in the Central Adelaide Local Health Network and a residency at the Lyell McEwin Hospital, he has worked in a range of health care settings, including general surgery, general medicine, emergency medicine, radiation oncology, neurology, mental health and clinical trials.

His interests include teaching, medical imaging, neuropsychiatry, data science and deep learning. He was once an oboe soloist at a tsunami relief concert in Indonesia.

Find out more by checking out his Radiopaedia profile, reading his article on psychoradiology, or getting in contact via LinkedIn.