The Diagnostic Image Analysis Group is part of the Departments of Radiology and Nuclear Medicine, Pathology, and Ophthalmology of Radboud University Medical Center. We develop computer algorithms to aid clinicians in the interpretation of medical images and thereby improve the diagnostic process.
The group has its roots in computer-aided detection of breast cancer in mammograms, and we have expanded to automated detection and diagnosis in breast MRI, ultrasound and tomosynthesis, chest radiographs and chest CT, prostate MRI, neuro-imaging, and the analysis of retinal and digital pathology images. Our primary technology is deep learning.
It is our goal to have a significant impact on healthcare by bringing our technology to the clinic. We are therefore fully certified to develop, maintain, and distribute software for the analysis of medical images in a quality-controlled environment (MDD Annex II and ISO 13485).
On this site you will find information about the history of the group and our collaborations, an overview of the people in DIAG, current projects, publications and theses, contact information, and information for those interested in joining our team.
David Tellez et al. published a new method to train neural networks directly on gigapixel whole-slide images, avoiding the need for fine-grained annotations. This approach allows the network to discover new predictive features by using automatically derived 'annotations' such as molecular biomarkers or patient outcome.
In their work, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, the authors present the neural image compression (NIC) method, which opens the door to training neural networks with slide-level labels obtained automatically, e.g., molecular biomarkers or patient outcome. This allows the network to discover previously unknown visual features that are relevant for predicting the target at hand. NIC works in two steps. First, gigapixel images are compressed by a neural network trained in an unsupervised fashion, which summarizes each image very efficiently, drastically reducing its size while retaining most semantic information. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations.
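The two-step pipeline can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the "encoder" below is a fixed random projection of each patch, standing in for the unsupervised neural encoder, and all names, patch sizes, and code dimensions are assumptions chosen for readability.

```python
import numpy as np

def compress_wsi(wsi, patch=128, code_dim=16):
    """Step 1: tile the slide and encode each patch into a short vector.

    The stand-in 'encoder' here is a random linear projection of the
    flattened patch; in NIC it is a neural network trained without labels.
    """
    h, w, c = wsi.shape
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((patch * patch * c, code_dim))
    proj /= np.sqrt(patch * patch * c)
    rows, cols = h // patch, w // patch
    grid = np.empty((rows, cols, code_dim))
    for i in range(rows):
        for j in range(cols):
            p = wsi[i*patch:(i+1)*patch, j*patch:(j+1)*patch].reshape(-1)
            grid[i, j] = p @ proj  # each patch becomes one code_dim vector
    return grid  # compressed representation with shape (rows, cols, code_dim)

# Step 2 would train a CNN on `grid` against a single slide-level label;
# here we only show the size reduction that makes such training feasible.
wsi = np.zeros((1024, 1024, 3), dtype=np.float32)  # toy stand-in for a gigapixel slide
grid = compress_wsi(wsi)
print(wsi.size, grid.size)  # prints 3145728 1024
```

The compressed grid preserves the spatial layout of the slide, which is what lets the second-stage CNN integrate both global and local visual information.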
David Tellez et al. compared several encoding strategies, namely reconstruction error minimization, contrastive training, and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. They found that NIC successfully exploits visual cues associated with image-level labels, integrating both global and local visual information. Furthermore, they visualized the regions of the whole-slide images (WSIs) that the classifier attended to and confirmed that these overlapped with annotations from human experts.
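Of the compared strategies, reconstruction error minimization is the simplest to illustrate: the encoder is trained jointly with a decoder so that the short code suffices to reproduce the input patch. A minimal linear sketch of that objective follows; the weights, dimensions, and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
patch = rng.standard_normal(64)              # flattened toy image patch
W_enc = rng.standard_normal((64, 8)) * 0.1   # encoder: 64 pixels -> 8-dim code
W_dec = rng.standard_normal((8, 64)) * 0.1   # decoder: 8-dim code -> 64 pixels

code = patch @ W_enc                   # compressed code for the patch
recon = code @ W_dec                   # attempted reconstruction of the patch
loss = np.mean((patch - recon) ** 2)   # reconstruction error to be minimized
print(code.shape, loss)
```

Training would adjust `W_enc` and `W_dec` to drive this loss down; only the encoder is then kept to compress slides. Contrastive and adversarial training replace this objective with different training signals while keeping the same encoder role.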