Diagnostic Image Analysis Group

The Diagnostic Image Analysis Group is part of the Departments of Radiology, Nuclear Medicine and Anatomy, Pathology, Ophthalmology, and Radiation Oncology of Radboud University Medical Center. We develop computer algorithms to aid clinicians in the interpretation of medical images and improve the diagnostic process.

The group has its roots in computer-aided detection of breast cancer in mammograms, and we have expanded to automated detection and diagnosis in breast MRI, ultrasound and tomosynthesis, chest radiographs and chest CT, prostate MRI, neuro-imaging, retinal imaging, pathology and radiotherapy. The technology we primarily use is deep learning.

It is our goal to have a significant impact on healthcare by bringing our technology to the clinic. We are therefore fully certified to develop, maintain, and distribute software for the analysis of medical images in a quality-controlled environment (MDD Annex II and ISO 13485), and we collaborate closely with many companies that use our technology in their products.

On this site you will find information about the history of the group and our collaborations, an overview of the people in DIAG, current projects, publications and theses, contact information, and information for those interested in joining our team.

Highlights

May 2020

[Figure: RTSU-Net pulmonary lobe segmentations in CT scans of COVID-19 patients]

Automated pulmonary lobe segmentation in computed tomography scans is still an open problem, especially for scans with substantial abnormalities, such as in COVID-19 infection. Convolution kernels in recently presented networks only respond to local information within the scope of their effective receptive field, and this may be insufficient to capture all necessary contextual information.

Xie Weiyi and colleagues argue that contextual information is critically important for accurate delineation of the pulmonary lobes, especially when the lungs are severely affected by diseases such as COVID-19 or COPD. They propose a relational approach (RTSU-Net) that leverages global context by introducing a first stage in which the receptive field encompasses the entire scan and by using a novel non-local neural network module.
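
To make the non-local idea concrete, the following is a minimal sketch of a 3D non-local (self-attention) block in PyTorch, in which every voxel of a feature map aggregates information from all other voxels. The class name, channel reduction, and tensor shapes are illustrative assumptions and not the actual RTSU-Net module; in practice, such global attention is only affordable on a downsampled representation of the scan.

    import torch
    import torch.nn as nn

    class NonLocalBlock3D(nn.Module):
        """Embedded-Gaussian non-local block: each voxel attends to every other voxel."""

        def __init__(self, channels, reduction=2):
            super().__init__()
            inter = channels // reduction
            self.theta = nn.Conv3d(channels, inter, kernel_size=1)  # queries
            self.phi = nn.Conv3d(channels, inter, kernel_size=1)    # keys
            self.g = nn.Conv3d(channels, inter, kernel_size=1)      # values
            self.out = nn.Conv3d(inter, channels, kernel_size=1)    # restore channel count

        def forward(self, x):
            b, _, d, h, w = x.shape
            n = d * h * w
            q = self.theta(x).view(b, -1, n)                      # B x C' x N
            k = self.phi(x).view(b, -1, n)                        # B x C' x N
            v = self.g(x).view(b, -1, n)                          # B x C' x N
            attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)   # B x N x N pairwise weights
            y = (v @ attn.transpose(1, 2)).view(b, -1, d, h, w)   # global context per voxel
            return x + self.out(y)                                # residual connection

    # Example: a feature map from a heavily downsampled CT scan.
    features = torch.randn(1, 32, 8, 16, 16)
    print(NonLocalBlock3D(32)(features).shape)  # torch.Size([1, 32, 8, 16, 16])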

With only a limited amount of training data available from COVID-19 subjects, Xie Weiyi et al. first trained and validated RTSU-Net on a cohort of 5000 subjects from the COPDGene study. Starting from the models pretrained on COPDGene, they applied transfer learning to retrain and evaluate RTSU-Net on 470 COVID-19 subjects. Experimental results show that RTSU-Net outperforms state-of-the-art baselines and performs robustly on cases with incomplete fissures and severe lung infection due to COVID-19.
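
The transfer-learning step follows the standard pretrain-then-fine-tune recipe. The sketch below illustrates it in PyTorch with a tiny stand-in network, dummy tensors, and a placeholder checkpoint path; the actual RTSU-Net architecture, weights, and data loaders are not part of this page.

    import torch
    import torch.nn as nn

    # Tiny stand-in for a pretrained lobe segmentation network
    # (6 output classes: background plus the pulmonary lobes).
    model = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv3d(16, 6, kernel_size=1),
    )

    # 1) Start from weights pretrained on the large COPDGene cohort
    #    (the checkpoint path below is a placeholder).
    # model.load_state_dict(torch.load("rtsu_net_copdgene.pth", map_location="cpu"))

    # 2) Fine-tune on the much smaller COVID-19 training set,
    #    typically with a lower learning rate than used for pretraining.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    scan = torch.randn(1, 1, 32, 64, 64)             # dummy CT patch
    labels = torch.randint(0, 6, (1, 32, 64, 64))    # dummy voxel-wise lobe labels
    loss = nn.functional.cross_entropy(model(scan), labels)
    loss.backward()
    optimizer.step()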

The image above displays a qualitative comparison of the proposed RTSU-Net segmentation (middle row) and ground truth (bottom row) in CT scans of COVID-19 patients. Blue: right upper lobe, light blue: right lower lobe, red: left upper lobe, green: left lower lobe.

The algorithm is now available on Grand Challenge, where users are free to run it on their own data sets.

Read more about RTSU-Net in the IEEE TMI paper, published this month.

More Research Highlights.

News

  • June 12, 2020 - During the Euroson 2020 webinar, Thomas van den Heuvel won the Young Investigator Award from the European Federation of Societies for Ultrasound in Medicine and Biology with his abstract entitled “Introducing prenatal ultrasound screening in resource-limited settings using artificial intelligence”.
  • March 18, 2020 - The defense of Midas Meijs' PhD thesis, titled 'Automated Image Analysis and Machine Learning to Detect Cerebral Vascular Pathology in 4D-CTA', has been postponed because of COVID-19. A new date will follow.

More News.