The Diagnostic Image Analysis Group is part of the Departments of Radiology, Nuclear Medicine and Anatomy, Pathology, Ophthalmology, and Radiation Oncology of Radboud University Medical Center. We develop computer algorithms to aid clinicians in the interpretation of medical images and improve the diagnostic process.
The group has its roots in computer-aided detection of breast cancer in mammograms, and we have since expanded to automated detection and diagnosis in breast MRI, ultrasound and tomosynthesis, chest radiographs and chest CT, prostate MRI, neuro-imaging, retinal imaging, pathology, and radiotherapy. Deep learning is the primary technology we use.
It is our goal to have a significant impact on healthcare by bringing our technology to the clinic. We are therefore fully certified to develop, maintain, and distribute software for the analysis of medical images in a quality-controlled environment (MDD Annex II and ISO 13485), and we collaborate closely with many companies that use our technology in their products.
On this site you will find information about the history of the group and our collaborations, an overview of the people in DIAG, current projects, publications and theses, contact information, and information for those interested in joining our team.
Automated pulmonary lobe segmentation in computed tomography scans remains an open problem, especially for scans with substantial abnormalities, such as those caused by COVID-19 infection. Convolution kernels in recently proposed networks respond only to local information within their effective receptive field, which may be insufficient to capture all necessary contextual information.
Weiyi Xie and colleagues argue that contextual information is critically important for accurate delineation of the pulmonary lobes, especially when the lungs are severely affected by diseases such as COVID-19 or COPD. They propose a contextual two-stage U-net (CTSU-Net) that leverages global context by introducing a first stage in which the receptive field encompasses the entire scan and by using a novel non-local neural network module.
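At its core, a non-local module lets every spatial position attend to every other position in the feature map, so the response is no longer limited to a local receptive field. The sketch below is an illustrative NumPy version of that core self-attention operation; the function name, shapes, and weights are our own choices, not taken from the CTSU-Net implementation.

```python
import numpy as np

def nonlocal_block(x, w_theta, w_phi, w_g):
    """Minimal non-local (self-attention) operation on a flattened feature map.

    x: (N, C) array of N spatial positions with C channels each.
    Each output position aggregates features from ALL positions, weighted by
    pairwise similarity (embedded-Gaussian form), giving a global receptive field.
    """
    theta = x @ w_theta                          # queries, (N, C')
    phi = x @ w_phi                              # keys,    (N, C')
    g = x @ w_g                                  # values,  (N, C')
    logits = theta @ phi.T                       # (N, N) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over all positions
    return attn @ g                              # (N, C') globally mixed features

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                  # 16 positions, 8 channels
w = [rng.standard_normal((8, 4)) for _ in range(3)]
y = nonlocal_block(x, *w)
print(y.shape)  # (16, 4)
```

In a real segmentation network this operation would run on downsampled feature maps, since the attention matrix grows quadratically with the number of positions.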
With only a limited amount of training data available from COVID-19 subjects, Xie et al. first trained and validated CTSU-Net on a cohort of 5000 subjects from the COPDGene study. Transfer learning was then applied to retrain and evaluate CTSU-Net on 204 COVID-19 subjects, starting from the models pretrained on COPDGene. Experimental results show that CTSU-Net outperforms state-of-the-art baselines and performs robustly on cases with incomplete fissures and severe lung infection due to COVID-19.
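The pretrain-then-finetune strategy can be illustrated with a deliberately tiny stand-in: a linear least-squares model pretrained on a large "source" dataset and then fine-tuned on a small "target" dataset. All data, sizes, and hyperparameters here are illustrative, not from the study.

```python
import numpy as np

def train_linear(x, y, w0, lr=0.1, steps=200):
    """Gradient-descent least-squares fit, starting from initial weights w0."""
    w = w0.copy()
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(x)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
w_true = rng.standard_normal(5)

# "Source" task: plenty of data (stands in for COPDGene pretraining).
xs = rng.standard_normal((500, 5))
ys = xs @ w_true + 0.1 * rng.standard_normal(500)
w_pre = train_linear(xs, ys, np.zeros(5))

# "Target" task: a small related dataset (stands in for the COVID-19 cohort).
xt = rng.standard_normal((20, 5))
yt = xt @ w_true + 0.1 * rng.standard_normal(20)

# Few fine-tuning steps: from scratch vs. from the pretrained weights.
w_scratch = train_linear(xt, yt, np.zeros(5), steps=20)
w_finetune = train_linear(xt, yt, w_pre, steps=20)

err = lambda w: np.linalg.norm(w - w_true)
print(err(w_scratch), err(w_finetune))  # pretrained start typically ends up closer
```

The pretrained initialization starts near a good solution, so a small target dataset and few training steps suffice, which is the same reasoning behind pretraining CTSU-Net on COPDGene before retraining on the limited COVID-19 data.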
The image above displays a qualitative comparison of the proposed CTSU-Net segmentation (middle row) and ground truth (bottom row) in CT scans of COVID-19 patients. Blue: right upper lobe, light blue: right lower lobe, red: left upper lobe, green: left lower lobe.
The algorithm is now available on Grand Challenge, where users are free to use the algorithm on their own data sets.
Read more about CTSU-Net in the arXiv paper.