The Diagnostic Image Analysis Group is part of the Departments of Radiology and Nuclear Medicine, Pathology, and Ophthalmology of Radboud University Medical Center. We develop computer algorithms to aid clinicians in the interpretation of medical images and thereby improve the diagnostic process.
The group has its roots in computer-aided detection of breast cancer in mammograms, and we have expanded to automated detection and diagnosis in breast MRI, ultrasound and tomosynthesis, chest radiographs and chest CT, prostate MRI, neuro-imaging and the analysis of retinal and digital pathology images. The technology we primarily use is deep learning.
It is our goal to have a significant impact on healthcare by bringing our technology to the clinic. We are therefore fully certified to develop, maintain, and distribute software for the analysis of medical images in a quality-controlled environment (MDD Annex II and ISO 13485).
On this site you will find information about the history of the group and our collaborations, an overview of the people in DIAG, current projects, publications and theses, contact information, and information for those interested in joining our team.
Currently, the Gleason score, assigned by a pathologist after microscopic examination of cancer morphology, is the most powerful prognostic marker for prostate cancer patients. However, it suffers from significant inter- and intra-observer variability, limiting its usefulness for individual patients. This problem could be solved by the fully automated deep learning system developed by Wouter Bulten and his colleagues. Their work appeared online in The Lancet Oncology.
The system was developed using 5759 biopsies from 1243 patients. A semi-automatic labeling technique was used to circumvent the need for manual pixel-level annotations. The study covered the full range of Gleason grades and was evaluated on a large cohort of patients with an expert consensus reference standard, as well as on an external tissue microarray test set.
The figure above depicts the development of the deep learning system. The authors used a semi-automated method to label the training data (top row), removing the need for manual annotations by pathologists. The final system can assign Gleason growth patterns at the cell level and achieved high agreement with the reference standard (quadratic kappa 0.918). In a separate observer experiment, the deep learning system outperformed 10 of 15 pathologists in agreement with the reference standard. On the external test set, the system achieved an AUC of 0.977 for distinguishing benign from malignant biopsies and an AUC of 0.871 when using grade group 2 as a cut-off.
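The quadratic (weighted) kappa used here scores agreement between two graders while penalizing disagreements by the squared distance between grade groups, so confusing adjacent grades costs less than confusing distant ones. As a rough illustration of the metric itself (not the authors' code; function and variable names are ours), a minimal pure-Python implementation could look like:

```python
def quadratic_kappa(rater_a, rater_b, num_classes):
    """Quadratic-weighted Cohen's kappa for integer labels 0..num_classes-1."""
    n = len(rater_a)
    # Observed confusion matrix between the two raters.
    observed = [[0.0] * num_classes for _ in range(num_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1
    # Marginal label counts for each rater.
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(num_classes))
              for j in range(num_classes)]
    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted disagreement expected by chance
    for i in range(num_classes):
        for j in range(num_classes):
            # Quadratic weight: squared grade distance, normalized.
            w = (i - j) ** 2 / (num_classes - 1) ** 2
            num += w * observed[i][j]
            den += w * hist_a[i] * hist_b[j] / n
    return 1.0 - num / den

# Perfect agreement yields kappa = 1; chance-level agreement yields ~0.
print(quadratic_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # → 1.0
```

A value of 0.918, as reported for the system against the consensus reference standard, therefore indicates near-expert agreement on an ordinal grading scale.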
Click here to try Wouter's algorithm on your own data, and learn more about the project on automated Gleason grading.