Out-of-distribution (OOD) detection is an important aspect of deep learning-based medical imaging approaches for ensuring the safety and accuracy of diagnostic tools. In this paper, we investigate the effectiveness of three self-supervised learning techniques for OOD detection on both a labeled RadboudXR dataset and a clinical dataset that contains OOD data but no labels. Specifically, we explore two predictive self-supervised techniques and one contrastive self-supervised technique and evaluate their ability to detect OOD samples. Furthermore, we evaluate the performance of the state-of-the-art vision transformer model on medical data, both as a standalone method and as the backbone of a self-supervised task. Our results indicate that the contrastive self-supervised method Bootstrap Your Own Latent (BYOL) and the vision transformer model were not effective at detecting OOD samples. The predictive methods, however, performed well on both 2D and 3D data and scaled with task difficulty. These findings suggest the potential utility of self-supervised learning techniques for OOD detection in medical imaging. However, when determining an OOD cut-off value for clinical use, the separation between datasets remains problematic. These challenges suggest that further research is needed before these techniques can be adopted in clinical practice.
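The abstract does not specify how the output of a self-supervised task is converted into an OOD decision; the sketch below is an illustrative example only, not the authors' implementation. It assumes a predictive proxy task whose per-sample loss is used as the OOD score, with a cut-off chosen from in-distribution validation data; all function names and values are hypothetical.

    import numpy as np

    def ood_scores(per_sample_losses):
        """OOD score = loss of the self-supervised proxy task (higher = more OOD)."""
        return np.asarray(per_sample_losses, dtype=float)

    def choose_cutoff(in_dist_scores, quantile=0.95):
        """Pick a cut-off so that roughly 95% of in-distribution validation samples pass."""
        return float(np.quantile(in_dist_scores, quantile))

    def flag_ood(scores, cutoff):
        """Samples whose proxy-task loss exceeds the cut-off are flagged as OOD."""
        return scores > cutoff

    # Hypothetical proxy-task losses for in-distribution validation data and a suspected OOD test set.
    val_losses = np.random.normal(loc=0.5, scale=0.1, size=1000)
    test_losses = np.random.normal(loc=1.2, scale=0.3, size=200)

    cutoff = choose_cutoff(ood_scores(val_losses))
    print(f"cut-off: {cutoff:.3f}, flagged OOD: {flag_ood(ood_scores(test_losses), cutoff).mean():.1%}")

The cut-off step is exactly where the separation problem described in the abstract arises: if the in-distribution and OOD score distributions overlap, no single threshold yields a clinically acceptable trade-off.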
Self-supervised Out-of-Distribution detection for medical imaging
R. Geurtjens, D. Peeters and C. Jacobs
Master's thesis, 2023.