EyeNED workstation: Development of a multi-modal vendor-independent application for annotation, spatial alignment and analysis of retinal images

H. van Zeeland, J. Meakin, B. Liefers, C. González-Gonzalo, A. Vaidyanathan, B. van Ginneken, C.C.W. Klaver and C.I. Sánchez

in: Association for Research in Vision and Ophthalmology, 2019

Abstract

Purpose: Researchers and specialists in the field of ophthalmology currently rely on suboptimal vendor-specific software solutions for viewing and annotating retinal images. Our goal was to develop a fully featured, vendor-independent application that allows researchers and specialists to visualize multi-modal retinal images, perform spatial alignment and annotation, and review outputs of artificial intelligence (AI) algorithms.

Methods: The application consists of a web-based front-end that allows users to analyze baseline and follow-up images in a multi-modal viewer. It communicates with a back-end interface that handles grader authentication and the loading and storing of images and annotation data. Several types of annotation techniques are available, ranging from image-level classification to point-based and region-based lesion-level annotations.
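The three annotation types could be represented with a data model along the following lines. This is a minimal sketch only; the class names, fields, and label strings are illustrative assumptions, not the actual EyeNED schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class AnnotationKind(Enum):
    """The annotation granularities described in the abstract (names assumed)."""
    IMAGE_LEVEL = "image-level"   # classification of the whole image
    POINT = "point"               # a single lesion location
    REGION = "region"             # an outline enclosing a lesion

@dataclass
class Annotation:
    kind: AnnotationKind
    grader: str                   # the authenticated grader who made the annotation
    label: str                    # e.g. "geographic atrophy" or "drusen"
    # IMAGE_LEVEL: empty; POINT: one (x, y); REGION: polygon vertices
    points: List[Tuple[float, float]] = field(default_factory=list)
```

Storing the grader with each annotation is what later allows annotations from several graders to be compared side by side when creating a consensus grading.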

The user can select color fundus (CF) images, optical coherence tomography (OCT) volumes, infrared (IR) and autofluorescence (AF) images to be shown simultaneously in the viewer. Spatial alignment of the different modalities can be performed with an integrated affine registration method: the user clicks corresponding landmarks in the images, after which a synchronized cursor appears across the aligned modalities. After several graders have annotated lesions, the application can be used to compare their annotations and create a consensus grading.
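Landmark-based affine registration of this kind is typically solved as a linear least-squares problem: each pair of clicked landmarks contributes two equations, and three or more pairs determine the six affine parameters. A minimal sketch of this standard approach (not the EyeNED implementation itself; function names are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Estimate a 2D affine transform mapping src landmarks onto dst landmarks.

    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    Returns a 3x3 homogeneous transform matrix solved by least squares.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = src.shape[0]
    # Each landmark pair contributes two rows: one for x', one for y'.
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)                     # [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    T = np.eye(3)
    T[0, :] = params[0:3]
    T[1, :] = params[3:6]
    return T

def map_point(T, p):
    """Map a cursor position through the fitted transform,
    as a synchronized cursor between modalities would do."""
    q = T @ np.array([p[0], p[1], 1.0])
    return q[0], q[1]
```

Once `T` is fitted from the clicked landmarks, every cursor movement in one modality can be mapped through `map_point` to position the synchronized cursor in the other.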

Results: The application was used by graders and researchers in the EyeNED research group. Region-based annotations of geographic atrophy were made for 313 studies containing 488 CF images and 68 OCT images, and of drusen in 100 OCT b-scans. Semi-automatic annotation of the area of central retinal atrophy in Stargardt disease was performed for 67 AF images. Point-based annotation was carried out on lesions in 50 CF images of diabetic retinopathy patients. The multi-modal viewing and localisation of lesions were perceived as particularly helpful in the grading of lesions and in consensus discussions.

Conclusions: A software solution has been developed that assists researchers and specialists in viewing and annotating retinal images. The application was successfully used for annotating lesions in various imaging modalities, facilitating the grading of images in large studies and the collection of annotations for AI solutions.