This work focuses on the development of two deep learning models applied to chest x-rays. The first model, Imagesorter, provides a solution for sorting chest x-ray images when metadata is unavailable or unreliable. This is frequently the case when working with large collections of radiographs and can make obtaining reliable data very time consuming. Specifically, the algorithm returns four properties of an image: the type of image presented, rotation (whether the image is rotated), inversion (whether the grayscale values of the radiograph are inverted), and orientation (whether a lateral chest x-ray is mirrored). Nearly 30,000 radiographs were gathered and used to train, validate, and test a deep convolutional neural network: a ResNet50 pretrained on ImageNet and finetuned on the chest x-ray dataset, with the architecture modified to return all four properties at once. The model achieved very good results on the test set and can be considered a valid tool for efficiently exploring and sorting large x-ray collections.

The second model, Endotracheal-Tube, detects the presence of an endotracheal tube in a chest x-ray. Many automated methods require collections of chest x-rays in which an endotracheal tube is present, and the presented algorithm can help gather reliable data from large collections in a short amount of time. A large dataset was created for the project, together with a preprocessing method that crops a square region of the image where the tube lies. Four models were trained, validated, and tested on the same dataset to determine the best one; an InceptionV3 network pretrained on ImageNet and finetuned on the dataset achieved the best results (AUC = 0.993).
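The multi-output modification of Imagesorter can be pictured as a shared convolutional backbone feeding four separate classification heads, so that one forward pass yields all four properties. The sketch below is a hypothetical Keras implementation under assumed class counts, input size, and losses; it is not the project's exact architecture.

```python
# Minimal sketch (assumptions, not the project's code): ResNet50 backbone
# pretrained on ImageNet with four output heads, one per image property.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.models import Model

NUM_IMAGE_TYPES = 3  # assumed, e.g. frontal / lateral / other

inputs = Input(shape=(224, 224, 3))  # grayscale x-rays replicated to 3 channels
backbone = ResNet50(weights="imagenet", include_top=False, input_tensor=inputs)
features = GlobalAveragePooling2D()(backbone.output)

# One head per property returned by the sorter.
image_type  = Dense(NUM_IMAGE_TYPES, activation="softmax", name="image_type")(features)
rotation    = Dense(1, activation="sigmoid", name="rotation")(features)     # rotated or not
inversion   = Dense(1, activation="sigmoid", name="inversion")(features)    # grayscale inverted or not
orientation = Dense(1, activation="sigmoid", name="orientation")(features)  # lateral view mirrored or not

model = Model(inputs=inputs, outputs=[image_type, rotation, inversion, orientation])
model.compile(
    optimizer="adam",
    loss={
        "image_type": "categorical_crossentropy",
        "rotation": "binary_crossentropy",
        "inversion": "binary_crossentropy",
        "orientation": "binary_crossentropy",
    },
)
```

Sharing the backbone keeps inference to a single pass per radiograph, which matters when sorting very large collections.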
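For the tube-detection model, a plausible setup consistent with the description is an ImageNet-pretrained InceptionV3 with a single sigmoid output, evaluated with AUC on the cropped square region. The snippet below is such a sketch under assumed input size and training settings, not the actual implementation.

```python
# Minimal sketch (assumptions): InceptionV3 backbone with a binary
# "tube present" head, monitored with AUC as in the reported evaluation.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Input
from tensorflow.keras.metrics import AUC
from tensorflow.keras.models import Model

inputs = Input(shape=(299, 299, 3))  # cropped tube region, resized and replicated to 3 channels
backbone = InceptionV3(weights="imagenet", include_top=False, input_tensor=inputs)
features = GlobalAveragePooling2D()(backbone.output)
tube_present = Dense(1, activation="sigmoid", name="tube_present")(features)

model = Model(inputs=inputs, outputs=tube_present)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # training data pipeline not shown
```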
Both projects are part of OpenCXR, an open-source library developed by the Chest X-Ray teams at the Diagnostic Image Analysis Group at the Radboud University Medical Center, Nijmegen, The Netherlands.