Automated pulmonary lobe segmentation in computed tomography scans is still an open problem, especially for scans with substantial abnormalities, such as in COVID-19 infection. Convolution kernels in recently presented networks only respond to local information within the scope of their effective receptive field, and this may be insufficient to capture all necessary contextual information.
Xie Weiyi and colleagues argue that contextual information is critically important for accurate delineation of the pulmonary lobes, especially when the lungs are severely affected by diseases such as COVID-19 or COPD. They propose a relational approach (RTSU-Net) that leverages global context by introducing a first stage in which the receptive field encompasses the entire scan and by using a novel non-local neural network module.
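A non-local module lets every position's output depend on features at all other positions, not just those inside a local kernel. A minimal NumPy sketch of such a self-attention-style operation (illustrative only, not the authors' exact RTSU-Net module; the projection matrices `w_theta`, `w_phi`, `w_g` stand in for learned weights):

```python
import numpy as np

def non_local_block(x, w_theta, w_phi, w_g):
    """Self-attention over all positions: each output is a weighted sum
    of features from every location, plus a residual connection."""
    theta = x @ w_theta                              # queries, (N, d)
    phi = x @ w_phi                                  # keys,    (N, d)
    g = x @ w_g                                      # values,  (N, C)
    attn = theta @ phi.T / np.sqrt(theta.shape[1])   # (N, N) affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over all positions
    return x + attn @ g                              # global context added to x

rng = np.random.default_rng(0)
N, C, d = 64, 16, 8                                  # 64 positions, 16 channels
x = rng.standard_normal((N, C))
y = non_local_block(x,
                    rng.standard_normal((C, d)),
                    rng.standard_normal((C, d)),
                    rng.standard_normal((C, C)))
print(y.shape)                                       # same shape as x
```

Because the affinity matrix covers all position pairs, the output at each voxel can draw on the whole scan, which is the kind of global context a stack of local convolutions struggles to capture.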
With only a limited amount of training data available from COVID-19 subjects, Xie Weiyi et al. first trained and validated RTSU-Net on a cohort of 5000 subjects from the COPDGene study. Starting from the models pretrained on COPDGene, transfer learning was then used to retrain and evaluate RTSU-Net on 470 COVID-19 subjects. Experimental results show that RTSU-Net outperforms state-of-the-art baselines and performs robustly on cases with incomplete fissures and severe lung infection due to COVID-19.
The image above displays a qualitative comparison of the proposed RTSU-Net segmentation (middle row) and ground truth (bottom row) in CT scans of COVID-19 patients. Blue: right upper lobe, light blue: right lower lobe, red: left upper lobe, green: left lower lobe.
The algorithm is now available on Grand Challenge, where users are free to use the algorithm on their own data sets.
Read more about the RTSU-Net in the TMI paper, published this month.
The Gleason score is the most powerful prognostic marker for prostate cancer patients. Unfortunately, when pathologists assign this score by visually analyzing tissue slides, there is large inter- and intra-observer variability. Deep learning may alleviate this problem. Therefore, Wouter Bulten and his colleagues from DIAG developed an automated Gleason scoring system. The work appeared in The Lancet Oncology.
The figure above shows the development of the deep learning system. Data was labeled semi-automatically (top row), removing the need for manual annotations by pathologists. The final system assigns Gleason growth patterns at the cell level and achieved high agreement with the reference standard (quadratic kappa 0.918). In a separate observer experiment, the deep learning system outperformed 10 out of 15 pathologists in agreement with the reference standard. The system was validated on an external test set, where it achieved an AUC of 0.977 for distinguishing between benign and malignant biopsies.
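The quadratic kappa reported here weights disagreements by the squared distance between ordinal grades, so confusing adjacent grades costs less than confusing distant ones. A minimal NumPy sketch of the metric (illustrative only, not the study's evaluation code):

```python
import numpy as np

def quadratic_kappa(a, b, n_classes):
    """Quadratic weighted Cohen's kappa for two ordinal raters."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((n_classes, n_classes))             # observed confusion matrix
    for i, j in zip(a, b):
        O[i, j] += 1
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2  # penalties
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()           # chance agreement
    return 1.0 - (W * O).sum() / (W * E).sum()

print(quadratic_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0, perfect agreement
```

A kappa of 1 means perfect agreement, 0 means chance-level agreement, and values can go negative when raters disagree more than chance would predict.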
Click here to try Wouter's algorithm on your own data and learn more about the project on automated Gleason grading.
On the 16th of September the official opening event of the Thira Lab and Radboud AI for Health Lab took place in a packed Tuinzaal of Radboudumc. The Chair of the Executive Board Paul Smits opened the first two Nijmegen-based labs within the nationwide Innovation Center of Artificial Intelligence (ICAI).
Thira Lab is a collaboration between Radboudumc and Thirona, a spin-out company from Radboudumc, and Delft Imaging Systems, a company developing healthcare solutions for the specific needs of vulnerable communities around the world. In Thira Lab, nine Ph.D. candidates and post-docs from Radboudumc work on deep learning image analysis of CT scans, radiographs and retinal images.
Radboud AI for Health Lab is a new collaboration between Radboud University and Radboudumc, and is part of Radboud AI, a campus-wide initiative to improve collaboration and start new projects with AI researchers in Nijmegen. Radboud AI for Health has awarded 6 Ph.D. positions, aimed at bringing a variety of AI solutions to the clinic. Radboud AI for Health, located in the Radboudumc Innovation Space, will also house BSc and MSc students who perform AI research projects in collaboration with Radboudumc clinicians. Finally, the Lab offers courses to Radboudumc employees who would like to learn more about the application of AI in healthcare.
David Tellez et al published a new method to train neural networks on gigapixel whole-slide images directly, avoiding the need for fine-grained annotations. This approach allows the neural network to discover new predictive features by using automatically derived 'annotations' such as molecular biomarkers or patient outcome.
In their work, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, the authors present the neural image compression (NIC, top image) method, which opens the door to training neural networks using slide-level annotations obtained automatically, e.g., targeting molecular biomarkers or patient outcome. This approach allows the neural network to discover previously unknown visual features that are relevant for predicting the target at hand. NIC works in two steps. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, summarizing the image in a very efficient manner and reducing its size drastically while retaining most semantic information. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations.
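The two-step data flow can be sketched as follows. The encoder here is a fixed random projection, a hypothetical stand-in purely to show shapes and flow; the actual method trains a neural encoder unsupervised, and the final classifier is a CNN rather than the linear stub used below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 0: a toy "gigapixel" image, tiled into non-overlapping patches.
H, W, patch, C = 512, 512, 128, 64               # C = embedding size
image = rng.random((H, W))

# Step 1: compress every patch to a C-dim embedding (stand-in encoder:
# a fixed random projection; the real method trains this unsupervised).
proj = rng.standard_normal((patch * patch, C))
grid = np.stack([
    np.stack([image[i:i + patch, j:j + patch].ravel() @ proj
              for j in range(0, W, patch)])
    for i in range(0, H, patch)
])                                               # (4, 4, C) compressed image

# Step 2: a classifier now sees a 4x4xC array instead of 512x512 pixels,
# so a single slide-level label suffices for training (linear stub here).
w = rng.standard_normal(C)
slide_score = grid.reshape(-1, C).mean(axis=0) @ w
print(grid.shape)                                # (4, 4, 64)
```

The point of the compression step is that the label signal only has to be propagated through the small compressed grid, which is what makes slide-level supervision of gigapixel inputs tractable.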
David Tellez et al. compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. They found that NIC can successfully exploit visual cues associated with image-level labels, integrating both global and local visual information. Furthermore, they could visualize the regions of the WSIs that the classifier attended to, and confirmed that these overlapped with annotations from human experts.
During the 2nd edition of MIDL, Thomas de Bel won the Best Poster Award for his presentation on stain-transforming cycle-consistent generative adversarial networks (cycleGAN) for improved segmentation of renal histopathology.
Color variations in digital histopathological slides due to differences in tissue processing or scanning techniques can negatively affect the performance of deep learning applications. Thomas de Bel et al. applied cycleGANs for stain transformation between two centers and adapted the original cycleGAN architecture for improved training stability and performance, generating high-quality artificially stained images. The authors trained two segmentation networks for the analysis of renal tissue using single-center data: one with transformed images, and one without. Stain transformation proved beneficial for segmentation performance on data sets from both centers, raising the Dice coefficients from 0.36 to 0.85 and from 0.45 to 0.73. Read more about this work in the Proceedings of Machine Learning Research.
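The Dice coefficient used to report these gains measures the overlap between a predicted and a reference segmentation; a minimal NumPy version (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[:2] = True         # predicted mask, 8 pixels
b = np.zeros((4, 4), bool); b[1:3] = True        # reference mask, 8 pixels
print(dice(a, b))                                # 0.5: 4 shared of 16 total
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so the jump from 0.36 to 0.85 reflects a large improvement in how well the predicted renal structures match the reference.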
Last week, the final meeting of the AMI-project took place at the Radiology and Pathology departments of Radboud University Medical Center. The AMI-project was a close collaboration between the Diagnostic Image Analysis Group and the Fraunhofer Institute for Digital Medicine MEVIS. The aim of AMI was to develop a generic platform for automatic medical image analysis. Deep learning-based algorithms have successfully been developed for the automated analysis and registration of chest CTs, ophthalmology images and histological whole slide images. The web-based viewing system, developed specifically for, but not restricted to, this project, offers support for multiple radiology, ophthalmology and pathology image formats. The AMI-project was funded by the Radboud University, Radboud University Medical Center and the Fraunhofer Gesellschaft as an ICON-project, focusing on collaborative, interdisciplinary and international research.
Coronary artery calcium (CAC) and thoracic aorta calcium (TAC) scores derived from chest computed tomography might be useful biomarkers for individualized cardiovascular disease prevention and could be especially relevant in high-risk populations such as heavy smokers. DIAG's Nikolas Lessmann and a team from the UMC Utrecht investigated the prevalence and extent of CAC and TAC in male and female heavy smokers and assessed the difference in the association of CAC and TAC with cardiovascular and all-cause mortality in both groups. Convolutional neural networks were used to automatically detect and label calcifications according to the affected vascular bed, which enabled the inclusion of a large study population. Depicted in the bar diagram above are the median CAC and TAC volume in men (blue) and women (red) in different age groups; CAC was more common and more severe in men and developed later in women, but TAC developed equally in both sexes. More about the associations with cardiovascular mortality can be found in the paper that was published in JACC: Cardiovascular Imaging.
January 9, 2019, Thomas van den Heuvel defended his thesis on Automated low-cost ultrasound. He showed that a deep learning system can perform real-time detection of risk factors for pregnant women using the input from a low-cost ultrasound device. His work was covered by NOS op 3, national radio, Algemeen Dagblad, Medisch Contact, and RTL Z. Next month, Thomas will return to Ethiopia for further testing of his device.
As a result of the second ‘Onderzoek & Implementatie’ program call this year, 3 KWF grants were awarded this month to Radboud Imaging Research group members. As part of a consortium led by Mireille Broeders, Nico Karssemeijer, Ritse Mann and Jonas Teuwen will investigate the correlation of mammographic image features with pathological subtypes and prognosis. John Hermans and Henkjan Huisman will work together with Lodewijk Brosens on defining a vascular phenotype of pancreatic cancer. Last, Francesco Ciompi will work as project leader together with his team on the PROACTING project. In this project, Francesco, Jeroen van der Laak, Jelle Wesseling (NCI) and Esther Lips (NCI) aim to predict the response to neo-adjuvant treatment of breast cancer patients. All projects make use of deep learning techniques.
Peter Bandi and Oscar Geessink, organizers of CAMELYON17, challenged participants to move from individual metastasis detection (CAMELYON16) to classification of lymph node status at the patient level. Over 300 participants registered on the challenge website, of which 23 teams submitted a total of 37 algorithms before the deadline. The algorithmic details of the twelve best submissions are discussed in the paper that was accepted for publication in IEEE Transactions on Medical Imaging last August. Read here which architecture and methodology led to the best results and what pushed the highest kappa value from 0.89 to 0.93.
This week, the Radboud Science Award was awarded to Hanneke van Ouden, Thijs Eijsvogels, Jeroen van der Laak and Geert Litjens. In addition to recognizing excellent research, this award aims at connecting academic research to primary school teaching programs. Prior to and during the award ceremony, the winners were asked critical questions by the students of the participating primary schools. In the coming year, the winners will work on the development of teaching material together with their colleagues, the Radboud Wetenschapsknooppunt and teachers. Videos of the interviews and the award ceremony can be found here.
Manual counting of mitotic tumor cells in tissue sections constitutes one of the strongest prognostic markers for breast cancer. This procedure, however, is time-consuming and error-prone. David Tellez developed a method to automatically detect mitotic figures in H&E stained breast cancer tissue sections based on convolutional neural networks (CNNs). The image shows a selection of patches identified by the CNN as containing a mitotic figure. From 181 detections, 128 patches were classified as true positives by a resident pathologist, resulting in a precision score of 0.707. The work was published in IEEE Transactions on Medical Imaging.
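The reported precision follows directly from these counts: 128 confirmed detections out of 181 total. A tiny sketch of the arithmetic:

```python
def precision(true_positives, detections):
    """Precision = TP / (TP + FP), where FP = detections - TP."""
    return true_positives / detections

# 128 of the CNN's 181 detections were confirmed by the pathologist.
print(round(precision(128, 181), 3))  # 0.707
```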
Depicted above is part of the poster presented by Hans Pinckaers at the first edition of MIDL, held on the 4th-6th of July 2018 in Amsterdam, The Netherlands. The graphics nicely show his work on how to train a standard CNN with 8192x8192 inputs and a single label on only one GPU. For the code, check out GitHub.
Hans' poster was one of the 61 posters presented at the conference, next to an additional 21 oral presentations. Applicants representing 25 different countries submitted 122 papers and 99 abstracts, of which 41% and 35% were accepted respectively. The organizers thank everyone who attended MIDL 2018, and hope to see many of you at MIDL 2019 in London.
On the 14th of June the first Deep Learning Nijmegen Meetup is organized by the Nijmegen Data Science Centre. The program comprises two presentations given by inspiring speakers Marcel van Gerven (Radboud University) and Taco Cohen (Qualcomm/UvA), followed by a social event. Visit the Meetup website for time, location and registration; note that places are limited! This event is hosted by David Tellez and Jonas Teuwen.
On the 21st of April, the Hands-on with artificial intelligence (AI) workshop was hosted at the Radiology and Nuclear Medicine Department of Radboudumc. Participants were given the chance to try out software demos from 9 companies across 7 countries in order to experience the benefits of working with artificial intelligence in the field of radiology. Seventy radiologists saw not only how AI can be an invaluable aid in daily practice, but also how certain tasks can be taken over completely by the computer. Not only in the future, but today! Credits to the organizing team, collaborators, and participants for turning this day into the success that it was.
Anton Schreuder developed a risk model to determine whether participants in a lung cancer screening trial should return for a follow-up CT scan one or two years after the baseline scan. With the gradual implementation of screening programs across the world, applying this model is expected to greatly decrease the number of unnecessary scans and false positive outcomes at the expense of delaying relatively few diagnoses. The paper was published online in Thorax on March 30.
On the 8th of March, Mohsen Ghafoorian successfully defended his thesis, titled 'Machine Learning for Quantification of Small Vessel Disease Imaging Biomarkers', and received his Ph.D. His most recent paper, 'Location Sensitive Deep Convolutional Neural Networks for Segmentation of White Matter Hyperintensities', was published on the 16th of November in Nature Scientific Reports.
On the 12th of January, the paper by Jean-Paul Charbonnier et al., 'Automatic segmentation of the solid core and enclosed vessels in subsolid pulmonary nodules', was published in Nature Scientific Reports. Jean-Paul successfully defended his thesis, 'Segmentation and Quantification of Airways and Blood Vessels in Chest CT', in December 2017.
The Diagnostic Image Analysis Group organized Camelyon16, the first medical image analysis challenge with whole slide digital pathology images in 2016. The competition was a great success, and several of the submitted software solutions outperformed human pathologists in the detection of lymph node metastases. The results of Camelyon16 were published in JAMA and covered by the national Dutch television show Nieuwsuur.
Midas Meijs developed a pattern recognition method for full cerebral vessel segmentation in 4D CT using novel image features including the weighted temporal variance. The method requires no tracking or active contours and was demonstrated to be robust by evaluation on a large database of suspected stroke patients as seen in everyday clinical practice. The work was published in the November 2017 issue of Scientific Reports.
Deep learning has slowly pervaded every aspect of medical imaging. Recently, DIAG published an extensive review, titled ‘A Survey on Deep Learning in Medical Image Analysis’, in the journal Medical Image Analysis. It covers a significant part of the medical imaging field, ranging from radiology to pathology and ophthalmology. The review is subdivided into four main parts. The first part briefly introduces some general concepts in deep learning and some basic neural network architectures. Subsequently, we discuss several novel and interesting applications of deep learning with respect to specific tasks in medical image analysis, for example detection, segmentation, and image generation and enhancement. The third part focuses on different application areas. We give a thorough overview of all papers published for each specific area, such as brain imaging, digital pathology or color fundus images. Last, we provide some insight on current challenges and opportunities for deep learning in medical image analysis and shed some light on the potential application of novel architectures like generative adversarial networks and variational auto-encoders. We hope the paper can function as a primer both for medical image analysis researchers interested in applying deep learning algorithms to their work and for computer scientists who want to venture into medical imaging. You can download the survey from the following sites: arXiv and MEDIA
Jeroen van der Laak has contributed to a news article by the NOS about the success of a deep learning-based algorithm to automatically detect malignant lymph tissue in pathology slides. The algorithm, trained on data provided by the DIAG group, outperforms pathologists on this task. More info can be found on the Google Research Blog and in the resulting paper.
Click here to view a short AudioSlides presentation of Thomas van den Heuvel about the automatic detection of cerebral microbleeds in patients with traumatic brain injury. The presence of Cerebral Microbleeds (CMBs) may have prognostic value in Traumatic Brain Injury (TBI) patients. However, manually annotating CMBs in TBI patients is a time consuming task that is prone to errors, because CMBs are easily overlooked and are difficult to distinguish from blood vessels. A Computer Aided Detection system was developed that automatically detects CMBs in TBI patients. This work has been published in NeuroImage: Clinical.
Ajay Patel, Midas Meijs and Sil van de Leemput (4DCT group) have combined their latest results into one rendering (see above). The rendering is based on cranial cavity segmentation, vessel segmentation and white matter/gray matter/cerebrospinal fluid segmentation. This image has been submitted to this year's RSNA image contest in the category 'best medical image'. Voting will take place until October 31st and can be done via this link.
Reliable breast density measurement is needed to personalize screening. Katharina Holland investigated the consistency of BI-RADS density categories (1 to 4) in serial screening mammograms and compared the results to automated breast density measurements. Fewer density category changes occurred with automated assessment than with human assessment. The image shows a prior/current mammogram pair and the density scores given by the readers and the software. Her work has been published in The Breast.
MR Lymphography (MRL) is the most accurate imaging modality for the assessment of lymph node metastases in prostate cancer patients, but the interpretation of MRL images is a very time-consuming task for the radiologist. Oscar Debats improved a computer-aided detection system using anatomical information from multi-atlas registration. The new system finds small lymph nodes automatically. The figure shows: an example MRL slice (left), the old and new lymph node likelihood map (middle and right). The new map is much more accurate. His work appeared in the June 2016 issue of Medical Physics.
Convolutional neural networks (CNNs) are network architectures that are becoming increasingly popular in medical image analysis, but are computationally expensive to train. Mark van Grinsven has developed a method to improve and speed up CNN training. The method was applied to the automatic detection of hemorrhages in color fundus images. The figures show an example case (left), the annotations made by a human expert (middle) and the output of the automatic system (right). His work has been published in the Special Issue on Deep Learning of Transactions on Medical Imaging.
Breast cancer lesions might be overlooked or misinterpreted in breast screening programs with MRI. Albert Gubern-Mérida developed an automated system which is able to detect breast cancer lesions in MRI scans that were thought to be negative. The figure shows a cancer on the left breast that was detected during MRI screening in the current scan, but was missed in the previous screening round. The automated system was able to detect the cancer (red box) in both current and prior examinations. This work has been published in the European Journal of Radiology and was the topic of an article in AuntMinnie.
Detection of change between consecutive low-dose CT images is crucial in lung cancer screening. Visual comparison of CT scans is tedious and hence, automatic detection of change may aid human readers. Colin Jacobs developed an automatic system for detecting change between low-dose CT images using subtraction images. The figure shows a growing part-solid nodule with the current scan on the left, the prior scan on the right, and the subtraction image in the middle. This work was presented at the RSNA conference in 2015 and was the topic of an article on AuntMinnie.
Histopathology involves microscopic examination of stained histological slides to study the presence and characteristics of disease. Tissue sections are stained with multiple contrasting dyes to highlight different tissue structures and cellular features. This staining provides invaluable information to pathologists for diagnosing and characterizing various pathological conditions. Computer-aided diagnosis (CAD) can potentially alleviate shortcomings of human interpretation. However, variations in the color and intensity of hematoxylin and eosin (H&E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. Babak Ehteshami Bejnordi proposed an algorithm for standardizing whole-slide histopathological images. The proposed method is based on transformation of the chromatic and density distributions for each individual stain class in the hue-saturation-density (HSD) color model. The results of the standardization performed by the proposed algorithm are shown in the figure. The image shown in the top left was used as the template image. The images on the second row are example images that were stained in different laboratories. The standardized versions of these images are presented in the third row. More...
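In the HSD model, RGB intensities are first converted to per-channel optical densities, from which an overall density and two chromatic coordinates are derived; each stain then forms a compact distribution in that chromatic plane. A sketch of the forward transform (following the published HSD definition; the white level of 255 and the clipping are assumptions of this illustration, and the actual standardization then transforms distributions per stain class, which is not shown here):

```python
import numpy as np

def rgb_to_hsd(rgb, white=255.0):
    """Hue-Saturation-Density transform: per-channel optical density,
    overall density D, and chromatic coordinates (cx, cy)."""
    rgb = np.clip(np.asarray(rgb, float), 1.0, white - 1.0)  # avoid log(0)
    od = -np.log(rgb / white)                 # optical density per channel
    D = od.mean(axis=-1)                      # overall density
    cx = od[..., 0] / D - 1.0                 # chromatic x
    cy = (od[..., 1] - od[..., 2]) / (np.sqrt(3.0) * D)  # chromatic y
    return D, cx, cy

# A purple-ish pixel and a neutral gray pixel; gray maps to (cx, cy) = (0, 0).
D, cx, cy = rgb_to_hsd([[120, 60, 200], [200, 200, 200]])
print(D.shape)
```

Separating density from chroma is what makes per-stain standardization possible: chromatic distributions can be aligned to a template without touching the amount of stain.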
CT scans are three-dimensional images, reconstructed from many different projection images. Manufacturers of CT scanners have different software for reconstructing the images from the projection data, with many different settings, so the resulting images can look smooth (top left) or a bit sharper (top right). If you want to quantify disease with simple procedures like thresholding, as is common practice to get a measure of the severity of emphysema, the results (the number of colored pixels) are very dependent on the reconstruction settings. Leticia Gallardo Estrella proposed a simple procedure to standardize the sharpness of the reconstructed scans. The bottom row shows the standardized images, computed from the image directly above. Now the number of dark pixels is very comparable. We use this procedure to obtain more objective measurements of emphysema in studies with data from different scanners and different reconstruction settings. More...
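The thresholding procedure alluded to is typically an emphysema score such as the fraction of lung voxels below a fixed HU value; the -950 HU threshold used below is the common LAA-950 convention, an assumption of this sketch rather than a detail stated above. The synthetic example shows why sharpness matters: a noisier ("sharper") reconstruction pushes more voxels past the threshold:

```python
import numpy as np

def emphysema_score(hu, lung_mask, threshold=-950):
    """Fraction of lung voxels below the HU threshold (LAA-950 style)."""
    return float((hu[lung_mask] < threshold).mean())

rng = np.random.default_rng(0)
lung = rng.normal(-870, 60, size=(64, 64, 16))      # synthetic lung HU values
mask = np.ones(lung.shape, bool)
smooth = lung                                        # "soft" reconstruction
sharp = lung + rng.normal(0, 40, size=lung.shape)    # added high-frequency noise
print(emphysema_score(smooth, mask), emphysema_score(sharp, mask))
```

Identical lungs, different reconstructions, different scores: this is the bias that sharpness standardization removes before emphysema is quantified.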
Francesco Ciompi developed a general scale-invariant and rotation-invariant descriptor for objects in 3D images called Bag of Frequencies. He showed that this descriptor can be used to distinguish between true pulmonary nodules and false positives detected by a computer system to find nodules automatically, and that the descriptor can be used to classify which nodules are spiculated, an indication of malignancy. More...
The human lungs consist of five parts, the lobes (the right lung has three lobes, the left lung two). Segmenting these lobes in CT scans of the lungs is not a simple task. Bianca Lassen developed an automatic method to precisely delineate the lobes. She evaluated the method on the 55 scans of the publicly available LOLA11 data set. The renderings above illustrate the results on eight of these scans. More...
Computer-aided detection is expected to play an important role in facilitating the reading of automated 3D breast ultrasound images, which are increasingly used in breast cancer screening. To reduce the number of false positive detections outside of the breast, Tao Tan published a method in Medical Image Analysis to automatically locate the chest wall. The left images show manually annotated points on the surface of the ribs in the coronal and sagittal plane: points annotated on the current slice are shown as pink crosses, and points projected onto the current slice from other slices are shown in red. The right images show the scan with a dark shadow enhancement overlay in the coronal and sagittal plane. More...
Lung cancer, by far the most deadly cancer worldwide, usually becomes symptomatic only when it is already advanced. With low-dose CT scanning, lung cancer can be detected at an early stage, when it can still be treated successfully. Bram van Ginneken has been awarded a 1.5 million Euro VICI grant, in the NWO Vernieuwingsimpuls programme, for his proposal Lung CT Screening: More for Less. The goal of this project is to automate the reading of lung screening CT scans as much as possible, using computer detection algorithms and automatic volumetric segmentation of lung nodules, as illustrated above for one lung nodule that grows over a period of three years. From this analysis the probability that a suspicious lesion represents lung cancer can be accurately estimated and appropriate work-up for the patient can be determined. We will also develop an automatic computer algorithm to estimate the risk for cardiovascular and chronic obstructive lung disease from lung CT screening scans. All this information can be combined by an expert system to make a personal recommendation for the screening interval: not everybody needs a yearly CT. In this way we hope it will be possible to make screening both more effective and less costly.
Prostate cancer is the second most common cause of cancer death in men. Image registration tools are commonly used for image-guided interventions in prostate cancer. Wendy van de Ven extended a non-rigid surface-based registration method with biomechanical modeling, usable, for example, for MR-guided TRUS biopsies. By using biomechanical modeling, the internal prostatic deformation can be controlled better than with a regular surface-based registration method. The left image shows a T2-weighted MR image of the prostate before deformation, with internal anatomical prostate landmarks in blue. The middle image shows the prostate after deformation, with the real positions of the corresponding landmarks indicated in green and the registered landmarks in red after a regular non-rigid surface-based registration. The right image shows the result obtained after applying a non-rigid surface-based registration with biomechanical regularization. The registration error was significantly smaller when extending a surface-based prostate registration method with a biomechanical model. More...
The human lungs are divided into lobes, which are separated by a double layer of visceral pleura called the lobar fissures. These fissures are often incomplete, and it has been found that certain new treatments are less effective when this is the case. Measuring fissural completeness is therefore important, but visual assessment is time-consuming and tedious. Eva van Rikxoort developed a method to automatically detect the fissures and quantify their completeness from chest CT scans. The left image shows a coronal slice of a chest CT scan, the image on the right shows a visualization of the fissure completeness, where the detected fissure is indicated in yellow and the lobar boundary that is not delineated by a fissure is indicated in red. The automatic fissure completeness was tested on subjects with COPD and shown to perform as well as experienced radiologists. More...
If you take a picture and what you expect to see is not there, it could mean that something is wrong. Or... you took the picture at the wrong moment! In CT angiography, where contrast is injected in the blood vessels to see if a blood vessel is occluded, poor timing of the moment the 3D scan is acquired can lead to the wrong diagnosis. In the CTA scan on the left, the missing vessel at the location of the arrow may be completely occluded. Ewoud Smit developed a technique to derive a timing-invariant CTA (TI-CTA) from a 4D scan that makes a movie while the contrast enters and leaves the brain. On the TI-CTA, shown on the right, we can see that the vessel is not occluded, but apparently the timing of the CTA was wrong and the contrast arrived a little later. Smit's technique and the advantages it brings are presented in his recent Radiology paper. More...
Prostate cancer is the second most common cause of cancer death in men. At DIAG we are developing a CAD system that can detect prostate cancer in MRI studies. The left image shows an image from a typical T2-weighted MR series where a cancer is circled. The center image shows initial cancer likelihood on a per voxel basis. The right image shows the final output of the CAD system, where the cancer is segmented and a probability is given. This work was presented by Geert Litjens at the SPIE Medical Imaging Conference in February, 2012. More...
Tuberculosis is still a large healthcare problem in the world. The CAD4TB group in the Diagnostic Image Analysis Group is developing a CAD system for tuberculosis on chest radiographs. The left image shows a small lesion in the right upper lobe, the right image its corresponding detection by the CAD system. Laurens Hogeweg evaluated this system on a database of radiographs of homeless people from London, UK. This work was presented at the RSNA conference in 2011 and covered by AuntMinnie. More...
Mammographic breast density is a strong risk factor for breast cancer. Most studies measure breast density subjectively with a semi-automatic threshold method through a software package named Cumulus (middle image). Michiel Kallenberg developed a completely automatic method (right image) to assess breast density that corresponds excellently with Cumulus. More...
Magnetic Resonance Lymphography (MRL) is a promising new imaging technique for the detection of lymph node metastases. Oscar Debats developed two new methods for lymph node segmentation in MRL images. Two example lymph nodes are shown above, in coronal (cor), sagittal (sag), and transversal (tra) view. The two new methods, called ECC and PSAM, closely resemble the manual segmentations while existing methods tend to 'leak' out of the nodes, as shown here for the CCRG and GCS methods. More...
Color fundus images are widely used for screening and diagnosis of diabetic retinopathy. This task involves the detection and quantification of retinal lesions, such as hemorrhages and hard exudates. On the left a color fundus image with such lesions is displayed. Clarisa Sánchez developed a computer-aided diagnosis scheme that automatically detects retinal lesions on color fundus images, shown in the color overlay on the right, and determines if the patient should be referred to a specialist. More...
Automated 3D breast ultrasound (ABUS) is a new imaging technique that can help to detect early breast cancer. On the left a malignant lesion imaged with ABUS is displayed in coronal (top) and transversal view (bottom). Especially in the coronal view, spiculation can be observed. Tao Tan developed a computer-aided diagnosis scheme that computes a spiculation feature map, shown in color overlays on the right, and from this map determines the probability that a lesion is malignant. The system obtained very promising results in a dataset of 40 lesions including 20 cancers. More...
Geert Litjens has developed a method to simulate nodules on chest radiographs. Such nodules can be lung cancer and should not be missed. Computer-aided detection schemes may be improved if they can be trained with high-quality simulated nodules. Two of the four cases shown above are simulated. Click here to find out which ones. More...
On the left a normal chest radiograph. On the right the same radiograph, but with part of the fifth through the ninth posterior rib suppressed using a technique developed by Laurens Hogeweg. Suppressing the ribs makes it easier to analyze the texture of the lung parenchyma. More...