Fundus Image Analysis: Quality Evaluation and Vessel Segmentation

1. A Skeletal Similarity Metric for Quality Evaluation of Retinal Vessel Segmentation

Flowchart of the proposed skeletal similarity metric.

The most commonly used metrics for quality assessment of retinal vessel segmentation are sensitivity, specificity and accuracy, all of which rely on pixel-to-pixel matching. However, vessels annotated by different observers vary in both thickness and location (the inter-observer problem), so pixel-to-pixel matching is too restrictive to evaluate vessel segmentation results fairly. In this project, the proposed skeletal similarity metric compares the skeleton maps generated from the reference and the source vessel segmentation maps. To address the inter-observer problem, instead of pixel-to-pixel matching, each skeleton segment in the reference skeleton map is adaptively assigned a search range whose radius is determined by its vessel thickness; pixels in the source skeleton map that fall within this range are then selected for similarity calculation. The skeletal similarity combines a curve similarity, which measures the structural agreement between the reference and the source skeleton maps, and a thickness similarity, which measures the thickness consistency between the reference and the source vessel segmentation maps. Unlike metrics that provide only a global score for overall performance, we redefine true positive, false negative, true negative and false positive in terms of the skeletal similarity, from which sensitivity, specificity, accuracy and other objective measurements can be constructed. More importantly, the skeletal similarity metric is well suited for use as a pixel-wise loss function when training deep learning models for retinal vessel segmentation. Through a set of examples, we demonstrate that the redefined metrics based on the skeletal similarity are more effective for quality evaluation, especially in their tolerance of the inter-observer problem.
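The adaptive-search idea above can be sketched in a few lines. The toy function below (an illustration, not the paper's exact formulation; the function name, the per-pixel radius map, and the simple matched-fraction score are all assumptions) checks, for each reference-skeleton pixel, whether a source-skeleton pixel lies within its thickness-dependent search radius:

```python
import numpy as np

def adaptive_match_fraction(ref_skel, src_skel, radius_map):
    """Fraction of reference-skeleton pixels that have at least one
    source-skeleton pixel within their thickness-dependent radius.

    ref_skel, src_skel : 2D boolean arrays (skeleton maps)
    radius_map         : 2D float array; search radius per reference pixel,
                         assumed proportional to local vessel thickness
    """
    ref_pts = np.argwhere(ref_skel)
    src_pts = np.argwhere(src_skel)
    if len(ref_pts) == 0:
        return 1.0  # nothing to match
    if len(src_pts) == 0:
        return 0.0
    matched = 0
    for y, x in ref_pts:
        # distance from this reference pixel to every source-skeleton pixel
        d = np.hypot(src_pts[:, 0] - y, src_pts[:, 1] - x)
        if (d <= radius_map[y, x]).any():
            matched += 1
    return matched / len(ref_pts)
```

With a radius of one pixel, a skeleton shifted by one row still matches perfectly, which is exactly the tolerance that strict pixel-to-pixel matching lacks.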

Publication:

Zengqiang Yan, Xin Yang, and Kwang-Ting Cheng, "A Skeletal Similarity Metric for Quality Evaluation of Vessel Segmentation," IEEE Transactions on Medical Imaging, 2018.

 

2. Joint Segment-level and Pixel-wise Losses for Deep Learning based Retinal Vessel Segmentation

The proposed joint-loss framework.

Although deep learning based methods for retinal vessel segmentation have made enormous progress in recent years, they are all trained with pixel-wise losses, which treat all vessel pixels with equal importance in the pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, because the ratio between thick and thin vessels in fundus images is highly imbalanced (i.e. the majority of vessel pixels belong to thick vessels), a pixel-wise loss penalizes the misalignment of thick vessels more heavily than that of thin ones. Consequently, a pixel-wise loss limits the model's ability to learn effective features for segmenting thin vessels. Accurate segmentation of thin vessels, however, is important for the clinical diagnosis of some diseases, e.g. neovascularization detection for diabetic retinopathy. To address this problem, in this project we propose a new segment-level loss that emphasizes the thickness consistency of thin vessels during training. By jointly adopting segment-level and pixel-wise losses, thick and thin vessels are weighted more evenly in the loss calculation, and in turn more effective features can be learned for vessel segmentation without increasing the overall model complexity. Experimental results on public datasets demonstrate that the model trained with the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. Additionally, we evaluate the quality of the output probability map through a threshold-free vessel segmentation experiment, which shows that the joint-loss framework learns more distinguishable features for vessel segmentation. We believe the joint-loss strategy can be applied to other deep learning models for performance improvement without significantly changing the network architectures.
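The joint-loss idea can be illustrated with a minimal numpy sketch. The segment-level term below, which scores each vessel segment by the gap between its predicted and annotated vessel mass, and the weighting factor `alpha` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def pixel_loss(pred, target, eps=1e-7):
    # standard pixel-wise binary cross-entropy over the probability map
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def segment_loss(pred, target, segments):
    # one score per vessel segment, so a thin segment counts as much as
    # a thick one regardless of how few pixels it contains
    per_segment = []
    for mask in segments:  # each mask: 2D boolean array for one segment
        diff = abs(pred[mask].sum() - target[mask].sum())
        per_segment.append(diff / mask.sum())
    return float(np.mean(per_segment))

def joint_loss(pred, target, segments, alpha=0.5):
    # alpha balances the two terms (hypothetical value)
    return pixel_loss(pred, target) + alpha * segment_loss(pred, target, segments)
```

The key design point is that averaging per segment rather than per pixel removes the bias toward thick vessels: a thin segment's thickness error is no longer drowned out by the far larger pixel count of thick vessels.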

Qualitative results of the joint-loss deep learning model on different datasets.

Publication:

Zengqiang Yan, Xin Yang, and Kwang-Ting Cheng, "Joint Segment-level and Pixel-wise Losses for Deep Learning based Retinal Vessel Segmentation," IEEE Transactions on Biomedical Engineering, 2018.

 

3. A Three-stage Deep Learning Model for Accurate Retinal Vessel Segmentation

The proposed three-stage framework.

Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. Existing deep learning models attempt to segment both types of vessels simultaneously using a unified pixel-wise loss that treats all vessel pixels with equal importance. Because the ratio between thick and thin vessels is highly imbalanced (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss is dominated by thick vessels while thin vessels contribute relatively little, often leading to low segmentation accuracy for thin vessels. To address this imbalance, in this project we propose to segment thick and thin vessels separately with a three-stage deep learning model. The vessel segmentation task is divided into three stages: thick vessel segmentation, thin vessel segmentation and vessel fusion. Since better discriminative features can be learned when thick and thin vessels are segmented separately, this design minimizes the negative influence of their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying non-vessel pixels and improving the overall vessel thickness consistency. Experiments on the public datasets DRIVE, STARE and CHASE_DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms the current state-of-the-art vessel segmentation methods.
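The three-stage pipeline can be sketched as follows. This is a structural illustration only: the stage models are passed in as callables, and the element-wise-maximum fusion used here is a placeholder, since in the paper the fusion stage is itself a learned network that also removes non-vessel pixels and refines thickness consistency:

```python
import numpy as np

def three_stage_segment(image, thick_net, thin_net, fuse):
    """Sketch of the three-stage pipeline: thick and thin vessels are
    segmented by separate models, then merged by a fusion stage."""
    thick_prob = thick_net(image)        # stage 1: thick vessel segmentation
    thin_prob = thin_net(image)          # stage 2: thin vessel segmentation
    return fuse(thick_prob, thin_prob)   # stage 3: vessel fusion

def max_fusion(thick_prob, thin_prob, threshold=0.5):
    # placeholder fusion: element-wise maximum of the two probability
    # maps followed by thresholding into a binary vessel map
    return (np.maximum(thick_prob, thin_prob) >= threshold).astype(np.uint8)
```

Separating the stages lets each model specialize: the thin-vessel model is trained without thick-vessel pixels dominating its loss, which is the imbalance the unified pixel-wise loss suffers from.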

Qualitative results of the three-stage deep learning model on different datasets.

Publication:

Zengqiang Yan, Xin Yang, and Kwang-Ting Cheng, "A Three-stage Deep Learning Model for Accurate Retinal Vessel Segmentation," IEEE Journal of Biomedical and Health Informatics, 2018.