Microaneurysms segmentation and diabetic retinopathy detection by learning discriminative representations

Mhd Hasan Sarhan, Shadi Albarqouni, Mehmet Yigitsoy, Nassir Navab, Abouzar Eslami

Research output: Contribution to journal › Article › peer-review



Deep learning techniques are increasingly used in fundus image analysis and diabetic retinopathy detection. Microaneurysms are important indicators of diabetic retinopathy progression. The authors introduce a two-stage deep learning approach for microaneurysm segmentation that uses multiple scales of the input together with selective sampling and an embedding triplet loss. The proposed approach consists of a region-proposal fully convolutional neural network trained on segmented patches, followed by a patch-wise refinement network that improves the hypotheses suggested by the first stage. To enhance the discriminative power of the second-stage refinement network, the authors use a triplet embedding loss with a selective sampling routine that dynamically assigns sampling probabilities to patches of the oversampled class. This approach yields a 23.5% relative improvement over a vanilla fully convolutional neural network on the segmentation subset of the Indian Diabetic Retinopathy Image Data set. The proposed segmentation is incorporated into a classification model to solve two downstream tasks: diabetic retinopathy detection and referable diabetic retinopathy detection. The classification models are trained on the Kaggle diabetic retinopathy challenge data set and evaluated on the Messidor data set. The authors show that adding the segmentation enhances classification performance and achieves results comparable to state-of-the-art models.
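The second-stage refinement described in the abstract rests on two ideas: a triplet embedding loss, which pulls same-class patch embeddings together while pushing different-class embeddings apart, and loss-driven selective sampling over the oversampled class. A minimal sketch of both, assuming Euclidean distances in the embedding space and a softmax-style weighting (the paper's exact sampling scheme may differ; `margin` and `temperature` are illustrative parameters, not values from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors.

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    i.e. same-class patches closer than different-class patches.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    # Hinge: zero loss once the negative is farther by at least `margin`
    return max(0.0, d_pos - d_neg + margin)

def selective_sampling_probs(patch_losses, temperature=1.0):
    """Assign higher sampling probability to harder (higher-loss) patches.

    A softmax over per-patch losses is one simple way to realize
    dynamic sampling probabilities; the paper's routine may differ.
    """
    w = np.exp(np.asarray(patch_losses, dtype=float) / temperature)
    return w / w.sum()
```

With this weighting, patches the refinement network currently gets wrong are drawn more often in subsequent epochs, concentrating training on hard examples from the oversampled class.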

Original language: English
Pages (from-to): 4571-4578
Number of pages: 8
Journal: IET Image Processing
Issue number: 17
State: Published - 24 Dec 2020


