    CLINICAL REPORT GUIDED RETINAL MICROANEURYSM DETECTION WITH MULTI-SIEVING DEEP LEARNING


    Abstract

    Timely detection and treatment of microaneurysms is a critical step in preventing the development of vision-threatening eye diseases such as diabetic retinopathy. However, detecting microaneurysms in fundus images is a highly challenging task due to low image contrast, misleading cues from other red lesions, and large variation in imaging conditions. Existing methods tend to fail in the face of the large intra-class variation and small inter-class variation in microaneurysm detection. Recently, hybrid text/image mining computer-aided diagnosis (CAD) systems have emerged, offering a promise of bridging the semantic gap between images and diagnostic information. In this paper, we focus on developing an interleaved deep mining technique to cope intelligently with the unbalanced microaneurysm detection problem. Specifically, we present a clinical report guided multi-sieving convolutional neural network (MS-CNN) which leverages a small amount of supervised information in clinical reports to identify potential microaneurysm regions via an image-to-text mapping in the feature space. These potential microaneurysm regions are then interleaved with fundus image information for multi-sieving deep mining in a highly unbalanced classification problem. Critically, the clinical reports are employed to bridge the semantic gap between low-level image features and high-level diagnostic information. We build an efficient microaneurysm detection framework based on this hybrid text/image interleaving and validate its performance on challenging clinical datasets acquired from diabetic retinopathy patients. Extensive evaluations are carried out in terms of fundus detection and classification. Experimental results show that our framework achieves 99.7% precision and 87.8% recall, comparing favorably with state-of-the-art algorithms. The integration of expert domain knowledge and image information demonstrates the feasibility of reducing the difficulty of training classifiers under extremely unbalanced data distributions.
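
    The precision and recall figures quoted above follow the standard detection definitions. A minimal Python sketch; the confusion counts in the usage example are hypothetical, chosen only to reproduce rates close to the reported ones, since the paper's true counts are not given here:

        def precision_recall(tp, fp, fn):
            # Standard detection metrics:
            # precision = TP / (TP + FP), recall = TP / (TP + FN).
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            return precision, recall

        # Hypothetical counts: 878 true positives, 3 false positives,
        # 122 false negatives -> precision ~ 0.997, recall = 0.878.
        print(precision_recall(tp=878, fp=3, fn=122))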


    Existing System

    The existing system detects microaneurysm candidates by matching the image against Gaussian kernels and computing a multi-scale correlation coefficient.
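
    This page does not spell the baseline out further, but the multi-scale correlation-coefficient approach is commonly implemented as template matching against Gaussian kernels of several widths. A minimal NumPy/SciPy sketch, assuming the green channel of the fundus image as input; the window size and sigma values are illustrative, not taken from the paper:

        import numpy as np
        from scipy.ndimage import uniform_filter
        from scipy.signal import fftconvolve

        def gaussian_kernel(size, sigma):
            # Zero-mean, unit-norm 2-D Gaussian template for correlation matching.
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
            k -= k.mean()
            return k / np.linalg.norm(k)

        def multiscale_correlation(green, sigmas=(1.0, 1.5, 2.0), size=11):
            # Per-pixel maximum correlation coefficient between the inverted
            # green channel and Gaussian templates at several scales.
            img = green.astype(np.float64)
            img = img.max() - img            # microaneurysms appear as dark blobs
            mu = uniform_filter(img, size)
            var = uniform_filter(img ** 2, size) - mu ** 2
            local_norm = np.sqrt(np.maximum(var, 1e-12) * size * size)
            best = np.full_like(img, -1.0)
            for sigma in sigmas:
                k = gaussian_kernel(size, sigma)
                # Correlation equals convolution with the flipped template;
                # the zero-mean kernel cancels the local mean term.
                r = fftconvolve(img, k[::-1, ::-1], mode="same") / local_norm
                best = np.maximum(best, r)
            return best   # threshold this map to obtain candidate regions

    Thresholding the returned map yields candidate microaneurysm pixels, which the proposed system below then refines.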


    Proposed System

    The proposed system includes two parts: an automatic image-to-text mapping that generates expert-guided segmentation from clinical reports coupled with fundus images, and a multi-sieving convolutional neural network (MS-CNN) that filters false positives based on multimodal inputs, namely the expert-guided segmentation and the image data. Specifically, in the training stage, the image-to-text mapping model extracts keywords from expert annotations in clinical reports to establish a mapping between keywords and visual feature subspaces. This model maps visual features to the lesion types projected from the text, which yields an expert-guided segmentation of the fundus images. After that, convolutional neural networks are learned from the multimodal sources (the expert-guided segmentation and the fundus images), since they can automatically learn feature maps and perform end-to-end pixel-wise classification.
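
    The page does not give the exact MS-CNN architecture, so the following is a minimal PyTorch sketch of the sieving idea only: each sieve is a small CNN over a 4-channel candidate patch (the RGB fundus patch stacked with the expert-guided segmentation map), and a cascade of sieves discards background-confident candidates stage by stage. The layer widths, the 4-channel packing, and the 0.5 threshold are illustrative assumptions; the paper's network performs end-to-end pixel-wise classification rather than patch classification.

        import torch
        import torch.nn as nn

        class SieveCNN(nn.Module):
            # One sieving stage: classifies a candidate patch as
            # microaneurysm vs. background. Assumes patches of at least
            # 4x4 pixels; all layer widths are illustrative.
            def __init__(self, in_channels=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, 2)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        @torch.no_grad()
        def multi_sieve(stages, patches, threshold=0.5):
            # Cascade the trained sieves: each stage drops the candidates it
            # is confident are background; only survivors reach the next sieve.
            keep = torch.ones(len(patches), dtype=torch.bool)
            for stage in stages:
                if keep.sum() == 0:
                    break
                probs = torch.softmax(stage(patches[keep]), dim=1)[:, 1]
                survivors = probs >= threshold
                idx = keep.nonzero(as_tuple=True)[0]
                keep[idx[~survivors]] = False
            return keep   # mask of patches retained as microaneurysm detections

    In this sketch each later sieve would be trained on the false positives that survive the earlier ones, which is one common way to realize the multi-sieving idea under a highly unbalanced class distribution.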


    Architecture

    [Block diagram of the proposed MS-CNN framework]

