

    SALIENCY DETECTION BASED ON MULTISCALE EXTREMA OF LOCAL PERCEPTUAL COLOR DIFFERENCES


    Abstract

    Visual saliency detection is a useful technique for predicting which regions humans will tend to gaze upon in any given image. Over the last several decades, numerous algorithms for automatic saliency detection have been proposed and shown to work well on both synthetic and natural images. However, two key challenges remain largely unaddressed: (1) how to improve the relatively low predictive performance for images that contain large objects; and (2) how to perform saliency detection on a wider variety of images from various categories without training. In this work, we propose a new saliency detection algorithm that addresses these challenges. Our model first detects potentially salient regions based on multiscale extrema of local perceived color differences measured in the CIELAB color space. These extrema are highly effective for estimating the locations, sizes, and saliency levels of candidate regions. The local saliency candidates are further refined via two global extrema-based features, and then a Gaussian mixture is used to generate the final saliency map. Experimental validation on the extensive CAT2000 dataset demonstrates that our proposed method either outperforms or is highly competitive with prior approaches, and can perform well across different categories and object sizes, while remaining training-free.
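    To make the final step of this pipeline concrete, the sketch below renders a saliency map as a mixture of isotropic Gaussians placed at already-detected candidate regions. This is a minimal illustration under assumed conventions, not the authors' implementation: the function name render_saliency_map and the (row, col, sigma, weight) candidate format are invented for the example.

```python
import numpy as np

def render_saliency_map(shape, candidates):
    """Render a saliency map as a weighted sum of isotropic Gaussians.

    `candidates` is a list of (row, col, sigma, weight) tuples describing
    each candidate region's center position, scale, and saliency level.
    Illustrative stand-in for the Gaussian-mixture map-generation step.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    saliency = np.zeros(shape, dtype=float)
    for r, c, sigma, weight in candidates:
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        saliency += weight * np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize to [0, 1] so maps are comparable across images.
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

# Example: two candidate regions with different scales and saliency levels.
smap = render_saliency_map((240, 320), [(80, 100, 20.0, 1.0),
                                        (160, 240, 40.0, 0.6)])
```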


    Existing System

    • Saliency Detection via Models of Visual Processing
    • Saliency Detection Based on Frequency-Domain Features
    • Local Saliency via Perceived Color-Difference Extrema


    Proposed System

    The main contributions of this work are twofold. (1) We present a saliency detection technique that maintains high detection performance on images containing objects of various sizes and salient sub-objects by combining multiscale perceptual-color-difference extrema with a measure of global saliency based on directional rarity and color rarity. We examine the influence of the CIELAB color space on saliency detection within the framework of the proposed algorithm. The proposed approach allows us to determine and describe salient objects' center positions and scales based on a LoG or CenSurE multiscale decomposition; as we will demonstrate, this lets us better detect salient sub-objects while remaining competitive in overall detection performance. (2) Our CenSurE-based approach to saliency detection does not require training and has the advantage that there is no need to explicitly calculate the ∆L*, ∆a*, and ∆b* CIELAB differences, because approximations of these values naturally emerge from the multiscale decomposition. Our approach requires specifying the number of scales/sizes used in the multiscale decomposition; however, these choices are not critical as long as they are fixed at reasonable values. As we will show, detection performance remains largely the same when using a LoG-based decomposition versus the faster CenSurE approximation, suggesting that the specific choice of multiscale decomposition is not a critical part of our approach.
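    As a rough sketch of the multiscale candidate-detection step described above, the snippet below applies a scale-normalized Laplacian-of-Gaussian (LoG) filter to the L*, a*, and b* channels of a CIELAB image at several scales and keeps local maxima of the combined response as candidate centers and scales. The scale set, the per-pixel norm across channels, and the peak-detection thresholds are illustrative assumptions rather than the settings used in the paper, and the faster CenSurE approximation mentioned above is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.color import rgb2lab
from skimage.feature import peak_local_max

def multiscale_lab_extrema(rgb_image, sigmas=(2, 4, 8, 16)):
    """Detect candidate salient regions as multiscale LoG extrema in CIELAB.

    Returns a list of (row, col, sigma, response) tuples; the LoG responses
    of the L*, a*, and b* channels act as a multiscale proxy for the local
    perceptual color differences (roughly the ∆L*, ∆a*, ∆b* terms).
    """
    lab = rgb2lab(rgb_image)  # H x W x 3 array in CIELAB
    candidates = []
    for sigma in sigmas:
        # Scale-normalized LoG per channel; the norm over the three channels
        # gives one local color-difference response at this scale.
        log = np.stack([gaussian_laplace(lab[..., ch], sigma) * sigma ** 2
                        for ch in range(3)], axis=-1)
        response = np.linalg.norm(log, axis=-1)
        # Local maxima of the response mark candidate centers at this scale.
        for r, c in peak_local_max(response, min_distance=int(sigma),
                                   threshold_rel=0.3):
            candidates.append((r, c, float(sigma), float(response[r, c])))
    return candidates
```

    Each returned tuple could then feed a global refinement and map-generation stage such as the Gaussian-mixture sketch shown after the abstract.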


    Architecture


    BLOCK DIAGRAM

