
    Projects > ELECTRONICS > 2019 > IEEE > DIGITAL IMAGE PROCESSING

    D3R-NET: DYNAMIC ROUTING RESIDUE RECURRENT NETWORK FOR VIDEO RAIN REMOVAL


    Abstract

    In this paper, we address the problem of video rain removal by considering rain occlusion regions, i.e., regions with very low light transmittance through rain streaks. Unlike additive rain streaks, in such occlusion regions the background details are completely lost. We therefore propose a hybrid rain model that depicts both rain streaks and occlusions. Integrating this hybrid model with useful motion-segmentation context information, we present a Dynamic Routing Residue Recurrent Network (D3R-Net). D3R-Net first extracts spatial features with a residual network. These spatial features are then aggregated by recurrent units along the temporal axis. During temporal fusion, the context information is embedded into the network in a "dynamic routing" way: a set of recurrent units takes responsibility for temporal fusion in given contexts, e.g., rain or non-rain regions, and in a given forward or backward pass one of these units is mainly activated. A context selection gate then detects the context and selects one of the temporally fused features generated by these recurrent units as the final fused feature. Finally, this feature plays the role of a "residue feature": it is combined with the spatial feature and used to reconstruct the negative rain streaks. In D3R-Net, we incorporate two context variables: a motion segmentation, denoting whether a pixel belongs to fast-moving edges, and a rain type indicator, denoting whether a pixel belongs to rain streaks, rain occlusions, or non-rain regions. Extensive experiments on a series of synthetic and real rain videos verify both the superiority of the proposed method over the state of the art and the effectiveness of our network design and each of its components.
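The "dynamic routing" idea in the abstract can be sketched in a few lines: several recurrent units, each specialized for one context, and a gate that routes each temporal step through the unit matching the detected context. This is only a toy NumPy sketch under stated assumptions; the unit functions, their coefficients, and the hard gate are all hypothetical stand-ins for the learned components in D3R-Net.

```python
import numpy as np

def rain_unit(h, x):
    # Hypothetical recurrent update specialized for rain-streak regions.
    return np.tanh(0.5 * h + x)

def occlusion_unit(h, x):
    # Hypothetical recurrent update specialized for rain-occlusion regions.
    return np.tanh(0.9 * h + x)

def background_unit(h, x):
    # Hypothetical recurrent update specialized for non-rain regions.
    return np.tanh(0.1 * h + x)

UNITS = [rain_unit, occlusion_unit, background_unit]

def context_gate(context_label):
    # Stand-in for the learned context selection gate: here the given
    # context label (0 = rain streak, 1 = occlusion, 2 = non-rain)
    # directly picks one recurrent unit.
    return UNITS[context_label]

def dynamic_routing_fuse(frame_features, contexts):
    """Fuse per-frame spatial features along the temporal axis, routing
    each step through the recurrent unit selected for its context."""
    h = np.zeros_like(frame_features[0])
    for x, c in zip(frame_features, contexts):
        h = context_gate(c)(h, x)
    return h  # the temporally fused "residue feature"
```

In the real network the gate is learned and operates on features rather than given labels, but the control flow is the same: only one unit's output is propagated per pass.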


    Existing System

    Prior approaches rely on frequency-domain representations, sparse representations, and Gaussian mixture models.
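These prior approaches share the purely additive rain assumption that the paper's hybrid model extends. In its simplest per-frame form (written here as an assumption; the exact notation of the hybrid model is not given in this listing):

```latex
O_t = B_t + S_t
```

where $O_t$ is the observed frame at time $t$, $B_t$ the clean background, and $S_t$ the rain-streak layer, so deraining amounts to estimating $B_t = O_t - S_t$. Rain occlusion regions break this assumption because the background term is lost entirely, which motivates the hybrid model above.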


    Proposed System

    This method proposes a novel hybrid video rain model that covers various rain cases, including rain occlusions. In rain occlusion regions, the background pixels are completely replaced by rain; this regional information is then embedded into the proposed method for video deraining. This is the first method to address video rain removal with deep recurrent networks. Specifically, a D3R-Net is proposed. Rain streaks appear randomly among frames, whereas the motion of the background is tractable. Considering this, recurrent neural networks (RNNs) are employed to encode the information of adjacent background frames from their degraded observations, obtaining representative features for deraining. Furthermore, D3R-Net uses spatial-temporal residue learning, in which the temporally fused feature plays the role of a "residue feature". Based on the refined hybrid rain model, and on context variables commonly used in previous works, D3R-Net is seamlessly integrated with motion segmentation and a rain type indicator in a "dynamic routing" framework. The core idea is that the network components have several copies, each specialized for rain removal in a given context; in each training or testing iteration, the network is then constructed dynamically based on the detected context. This "dynamic routing" framework, together with the added contexts, leads to a performance gain.
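The spatial-temporal residue learning described above can be sketched end to end: extract spatial features per frame, fuse them over time, treat the fused result as a residue feature, and decode negative rain streaks that are added back to the input. Every function below is a hypothetical stand-in (a mean-subtraction "extractor", a running-average "fusion", a thresholding "decoder") chosen only to make the data flow concrete; none of them is the paper's actual implementation.

```python
import numpy as np

def spatial_features(frame):
    # Stand-in for the residual-network spatial feature extractor.
    return frame - frame.mean()

def temporal_fuse(features):
    # Stand-in for the gate-selected recurrent fusion: a simple running
    # average plays the role of the temporally fused "residue feature".
    fused = np.zeros_like(features[0])
    for f in features:
        fused = 0.5 * fused + 0.5 * f
    return fused

def derain(frames):
    """Hypothetical end-to-end pass: the residue feature is combined with
    the current frame's spatial feature, decoded into negative rain
    streaks, and added back to the input frame."""
    feats = [spatial_features(f) for f in frames]
    residue = temporal_fuse(feats)
    combined = feats[-1] + residue
    # Toy decoder: treat above-average responses as streaks and negate them.
    negative_streaks = -np.maximum(combined - combined.mean(), 0)
    return frames[-1] + negative_streaks
```

Because the decoded streaks are non-positive, the sketch can only darken pixels relative to the input, mirroring the intuition that removing rain removes bright streak intensity.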


    Architecture

    BLOCK DIAGRAM (figure not included in this listing)

