
    FACIAL EXPRESSION RECOGNITION BASED ON DEEP EVOLUTIONAL SPATIAL-TEMPORAL NETWORKS


    Abstract

    One key challenge of facial expression recognition is to capture the dynamic variation of facial physical structure from videos. In this paper, we propose a Part-based Hierarchical Bidirectional Recurrent Neural Network (PHRNN) to analyze the facial expression information of temporal sequences. Our PHRNN models facial morphological variations and the dynamical evolution of expressions, and is effective for extracting “temporal features” based on facial landmarks (geometry information) from consecutive frames. Meanwhile, to complement the still appearance information, a Multi-Signal Convolutional Neural Network (MSCNN) is proposed to extract “spatial features” from still frames. We use both recognition and verification signals as supervision to compute different loss functions, which help increase the variation between different expressions and reduce the differences within identical expressions. These deep Evolutional Spatial-Temporal Networks (composed of the PHRNN and the MSCNN) extract the partial-whole, geometry-appearance and dynamic-still information, effectively boosting the performance of facial expression recognition. Experimental results show that this method largely outperforms the state of the art. On three widely used facial expression databases (CK+, Oulu-CASIA and MMI), our method reduces the error rates of the previous best methods by 45.5%, 25.8% and 24.4%, respectively.
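    The PHRNN described above feeds per-part landmark sequences through bidirectional recurrent layers and fuses them hierarchically. Below is a minimal sketch of that idea, assuming PyTorch; the class names, the part grouping, the two-level hierarchy, the layer sizes and the seven-class output are illustrative assumptions rather than the authors' exact configuration.

    import torch
    import torch.nn as nn

    class PartBRNN(nn.Module):
        """Bidirectional RNN over the landmark sequence of one facial part (e.g. eyes, mouth)."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.rnn = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)

        def forward(self, x):                      # x: (batch, frames, in_dim)
            out, _ = self.rnn(x)                   # (batch, frames, 2 * hidden)
            return out

    class PHRNNSketch(nn.Module):
        """Hierarchical fusion of part-level BRNNs into a whole-face temporal feature."""
        def __init__(self, part_dims, hidden=64, num_classes=7):
            super().__init__()
            self.parts = nn.ModuleList([PartBRNN(d, hidden) for d in part_dims])
            self.fuse = nn.LSTM(2 * hidden * len(part_dims), hidden,
                                batch_first=True, bidirectional=True)
            self.cls = nn.Linear(2 * hidden, num_classes)

        def forward(self, part_seqs):              # list of (batch, frames, part_dim) tensors
            feats = [p(x) for p, x in zip(self.parts, part_seqs)]
            fused, _ = self.fuse(torch.cat(feats, dim=-1))
            return self.cls(fused[:, -1])          # expression logits from the last time step

    For example, if 68-point landmarks were split into eyebrow, eye, nose and mouth groups, part_dims would hold the flattened (x, y) dimensionality of each group; the split itself is a hypothetical choice here.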


    Existing System

    Existing approaches rely on hand-crafted appearance descriptors such as the Histogram of Oriented Gradients (HOG) and the Local Binary Pattern (LBP).
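    For reference, the sketch below shows how such hand-crafted descriptors are typically computed with scikit-image; the cell sizes, LBP neighbourhood and histogram binning are illustrative assumptions, not the settings of any specific prior work.

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def handcrafted_features(gray_face):
        """HOG + uniform-LBP descriptor for one grayscale face crop."""
        hog_vec = hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2), block_norm='L2-Hys')
        lbp = local_binary_pattern(gray_face, P=8, R=1, method='uniform')
        # 'uniform' LBP with P=8 yields pattern labels 0..9, so build a 10-bin histogram.
        lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
        return np.concatenate([hog_vec, lbp_hist])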


    Proposed System

    The main contributions of this proposed system are three-fold. Firstly, we propose a PHRNN model to extract dynamic geometry information. Landmarks are decomposed into different parts according to facial morphological variations, which helps model the dynamical evolution of expressions. Secondly, to complement the still appearance information, we propose an MSCNN model with both recognition and verification signals used as supervision. The two signals correspond to two different loss functions, which help increase the variation between different expressions and reduce the differences within identical expressions. Thirdly, the PHRNN and MSCNN complement each other to compose the Evolutional Spatial-Temporal Networks, which consider the partial-whole, geometry-appearance and dynamic-still information simultaneously. Experimental results demonstrate that our proposed method outperforms the previous best methods in facial expression recognition, with a large improvement on three widely used facial expression databases.
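    The recognition and verification signals mentioned above can be read as a classification loss plus a pairwise feature loss. The sketch below, assuming PyTorch, pairs a small CNN with a softmax recognition loss and a contrastive-style verification loss; the network depth, the 64x64 input size, the contrastive formulation, the margin and the weighting lam are illustrative assumptions rather than the exact MSCNN design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MSCNNSketch(nn.Module):
        """Small CNN producing a feature embedding plus expression logits."""
        def __init__(self, num_classes=7, feat_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.embed = nn.Linear(64 * 16 * 16, feat_dim)   # assumes 64x64 grayscale crops
            self.cls = nn.Linear(feat_dim, num_classes)

        def forward(self, x):                                # x: (batch, 1, 64, 64)
            feat = self.embed(self.conv(x).flatten(1))
            return feat, self.cls(feat)

    def multi_signal_loss(feat_a, logits_a, y_a, feat_b, y_b, margin=1.0, lam=0.5):
        """Recognition (cross-entropy) plus verification (contrastive) supervision."""
        recognition = F.cross_entropy(logits_a, y_a)
        dist = F.pairwise_distance(feat_a, feat_b)
        same = (y_a == y_b).float()
        verification = (same * dist.pow(2) +
                        (1 - same) * F.relu(margin - dist).pow(2)).mean()
        return recognition + lam * verification

    The verification term pulls embeddings of the same expression together and pushes different expressions apart by at least the margin, which matches the stated goal of reducing within-class differences and increasing between-class variation.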


    Architecture


    BLOCK DIAGRAM

    [Block diagram not reproduced here: the Evolutional Spatial-Temporal Networks combine the PHRNN branch (temporal geometry features from facial-landmark sequences) with the MSCNN branch (spatial appearance features from still frames).]
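    How the two branches are combined at the score level is not detailed on this page; a minimal sketch of one plausible fusion, assuming PyTorch and a simple weighted average of class probabilities (the weight alpha is a hypothetical parameter), is given below.

    import torch

    def fuse_predictions(phrnn_logits, mscnn_logits, alpha=0.5):
        """Weighted average of the temporal (PHRNN) and spatial (MSCNN) class scores."""
        p_temporal = torch.softmax(phrnn_logits, dim=-1)
        p_spatial = torch.softmax(mscnn_logits, dim=-1)
        return alpha * p_temporal + (1 - alpha) * p_spatial   # fused class probabilities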

