
    PEDESTRIAN DETECTION FOR AUTONOMOUS VEHICLE USING MULTI-SPECTRAL CAMERAS


    Abstract

    Pedestrian detection is a critical feature of autonomous vehicles and advanced driver assistance systems. This paper presents a novel instrument for pedestrian detection that combines stereo vision cameras with a thermal camera. A new dataset for vehicle applications is built from data recorded by a test vehicle driving on city roads. Data received from the multiple cameras are aligned using a trifocal tensor with pre-calibrated parameters. Candidates are generated from each image frame using sliding windows across multiple scales. A reconfigurable detector framework is proposed, in which feature extraction and classification are two separate stages. The input to the detector can be the color image, the disparity map, the thermal data, or any combination of them. When convolutional channel features (CCF) are used, feature extraction utilizes the first three convolutional layers of a pre-trained convolutional neural network, cascaded with an AdaBoost classifier. The evaluation results show that this significantly outperforms traditional histogram of oriented gradients (HOG) features. The proposed pedestrian detector with multi-spectral cameras achieves a 9% log-average miss rate.
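
    As a rough illustration of the candidate-generation step mentioned in the abstract, the Python sketch below slides a fixed-size window over an image at several scales. The window size, stride, and scale factors are placeholder assumptions for illustration, not values taken from the paper; each yielded patch would then be passed to the feature-extraction and classification stages.

        import numpy as np

        def resize_nearest(image, scale):
            # Nearest-neighbour resize by a scale factor (numpy only, for illustration).
            h, w = image.shape[:2]
            new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
            rows = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
            cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
            return image[rows][:, cols]

        def generate_candidates(image, window=(128, 64), stride=16,
                                scales=(1.0, 0.75, 0.5)):
            # Yield (scale, y, x, patch) for every sliding-window position at each scale.
            win_h, win_w = window
            for scale in scales:
                scaled = resize_nearest(image, scale)
                h, w = scaled.shape[:2]
                for y in range(0, h - win_h + 1, stride):
                    for x in range(0, w - win_w + 1, stride):
                        yield scale, y, x, scaled[y:y + win_h, x:x + win_w]

        # Example: count candidate windows for a single 480x640 greyscale frame.
        frame = np.zeros((480, 640), dtype=np.uint8)
        print(sum(1 for _ in generate_candidates(frame)))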


    Existing System

    K-nearest neighbour (KNN) and radial basis function (RBF) classifiers.


    Proposed System

    Thermal data are obtained from the thermal camera and reconstructed by point registration using the trifocal tensor. By aligning data from the multiple cameras, features can be extracted from each sensor using the same window or region of interest, which corresponds to the same real-world area or object. Instead of concatenating the features of the different data sources and training a single classifier, feature extraction and classification are performed independently for each data source before the decision fusion stage. The decision fusion stage uses the confidence scores of the classifiers, along with some additional constraints, to make the final decision. The proposed detector system can be reconfigured with different feature extraction and classification methods, such as HOG with SVM or CCF with AdaBoost. The decision fusion stage can utilize information from one or multiple classifiers.
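
    The Python sketch below gives one possible reading of the decision fusion stage: each data source (color, disparity, thermal) contributes a per-candidate classifier confidence score, and the fused decision averages the scores while additionally requiring a minimum number of agreeing sources. The fusion rule, thresholds, and source names are assumptions made for illustration; the paper only states that confidence scores and additional constraints are used.

        from typing import Dict

        def fuse_decisions(scores: Dict[str, float],
                           score_threshold: float = 0.5,
                           min_agreeing_sources: int = 2) -> bool:
            # Accept a candidate as a pedestrian when enough per-sensor classifiers
            # are confident and the average confidence is high enough.
            agreeing = sum(1 for s in scores.values() if s >= score_threshold)
            mean_score = sum(scores.values()) / len(scores)
            return agreeing >= min_agreeing_sources and mean_score >= score_threshold

        # Example: the color and thermal classifiers are confident, disparity is not.
        print(fuse_decisions({"color": 0.82, "disparity": 0.31, "thermal": 0.77}))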


    Architecture


    Block diagram of the proposed multi-spectral pedestrian detection system.

