This paper presents an object classification method that fuses vision and light detection and ranging (LIDAR) data for autonomous vehicles. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LIDAR point cloud is upsampled and converted into pixel-level depth information, which is then combined with Red Green Blue (RGB) data and fed into a deep CNN. From the integrated vision and LIDAR data, the proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment and is designed to achieve both high classification accuracy and low loss. Experimental results demonstrate the effectiveness and efficiency of the proposed object classification strategy.
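To make the fusion step concrete, the following is a minimal Python sketch of the pipeline described above: project a LIDAR point cloud into a sparse pixel-level depth map, densify it, and stack it with the RGB image. The projection matrix, image size, and nearest-neighbour interpolation are illustrative assumptions, not the paper's exact upsampling scheme.

```python
# Sketch of LIDAR-to-depth projection and RGB-D fusion. The camera matrix P,
# the image size, and nearest-neighbour upsampling are assumptions made for
# illustration; the paper's specific upsampling method is not reproduced here.
import numpy as np
from scipy.interpolate import griddata

def lidar_to_depth_map(points, P, h, w):
    """Project LIDAR points (N, 3) through a 3x4 matrix P into a sparse depth map."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    proj = pts_h @ P.T                                          # (N, 3)
    z = proj[:, 2]
    keep = z > 0                                                # points in front of the camera
    u = (proj[keep, 0] / z[keep]).astype(int)
    v = (proj[keep, 1] / z[keep]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[inside], u[inside]] = z[keep][inside]
    return depth

def upsample_depth(depth):
    """Fill empty pixels by nearest-neighbour interpolation (one simple choice)."""
    h, w = depth.shape
    vv, uu = np.nonzero(depth)
    if len(vv) == 0:
        return depth
    grid_v, grid_u = np.mgrid[0:h, 0:w]
    dense = griddata((vv, uu), depth[vv, uu], (grid_v, grid_u), method="nearest")
    return dense.astype(np.float32)

def fuse_rgbd(rgb, dense_depth):
    """Stack RGB (h, w, 3) with depth (h, w) into a 4-channel CNN input."""
    return np.dstack([rgb, dense_depth])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.uniform([-10, -2, 2], [10, 2, 40], size=(2000, 3))  # synthetic scan
    P = np.array([[700.0, 0, 320, 0], [0, 700.0, 120, 0], [0, 0, 1, 0]])
    sparse = lidar_to_depth_map(points, P, 240, 640)
    rgbd = fuse_rgbd(rng.random((240, 640, 3), dtype=np.float32),
                     upsample_depth(sparse))
    print(rgbd.shape)  # (240, 640, 4)
```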
The existing approach relies on a support vector machine (SVM) model for RGB-D-based object detection using histograms of oriented gradients (HOG) features.
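For comparison, a hedged sketch of such a HOG + SVM baseline is shown below using scikit-image and scikit-learn; the HOG parameters, image size, and toy labels are illustrative assumptions, not the original system's settings.

```python
# Illustrative HOG + SVM baseline, not the paper's exact configuration.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """Compute HOG descriptors for a batch of grayscale images (n, h, w)."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

rng = np.random.default_rng(0)
X = rng.random((40, 64, 64))          # toy grayscale object crops
y = rng.integers(0, 2, size=40)       # toy binary labels (e.g. vehicle / background)

clf = SVC(kernel="rbf").fit(hog_features(X), y)
print(clf.predict(hog_features(X[:5])))
```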
In this paper, we propose a deep learning-based approach that fuses vision and LIDAR data for object detection in the autonomous vehicle environment. On the one hand, we upsample the LIDAR point cloud and convert the upsampled points into a pixel-level depth feature map. On the other hand, we combine the RGB data with the depth feature map and feed the fused data into a CNN. On the basis of the integrated RGB and depth data, the deep CNN learns features from the raw input and obtains an informative feature representation for classifying objects in the autonomous vehicle environment. The proposed approach, in which visual data are fused with LIDAR data, exhibits higher classification accuracy than approaches using only RGB data or only depth data. During the training phase, the LIDAR information also accelerates feature learning and hastens the convergence of the CNN on the target task.
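The following is a minimal PyTorch sketch of a fusion CNN of this kind: a small convolutional classifier whose first layer accepts the 4-channel RGB-D input produced above. The architecture, layer sizes, and number of classes are assumptions for illustration, not the network used in the paper.

```python
# Minimal 4-channel (RGB + depth) CNN classifier; layers and sizes are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class RGBDNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),   # 4 input channels: R, G, B, depth
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global pooling -> (n, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                                 # x: (n, 4, h, w)
        f = self.features(x).flatten(1)
        return self.classifier(f)

model = RGBDNet()
rgbd = torch.randn(2, 4, 240, 640)                        # batch of fused RGB-D inputs
logits = model(rgbd)
print(logits.shape)                                       # torch.Size([2, 4])
```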
BLOCK DIAGRAM