H is a transformation matrix (or measurement matrix) that maps the object detection data space into our model space.

The next step is calculating the Kalman gain K, which in its standard form is:

K = P Hᵀ (H P Hᵀ + R)⁻¹

where P is the covariance of the state estimate and R is the covariance of the measurement noise.

The Kalman gain basically shows what to rely on: the estimated state or the detection data. If the Kalman gain is large, sensor data has more impact on the calculation of the new estimated state; if it is small, the filter relies more on the mathematical model of the estimated state.
The output of the Kalman filter is a fine-tuned trajectory of the pedestrian's direction. Prediction of the pedestrian's location over an extended period can be implemented by simply repeating step #1. Evidently, a long-range prediction is less accurate than a short one, as each new prediction step of the Kalman filter adds more uncertainty to the predicted data.
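The predict/update cycle described above can be sketched as follows. This is a minimal illustration, not the article's exact configuration: the constant-velocity state layout, time step, and noise covariances are illustrative assumptions.

```python
import numpy as np

# Sketch of a constant-velocity Kalman filter for pedestrian tracking.
# State x = [u, v, du, dv]: bounding-box position and its velocity.
# dt, Q, and R below are assumed values for illustration only.

dt = 1.0  # time step between camera frames

F = np.array([[1, 0, dt, 0],            # state transition:
              [0, 1, 0, dt],            # position += velocity * dt
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

H = np.array([[1, 0, 0, 0],             # measurement matrix: the detector
              [0, 1, 0, 0]], dtype=float)  # observes position only

Q = np.eye(4) * 0.01                    # process noise covariance
R = np.eye(2) * 1.0                     # measurement noise covariance

def predict(x, P):
    """Step 1: project the state and its uncertainty forward in time."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Step 2: correct the prediction with a detection z = [u, v]."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # blend prediction and measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Long-horizon prediction: repeat the predict step without updates;
# the uncertainty (covariance P) grows with every repetition.
x = np.array([100.0, 200.0, 2.0, 0.0])
P = np.eye(4)
for _ in range(5):
    x, P = predict(x, P)
```

Repeating `predict` without `update` is exactly the long-term forecasting mode: the position extrapolates along the estimated velocity while the trace of P grows, reflecting the accumulating uncertainty.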
Pedestrian collision prediction module

The general approach to pedestrian detection in cars also includes a pedestrian collision predictor. The module obtains raw data from the pedestrian detector, prediction data from the Kalman filter, and data from the road segmentation module.
The purpose of the module is to combine all this information and analyze the relative positions of the pedestrian, their predicted location, and the road. The output of the module is the probability of a car-pedestrian collision.
The following is a brief overview of the module's workflow:
1) Pedestrian step distance calculation

In real-world conditions, the distance between the car and the pedestrian, along with the car's speed, can easily be obtained from the vehicle's ADAS computer: pedestrian recognition with automotive RADAR or LIDAR hardware handles distance measurement to obstacles.
However, in this case that information was absent, so the distance d was derived from geometric calculations based on the pedestrian box's bottom line and the camera view angle (fig. 3). Since the apparent height of a pedestrian shrinks in inverse proportion to distance, the relation has the form:

d = C · ih / Bh

where Bh is the pedestrian bounding box height (predicted or detected) in pixels, ih is the image height in pixels, and C is a constant calculated manually for the specific car camera (data obtained for a specific test image dataset).

For the test experiment it is not important to define the absolute value of the distance; what matters is a distance metric that shows how far the pedestrian stands from the vehicle.
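The distance metric can be sketched as a small function. The inverse-proportional form `C * image_height / box_height` is an assumption reconstructed from the variable definitions above; the constant must be calibrated for the specific camera.

```python
# Hedged sketch of the monocular distance metric: taller bounding boxes
# mean closer pedestrians. The exact formula is an assumption; only the
# relative ordering of distances matters for the experiment.

def pedestrian_distance(box_height_px: float, image_height_px: float,
                        camera_constant: float) -> float:
    """Relative distance metric from bounding-box height (pixels)."""
    if box_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return camera_constant * image_height_px / box_height_px
```

For example, with C = 1.0 and a 720-pixel-high image, a 480-pixel box yields a smaller distance value than a 240-pixel box, matching the intuition that a taller box means a closer pedestrian.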
2) Collision prediction step

To calculate the collision probability, the bottom coordinates of the pedestrian bounding boxes are required, both as detected by the object detector and as predicted by the Kalman filter. The main idea of this step is to check whether the box's bottom line intersects the road segment.
In case of an intersection, the pedestrian (or the pedestrian's predicted location) stands in the vehicle's path, and a collision may happen.
To simplify, the collision probability was calculated from the ratio of the distance to the (predicted) pedestrian location to the total distance of the detected road available to the car,

where IoU is a function giving the intersection over union of the road segment and the pedestrian's predicted bounding box, bh is the pedestrian bounding box height (predicted or detected) in pixels, and rh is the road segment's maximum height in pixels.

An example of the algorithm's output is shown in pic. 2, with the results of automatic road segmentation highlighted in green. Each bounding box carries a confidence value: blue boxes show pedestrian detection confidence, while green and red boxes show collision probability. The box color also encodes the collision probability: red – high probability, green – low probability.
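This step can be sketched in code. The article names the ingredients (an IoU between road segment and pedestrian box, and the heights bh and rh) but the exact combination shown here is an illustrative assumption, not the authors' published formula.

```python
# Hedged sketch of the collision-probability step: probability is zero when
# the pedestrian box does not overlap the road segment, and otherwise grows
# with bh / rh (a taller box means a closer pedestrian). The gating-by-IoU
# and the bh/rh scaling are assumptions reconstructed from the description.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def collision_probability(ped_box, road_box, bh, rh):
    """Zero if the pedestrian box misses the road segment entirely;
    otherwise scaled by bh / rh so closer pedestrians score higher."""
    if iou(ped_box, road_box) == 0.0:
        return 0.0
    return min(1.0, bh / rh)
```

A threshold on this score can then drive the red/green box coloring: for instance, boxes above 0.5 rendered red (high collision probability), the rest green.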
The final output of the pedestrian collision detector (blue boxes – detections, green boxes – location predictions)

Conclusion
Automated driving requires computer vision solutions, particularly pedestrian detection and tracking with a moving camera. Such a solution is possible: a convolutional neural network allows for robust pedestrian tracking. An additional challenge for advanced automated driving, however, is the pedestrian location prediction algorithm, which can be based on pure machine learning techniques.
In this article, we have described the main components of pedestrian detection for autonomous vehicles and reviewed in depth the pedestrian location prediction algorithm based on the Kalman filter. The results we have achieved show that the listed mathematical algorithms allow building a single approach to real-time pedestrian detection and help avoid dangerous road situations in self-driving cars.
Contact our automotive experts at Intellias to learn more about the Kalman filter application for pedestrian recognition and the mathematical algorithms in autonomous vehicles.