Object tracking is a crucial task for autonomous systems, which require precise estimates of their surroundings to support decision making, planning, and control. High-resolution sensors such as radar and lidar are frequently used to estimate these surroundings, including not only the kinematics of objects, such as position and velocity, but also physical attributes such as dimensions and orientation. In this work, we propose a framework that eases the challenge of estimating extended objects by fusing radar and lidar data. In this framework, the radar data is processed by an extended object tracker, which can handle multiple measurements per object. The lidar data is preprocessed to extract bounding boxes, which are then tracked by a conventional “point target” tracker. We further propose to fuse the extended object tracks from the two systems using a decentralized fusion scheme. Our approach is evaluated on simulated sensor data in the context of autonomous driving in highway environments. Simulation results show that fusing extended object tracks from radar and lidar yields better tracking performance than either sensor alone.
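To illustrate the decentralized track-to-track fusion step, the sketch below uses covariance intersection, a common choice when the cross-correlation between two trackers' estimates is unknown. The abstract does not specify the exact fusion rule, so the fusion method, the four-dimensional state layout, the example track values, and the helper name `covariance_intersection` are all illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch: fusing one object's track from the radar-based extended
# object tracker with its track from the lidar point-target tracker.
# Covariance intersection is assumed here; the paper's fusion rule may differ.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two (mean, covariance) track estimates with unknown
    cross-correlation by searching the mixing weight w in [0, 1]
    that minimizes the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)   # fused covariance
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)  # fused mean
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Hypothetical tracks for one object; state = [x, y, vx, vy].
x_radar = np.array([10.2, 3.9, 24.8, 0.1])   # from the extended object tracker
P_radar = np.diag([0.8, 0.8, 0.3, 0.3])
x_lidar = np.array([10.0, 4.1, 25.3, -0.2])  # from the point-target tracker
P_lidar = np.diag([0.2, 0.2, 1.0, 1.0])

x_fused, P_fused = covariance_intersection(x_radar, P_radar, x_lidar, P_lidar)
print("fused state:", np.round(x_fused, 2))
print("fused position std:", np.round(np.sqrt(np.diag(P_fused)[:2]), 2))
```

In this toy setup, the fused estimate leans on the lidar track for position (where its covariance is tighter) and on the radar track for velocity, which mirrors the complementary strengths that motivate fusing the two sensors.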