A POV-based Highway Vehicle Trajectory Dataset and Prediction Architecture

V Katariya, GA Noghre, AD Pazho, H Tabkhi - arXiv preprint arXiv:2303.06202, 2023 - arxiv.org
Vehicle trajectory datasets that provide multiple points of view (POVs) can be valuable for various traffic safety and management applications. Despite the abundance of trajectory datasets, few offer a comprehensive and diverse range of driving scenes that capture multiple viewpoints of various highway layouts, merging lanes, and configurations. This limits their ability to capture the nuanced interactions between drivers, vehicles, and the roadway infrastructure. We introduce the Carolinas Highway Dataset (CHD, available at https://github.com/TeCSAR-UNCC/Carolinas_Dataset), a vehicle trajectory, detection, and tracking dataset. CHD is a collection of 1.6 million frames captured in highway-based videos from eye-level and high-angle POVs at eight locations across the Carolinas, with 338,000 vehicle trajectories. The locations, recording times, and camera angles were carefully selected to capture a variety of road geometries, traffic patterns, lighting conditions, and driving behaviors. We also present PishguVe (code available at https://github.com/TeCSAR-UNCC/PishguVe), a novel vehicle trajectory prediction architecture that uses attention-based graph isomorphism and convolutional neural networks. The results demonstrate that PishguVe outperforms existing algorithms, establishing a new state of the art (SotA) on bird's-eye, eye-level, and high-angle POV trajectory datasets. Specifically, it achieves 12.50% and 10.20% improvements in ADE and FDE, respectively, over the current SotA on the NGSIM dataset. Compared to the best-performing models on CHD, PishguVe achieves 14.58% lower ADE and 27.38% lower FDE on eye-level data, and improves ADE and FDE on high-angle data by 8.3% and 6.9%, respectively.
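For context on the reported metrics: in trajectory prediction, ADE and FDE conventionally denote the average and final Euclidean displacement between predicted and ground-truth positions over the forecast horizon. The sketch below illustrates those standard definitions; it is not the paper's evaluation code, and the (T, 2) array layout is an assumption.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error for one trajectory.

    pred, gt: (T, 2) arrays of predicted / ground-truth (x, y)
    positions over T future time steps (layout assumed for illustration).
    """
    # Per-step Euclidean distance between prediction and ground truth.
    dists = np.linalg.norm(pred - gt, axis=-1)
    ade = dists.mean()   # mean displacement over the horizon
    fde = dists[-1]      # displacement at the final step
    return ade, fde

# Example: a prediction drifting off a straight-line ground truth.
t = np.arange(1, 6, dtype=float)
gt = np.stack([t, np.zeros_like(t)], axis=-1)                 # (5, 2)
pred = gt + np.stack([np.zeros_like(t), 0.1 * t], axis=-1)    # growing y-offset
print(ade_fde(pred, gt))  # ADE ≈ 0.30, FDE = 0.50
```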
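The abstract names attention-based graph isomorphism as a core component of PishguVe. As a rough illustration of how attention can be folded into a GIN-style node update, here is a hypothetical PyTorch layer; the class name, tensor shapes, and design are assumptions for illustration, not PishguVe's actual architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

class AttentiveGINLayer(nn.Module):
    """GIN-style update with attention-weighted neighbor aggregation.

    Hypothetical sketch of the general idea named in the abstract;
    not the PishguVe layer.
    """
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)       # pairwise attention logits
        self.eps = nn.Parameter(torch.zeros(1))  # learnable GIN epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x, adj):
        # x: (N, dim) features for N interacting vehicles; adj: (N, N) 0/1 mask.
        n = x.size(0)
        adj = adj.clone()
        adj.fill_diagonal_(1)  # self-loops keep the softmax well defined
        # Score every ordered pair of nodes from their concatenated features.
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.score(pairs).squeeze(-1)               # (N, N)
        logits = logits.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=-1)                # attention weights
        agg = alpha @ x                                      # weighted neighbor sum
        # GIN update rule, with the plain neighbor sum replaced by attention.
        return self.mlp((1 + self.eps) * x + agg)
```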