Surround-view fisheye camera perception for automated driving: Overview, survey & challenges

VR Kumar, C Eising, C Witt… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Surround-view fisheye cameras are commonly used for near-field sensing in automated
driving. Four fisheye cameras on four sides of the vehicle are sufficient to cover 360° around …

Woodscape: A multi-task, multi-camera fisheye dataset for autonomous driving

S Yogamani, C Hughes, J Horgan… - Proceedings of the …, 2019 - openaccess.thecvf.com
Fisheye cameras are commonly employed to obtain a large field of view in surveillance,
augmented reality, and, in particular, automotive applications. In spite of their prevalence …

SynWoodScape: Synthetic surround-view fisheye camera dataset for autonomous driving

AR Sekkat, Y Dupuis, VR Kumar… - IEEE Robotics and …, 2022 - ieeexplore.ieee.org
Surround-view cameras are a primary sensor for automated driving, used for near-field
perception. They are among the most commonly used sensors in commercial vehicles, primarily …

X-Align: Cross-Modal Cross-View Alignment for Bird's-Eye-View Segmentation

S Borse, M Klingner, VR Kumar, H Cai… - Proceedings of the …, 2023 - openaccess.thecvf.com
The bird's-eye-view (BEV) grid is a common representation for the perception of road
components, e.g., drivable area, in autonomous driving. Most existing approaches rely on …
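To make the BEV representation concrete, here is a minimal sketch of rasterizing ego-frame points into a binary BEV occupancy grid. The grid extents, cell size, and function name are illustrative assumptions, not details taken from the X-Align paper:

```python
import numpy as np

def points_to_bev_grid(points_xy,
                       x_range=(0.0, 50.0),   # assumed forward extent, meters
                       y_range=(-25.0, 25.0), # assumed lateral extent, meters
                       cell=0.5):             # assumed cell size, meters
    """Rasterize ego-frame points (x forward, y left) into a binary
    BEV occupancy grid. All grid parameters are illustrative."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    # Convert metric coordinates to integer cell indices.
    xs = ((points_xy[:, 0] - x_range[0]) / cell).astype(int)
    ys = ((points_xy[:, 1] - y_range[0]) / cell).astype(int)
    # Drop points that fall outside the grid.
    keep = (xs >= 0) & (xs < nx) & (ys >= 0) & (ys < ny)
    grid[xs[keep], ys[keep]] = 1
    return grid
```

Real BEV segmentation networks predict such grids per semantic class from camera and/or lidar features; this sketch only shows the target representation itself.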

Near-field perception for low-speed vehicle automation using surround-view fisheye cameras

C Eising, J Horgan, S Yogamani - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Cameras are the primary sensor in automated driving systems. They provide high
information density and are optimal for detecting road infrastructure cues laid out for human …

Optical flow for autonomous driving: Applications, challenges and improvements

S Shen, L Kerofsky, S Yogamani - arXiv preprint arXiv:2301.04422, 2023 - arxiv.org
Optical flow estimation is a well-studied topic for automated driving applications. Many
outstanding optical flow estimation methods have been proposed, but they become …

Real-time joint object detection and semantic segmentation network for automated driving

G Sistu, I Leang, S Yogamani - arXiv preprint arXiv:1901.03912, 2019 - arxiv.org
Convolutional neural networks (CNNs) are successfully used for various visual perception
tasks, including bounding box object detection, semantic segmentation, optical flow, depth …

Monocular fisheye camera depth estimation using sparse lidar supervision

VR Kumar, S Milz, C Witt, M Simon… - 2018 21st …, 2018 - ieeexplore.ieee.org
Near-field depth estimation around a self-driving car is an important function that can be
achieved by four wide-angle fisheye cameras having a field of view of over 180°. Depth …
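As a concrete illustration of how a fisheye lens can cover more than 180°, here is a sketch of the equidistant fisheye projection model, r = f·θ. This is a common textbook model chosen for illustration; it is not necessarily the calibration model used in the papers above:

```python
import math

def equidistant_fisheye_project(X, Y, Z, f):
    """Project a 3D camera-frame point (Z along the optical axis) with
    the equidistant fisheye model r = f * theta. Unlike the pinhole
    model, points with theta > 90 degrees (Z <= 0) still map to a
    finite image radius, which is how fisheye lenses exceed 180 deg
    field of view. Illustrative sketch, not a calibrated model."""
    theta = math.atan2(math.hypot(X, Y), Z)  # angle from optical axis
    r = f * theta                            # radial distance in image
    phi = math.atan2(Y, X)                   # azimuth around the axis
    return r * math.cos(phi), r * math.sin(phi)
```

A point exactly sideways from the camera (θ = 90°) lands at radius f·π/2, and a point slightly behind the image plane still projects at a larger but finite radius, whereas the pinhole model diverges as θ approaches 90°.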

Adversarial attacks on multi-task visual perception for autonomous driving

I Sobh, A Hamed, VR Kumar, S Yogamani - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural networks (DNNs) have achieved impressive success in recent years across various
applications, including autonomous driving perception tasks. On the other …

LiMoSeg: Real-time bird's eye view based LiDAR motion segmentation

S Mohapatra, M Hodaei, S Yogamani, S Milz… - arXiv preprint arXiv …, 2021 - arxiv.org
Moving object detection and segmentation is an essential task in the autonomous driving
pipeline. Detecting and isolating static and moving components of a vehicle's surroundings …