Autonomous Vision: Members and Publications

Members

  • Guest Scientist (Autonomous Vision, Perceiving Systems)
  • Doctoral Researcher (Autonomous Vision)

Publications

Conference Paper: NEAT: Neural Attention Fields for End-to-End Autonomous Driving. Chitta, K., Prakash, A., Geiger, A. In IEEE/CVF International Conference on Computer Vision (ICCV), pp. 15773-15783, IEEE, October 2021 (Published)
Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial prerequisite for autonomous driving. We present NEural ATtention fields (NEAT), a novel representation that enables such reasoning for end-to-end Imitation Learning (IL) models. Our representation is a continuous function that maps locations in Bird's Eye View (BEV) scene coordinates to waypoints and semantics, using intermediate attention maps to iteratively compress high-dimensional 2D image features into a compact representation. This allows our model to selectively attend to relevant regions in the input while ignoring information irrelevant to the driving task, effectively associating the images with the BEV representation. NEAT nearly matches the state of the art on the CARLA Leaderboard while being far less resource-intensive. Furthermore, visualizing the attention maps for models with NEAT intermediate representations provides improved interpretability. On a new evaluation setting involving adverse environmental conditions and challenging scenarios, NEAT outperforms several strong baselines and achieves driving scores on par with the privileged CARLA expert used to generate its training data.
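
To make the core idea concrete, here is a minimal PyTorch sketch of a NEAT-style attention field: a query point in BEV coordinates iteratively attends over flattened image features, and the attended feature is decoded into semantics and a waypoint offset. The class name, layer sizes, and two-iteration loop are illustrative assumptions, not the paper's released architecture.

```python
# Minimal sketch of a NEAT-style neural attention field. All shapes and
# hyperparameters below are illustrative assumptions, not the paper's config.
import torch
import torch.nn as nn


class NeuralAttentionField(nn.Module):
    """Maps a BEV query point to semantics and a waypoint offset by
    iteratively attending over flattened 2D image-feature tokens."""

    def __init__(self, feat_dim=64, num_tokens=16, hidden_dim=128,
                 num_classes=4, num_iters=2):
        super().__init__()
        self.num_iters = num_iters
        # Attention MLP: (query point, current feature) -> scores over tokens.
        self.attn = nn.Sequential(
            nn.Linear(2 + feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_tokens),
        )
        # Decoder heads applied to the query plus the attended feature.
        self.semantics = nn.Linear(2 + feat_dim, num_classes)
        self.offset = nn.Linear(2 + feat_dim, 2)  # waypoint offset in BEV

    def forward(self, query_xy, tokens):
        # query_xy: (B, 2) BEV coordinates; tokens: (B, N, feat_dim) from an
        # image encoder (assumed given here).
        feat = tokens.mean(dim=1)  # initial aggregate over all tokens
        for _ in range(self.num_iters):
            scores = self.attn(torch.cat([query_xy, feat], dim=-1))
            weights = scores.softmax(dim=-1)            # (B, N)
            feat = torch.einsum('bn,bnd->bd', weights, tokens)
        joint = torch.cat([query_xy, feat], dim=-1)
        return self.semantics(joint), self.offset(joint)


# Toy usage: one query point against 16 image-feature tokens.
field = NeuralAttentionField()
sem, off = field(torch.rand(1, 2), torch.randn(1, 16, 64))
print(sem.shape, off.shape)  # torch.Size([1, 4]) torch.Size([1, 2])
```

Because the field is a continuous function of the query coordinates, it can be evaluated at any BEV location; only the compact attended feature, not the full image, is carried per query.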

Conference Paper: Multi-Modal Fusion Transformer for End-to-End Autonomous Driving. Prakash, A., Chitta, K., Geiger, A. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7073-7083, IEEE, 2021 (Published)
How should representations from complementary sensors be integrated for autonomous driving? Geometry-based sensor fusion has shown great promise for perception tasks such as object detection and motion forecasting. However, for the actual driving task, the global context of the 3D scene is key: for example, a change in traffic light state can affect the behavior of a vehicle geometrically distant from that traffic light. Geometry alone may therefore be insufficient for effectively fusing representations in end-to-end driving models. In this work, we demonstrate that existing sensor fusion methods underperform in the presence of a high density of dynamic agents and in complex scenarios that require global contextual reasoning, such as handling traffic oncoming from multiple directions at uncontrolled intersections. We therefore propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention. We experimentally validate the efficacy of our approach in urban settings involving complex scenarios using the CARLA urban driving simulator. Our approach achieves state-of-the-art driving performance while reducing collisions by 80% compared to geometry-based fusion.
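
The central mechanism, attention over the concatenated image and LiDAR tokens, can be sketched in a few lines of PyTorch. The channel count, single fusion stage, and class name below are assumptions for illustration; the actual TransFuser fuses features at several encoder resolutions.

```python
# Minimal sketch of TransFuser-style attention fusion. Channel counts, the
# token grid size, and the single fusion stage are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuses image and LiDAR-BEV feature maps with self-attention over
    their concatenated tokens, then adds the result back residually."""

    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, img_feat, lidar_feat):
        # img_feat, lidar_feat: (B, C, H, W) from separate conv encoders.
        b, c, h, w = img_feat.shape
        img_tok = img_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        lidar_tok = lidar_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        # Self-attention over both modalities' tokens jointly.
        fused = self.transformer(torch.cat([img_tok, lidar_tok], dim=1))
        img_out, lidar_out = fused.split(h * w, dim=1)
        # Residual connection back into each branch.
        img_out = img_feat + img_out.transpose(1, 2).reshape(b, c, h, w)
        lidar_out = lidar_feat + lidar_out.transpose(1, 2).reshape(b, c, h, w)
        return img_out, lidar_out


# Toy usage with 8x8 feature maps from both modalities.
fusion = AttentionFusion()
img, lidar = fusion(torch.randn(1, 64, 8, 8), torch.randn(1, 64, 8, 8))
print(img.shape, lidar.shape)  # both torch.Size([1, 64, 8, 8])
```

Because attention connects every image token to every LiDAR token, a distant traffic light visible only in the image can influence the BEV features, which is exactly the global context that purely geometric fusion lacks.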