

Object Scene Flow

[Figure] From top-left to bottom-right: the input image with the inferred segmentation into moving object hypotheses, the optical flow produced by our method, and the disparity and optical flow ground truth of our novel scene flow dataset.

Members

Moritz Menze
Perceiving Systems, Autonomous Vision

Guest Scientist
Autonomous Vision, Perceiving Systems

Publications

Conference Paper: Object Scene Flow for Autonomous Vehicles. Menze, M., Geiger, A. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 3061-3070, June 2015.
This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.
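The representation above can be made concrete with a small sketch (not the authors' code): each superpixel carries a 3D plane, each object carries rigid motion parameters (R, t), and the flow at a pixel follows by back-projecting onto the superpixel's plane, applying the object's motion, and re-projecting. The function name and the intrinsics matrix K are assumptions for illustration.

```python
import numpy as np

def induced_flow(px, plane, R, t, K):
    """Optical flow at pixel px induced by a 3D plane (n, d with n @ X = d)
    that moves rigidly by (R, t). Illustrative sketch only; K is an assumed
    camera intrinsics matrix."""
    n, d = plane
    ray = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    depth = d / (n @ ray)              # intersect the viewing ray with the plane
    X = depth * ray                    # 3D point on the superpixel's plane
    X2 = R @ X + t                     # apply the object's rigid motion
    proj = K @ X2                      # project into the second frame
    return proj[:2] / proj[2] - np.asarray(px, dtype=float)
```

For a static object (R = I, t = 0) the induced flow is zero, so the parameterization encodes the observation that most of an outdoor scene moves with only a handful of rigid motions.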

Conference Paper: Joint 3D Estimation of Vehicles and Scene Flow. Menze, M., Heipke, C., Geiger, A. In Proc. of the ISPRS Workshop on Image Sequence Analysis (ISA), 2015.
Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.
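The role of the index variable can be illustrated with a toy sketch of a discrete-continuous energy (hedged: the function, the cost terms, and all parameter names are hypothetical, not the paper's formulation). Superpixel i carries continuous plane parameters and a discrete object index; minimizing over the indices implicitly performs model selection, i.e. decides which object hypothesis explains each superpixel.

```python
import numpy as np
from itertools import product

def scene_energy(planes, obj_idx, data_res, edges, lam=1.0, mu=1.0):
    """Toy discrete-continuous energy. planes[i]: continuous plane parameters
    of superpixel i; obj_idx[i]: its discrete object index; data_res[i][k]:
    a (hypothetical) precomputed residual of superpixel i under object k.
    The pairwise term smooths plane parameters and pays a Potts penalty mu
    when neighbouring superpixels are assigned to different objects."""
    data = sum(data_res[i][obj_idx[i]] for i in range(len(planes)))
    smooth = sum(
        lam * float(np.sum((planes[i] - planes[j]) ** 2))
        + (mu if obj_idx[i] != obj_idx[j] else 0.0)
        for i, j in edges
    )
    return data + smooth

# Brute-force minimization over the index variables of two superpixels:
planes = [np.zeros(3), np.zeros(3)]
data_res = [[0.2, 1.0], [1.0, 0.1]]   # toy residuals for two object hypotheses
edges = [(0, 1)]
best = min(product(range(2), repeat=2),
           key=lambda idx: scene_energy(planes, list(idx), data_res, edges, mu=0.5))
```

In this toy instance the optimum assigns the two superpixels to different objects because the data gain outweighs the Potts penalty; with a larger mu they would collapse onto a single object, which is the model-selection behaviour the index variable provides.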