

2015


Exploiting Object Similarity in 3D Reconstruction

Zhou, C., Güney, F., Wang, Y., Geiger, A.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
Despite recent progress, reconstructing outdoor scenes in 3D from movable platforms remains a highly difficult endeavor. Challenges include low frame rates, occlusions, large distortions and difficult lighting conditions. In this paper, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes where buildings and vehicles often suffer from missing texture or reflections, but share similarity in 3D shape. We take advantage of this shape similarity by locating objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces as objects of similar shape benefit from all observations for the respective category. We evaluate our approach with respect to LIDAR ground truth on a novel challenging suburban dataset and show its advantages over the state-of-the-art.


pdf suppmat [BibTex]



FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation

Lenz, P., Geiger, A., Urtasun, R.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.


pdf suppmat video project [BibTex]



Towards Probabilistic Volumetric Reconstruction using Ray Potentials

(Best Paper Award)

Ulusoy, A. O., Geiger, A., Black, M. J.

In 3rd International Conference on 3D Vision (3DV), pages: 10-18, Lyon, October 2015 (inproceedings)

Abstract
This paper presents a novel probabilistic foundation for volumetric 3-d reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.


code YouTube pdf suppmat DOI Project Page [BibTex]



Displets: Resolving Stereo Ambiguities using Object Knowledge

Güney, F., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4165-4175, June 2015 (inproceedings)

Abstract
Stereo techniques have witnessed tremendous progress over the last decades, yet some aspects of the problem still remain challenging today. Striking examples are reflecting and textureless surfaces which cannot easily be recovered using traditional local regularizers. In this paper, we therefore propose to regularize over larger distances using object-category specific disparity proposals (displets) which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The proposed displets encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as non-local regularizer for the challenging object class 'car' into a superpixel based CRF framework and demonstrate its benefits on the KITTI stereo evaluation.


pdf abstract suppmat [BibTex]



Object Scene Flow for Autonomous Vehicles

Menze, M., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 3061-3070, IEEE, June 2015 (inproceedings)

Abstract
This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.


pdf abstract suppmat DOI [BibTex]



Optimizing Average Precision using Weakly Supervised Data

Behl, A., Mohapatra, P., Jawahar, C. V., Kumar, M. P.

IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 2015 (article)


[BibTex]



Joint 3D Object and Layout Inference from a single RGB-D Image

(Best Paper Award)

Geiger, A., Wang, C.

In German Conference on Pattern Recognition (GCPR), Lecture Notes in Computer Science, vol. 9358, pages: 183-195, Springer International Publishing, 2015 (inproceedings)

Abstract
Inferring 3D objects and the layout of indoor scenes from a single RGB-D image captured with a Kinect camera is a challenging task. Towards this goal, we propose a high-order graphical model and jointly reason about the layout, objects and superpixels in the image. In contrast to existing holistic approaches, our model leverages detailed 3D geometry using inverse graphics and explicitly enforces occlusion and visibility constraints for respecting scene properties and projective geometry. We cast the task as MAP inference in a factor graph and solve it efficiently using message passing. We evaluate our method with respect to several baselines on the challenging NYUv2 indoor dataset using 21 object categories. Our experiments demonstrate that the proposed method is able to infer scenes with a large degree of clutter and occlusions.


pdf suppmat video project DOI [BibTex]



Discrete Optimization for Optical Flow

Menze, M., Heipke, C., Geiger, A.

In German Conference on Pattern Recognition (GCPR), Lecture Notes in Computer Science, vol. 9358, pages: 16-28, Springer International Publishing, 2015 (inproceedings)

Abstract
We propose to look at large-displacement optical flow from a discrete point of view. Motivated by the observation that sub-pixel accuracy is easily obtained given pixel-accurate optical flow, we conjecture that computing the integral part is the hardest piece of the problem. Consequently, we formulate optical flow estimation as a discrete inference problem in a conditional random field, followed by sub-pixel refinement. Naive discretization of the 2D flow space, however, is intractable due to the resulting size of the label set. In this paper, we therefore investigate three different strategies, each able to reduce computation and memory demands by several orders of magnitude. Their combination allows us to estimate large-displacement optical flow both accurately and efficiently and demonstrates the potential of discrete optimization for optical flow. We obtain state-of-the-art performance on MPI Sintel and KITTI.


pdf suppmat project DOI [BibTex]



Joint 3D Estimation of Vehicles and Scene Flow

Menze, M., Heipke, C., Geiger, A.

In Proc. of the ISPRS Workshop on Image Sequence Analysis (ISA), 2015 (inproceedings)

Abstract
Three-dimensional reconstruction of dynamic scenes is an important prerequisite for applications like mobile robotics or autonomous driving. While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.


PDF [BibTex]



Exciting Engineered Passive Dynamics in a Bipedal Robot

Renjewski, D., Spröwitz, A., Peekema, A., Jones, M., Hurst, J.

IEEE Transactions on Robotics, 31(5):1244-1251, IEEE, New York, NY, 2015 (article)

Abstract
A common approach in designing legged robots is to build fully actuated machines and control the machine dynamics entirely in software, carefully avoiding impacts and expending a lot of energy. However, these machines are outperformed by their human and animal counterparts. Animals achieve their impressive agility, efficiency, and robustness through a close integration of passive dynamics, implemented through mechanical components, and neural control. Robots can benefit from this same integrated approach, but a strong theoretical framework is required to design the passive dynamics of a machine and exploit them for control. For this framework, we use a bipedal spring–mass model, which has been shown to approximate the dynamics of human locomotion. This paper reports the first implementation of spring–mass walking on a bipedal robot. We present the use of template dynamics as a control objective exploiting the engineered passive spring–mass dynamics of the ATRIAS robot. The results highlight the benefits of combining passive dynamics with dynamics-based control and open up a library of spring–mass model-based control strategies for dynamic gait control of robots.


link (url) DOI Project Page [BibTex]



Comparing the effect of different spine and leg designs for a small bounding quadruped robot

Eckert, P., Spröwitz, A., Witte, H., Ijspeert, A. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 3128-3133, Seattle, Washington, USA, 2015 (inproceedings)

Abstract
We present Lynx-robot, a quadruped, modular, compliant machine. It alternately features a directly actuated, single-joint spine design, or an actively supported, passive compliant, multi-joint spine configuration. Both spine configurations bend in the sagittal plane. This study aims at characterizing these two largely different spine concepts for a bounding gait of a robot with a three-segmented, pantograph leg design. An earlier, similar-sized, bounding quadruped robot named Bobcat, with a two-segment leg design and a directly actuated, single-joint spine design, serves as a comparison robot to study and compare the effect of the leg design on speed, while keeping the spine design fixed. Both proposed spine designs (single-joint actuated, and multi-joint compliant) reach moderate, self-stable speeds.


link (url) DOI Project Page [BibTex]


2014


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.


pdf DOI [BibTex]



Optimizing Average Precision using Weakly Supervised Data

Behl, A., Jawahar, C. V., Kumar, M. P.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014 (conference)


[BibTex]



Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages: 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allows real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.


pdf DOI [BibTex]



Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages: 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.


pdf DOI [BibTex]



3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.


pdf link (url) [BibTex]



Roombots: A hardware perspective on 3D self-reconfiguration and locomotion with a homogeneous modular robot

Spröwitz, A., Moeckel, R., Vespignani, M., Bonardi, S., Ijspeert, A. J.

Robotics and Autonomous Systems, 62(7):1016-1033, Elsevier, Amsterdam, 2014 (article)

Abstract
In this work we provide hands-on experience on designing and testing a self-reconfiguring modular robotic system, Roombots (RB), to be used among others for adaptive furniture. In the long term, we envision that RB can be used to create sets of furniture, such as stools, chairs and tables that can move in their environment and that change shape and functionality during the day. In this article, we present the first, incremental results towards that long term vision. We demonstrate locomotion and reconfiguration of single and metamodule RB over 3D surfaces, in a structured environment equipped with embedded connection ports. RB assemblies can move around in non-structured environments, by using rotational or wheel-like locomotion. We show a proof of concept for transferring a Roombots metamodule (two in-series coupled RB modules) from the non-structured environment back into the structured grid, by aligning the RB metamodule in an entrapment mechanism. Finally, we analyze the remaining challenges to master the full Roombots scenario, and discuss the impact on future Roombots hardware.


DOI [BibTex]



Learning to Rank using High-Order Information

Dokania, P. K., Behl, A., Jawahar, C. V., Kumar, M. P.

International Conference on Computer Vision (ICCV), 2014 (conference)


[BibTex]



Automatic Generation of Reduced CPG Control Networks for Locomotion of Arbitrary Modular Robot Structures

Bonardi, S., Vespignani, M., Möckel, R., Van den Kieboom, J., Pouya, S., Spröwitz, A., Ijspeert, A.

In Proceedings of Robotics: Science and Systems, University of California, Berkeley, 2014 (inproceedings)

Abstract
The design of efficient locomotion controllers for arbitrary structures of reconfigurable modular robots is challenging because the morphology of the structure can change dynamically during the completion of a task. In this paper, we propose a new method to automatically generate reduced Central Pattern Generator (CPG) networks for locomotion control based on the detection of bio-inspired sub-structures, like body and limbs, and articulation joints inside the robotic structure. We demonstrate how that information, coupled with the potential symmetries in the structure, can be used to speed up the optimization of the gaits and investigate its impact on the solution quality (i.e. the velocity of the robotic structure and the potential internal collisions between robotic modules). We tested our approach on three simulated structures and observed that the reduced network topologies in the first iterations of the optimization process performed significantly better than the fully open ones.


DOI [BibTex]



Kinematic primitives for walking and trotting gaits of a quadruped robot with compliant legs

Spröwitz, A. T., Ajallooeian, M., Tuleu, A., Ijspeert, A. J.

Frontiers in Computational Neuroscience, 8(27):1-13, 2014 (article)

Abstract
In this work we research the role of body dynamics in the complexity of kinematic patterns in a quadruped robot with compliant legs. Two gait patterns, lateral sequence walk and trot, along with leg length control patterns of different complexity were implemented in a modular, feed-forward locomotion controller. The controller was tested on a small, quadruped robot with compliant, segmented leg design, and led to self-stable and self-stabilizing robot locomotion. In-air stepping and on-ground locomotion leg kinematics were recorded, and the number and shapes of motion primitives accounting for 95% of the variance of kinematic leg data were extracted. This revealed that kinematic patterns resulting from feed-forward control had a lower complexity (in-air stepping, 2–3 primitives) than kinematic patterns from on-ground locomotion (∼4 primitives), although both experiments applied identical motor patterns. The complexity of on-ground kinematic patterns had increased through ground contact and mechanical entrainment. The complexity of observed kinematic on-ground data matches those reported from level-ground locomotion data of legged animals. Results indicate that a very low complexity of modular, rhythmic, feed-forward motor control is sufficient for level-ground locomotion in combination with passive compliant legged hardware.


link (url) DOI [BibTex]


2012


Development of a Minimalistic Pneumatic Quadruped Robot for Fast Locomotion

Narioka, K., Rosendo, A., Spröwitz, A., Hosoda, K.

In Proceedings of the 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), pages: 307-311, IEEE, Guangzhou, 2012 (inproceedings)

Abstract
In this paper, we describe the development of the quadruped robot "Ken" with a minimalistic and lightweight body design for achieving fast locomotion. We use McKibben pneumatic artificial muscles as actuators, providing high-frequency and wide-stride motion of the limbs while also avoiding problems with overheating. In a preliminary experiment we found that the robot can swing its limb at over 7.5 Hz without amplitude reduction or heat problems. Moreover, the robot realized several steps of a bounding gait using a simple CPG-based open-loop controller, indicating that it can generate enough torque to kick the ground and sufficient limb contraction to avoid stumbling.


DOI [BibTex]



Locomotion through Reconfiguration based on Motor Primitives for Roombots Self-Reconfigurable Modular Robots

Bonardi, S., Moeckel, R., Spröwitz, A., Vespignani, M., Ijspeert, A. J.

In ROBOTIK 2012: 7th German Conference on Robotics, pages: 1-6, 2012 (inproceedings)

Abstract
We present the hardware and reconfiguration experiments for an autonomous self-reconfigurable modular robot called Roombots (RB). RB were designed to form the basis for self-reconfigurable furniture. Each RB module contains three degrees of freedom that have been carefully selected to allow a single module to reach any position on a 2-dimensional grid and to overcome concave corners in a 3-dimensional grid. For the first time we demonstrate locomotion capabilities of single RB modules through reconfiguration with real hardware. The locomotion through reconfiguration is controlled by a planner combining the well-known D* algorithm and composed motor primitives. The novelty of our approach is the use of an online running hierarchical planner closely linked to the real hardware.


link (url) [BibTex]



Geometric Image Synthesis

Alhaija, H. A., Mustikovela, S. K., Geiger, A., Rother, C.

(conference)


Project Page [BibTex]

