

2016


Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.
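The ray potential construction at the core of this model can be illustrated with a minimal sketch (a simplified, hypothetical Python example, not the paper's actual inference algorithm): given per-voxel occupancy probabilities along a single viewing ray, ordered from the camera outwards, the probability that voxel i is the first occupied one is o_i * prod_{j<i}(1 - o_j), which yields a depth distribution and an expected depth for that ray.

import numpy as np

def ray_depth_marginals(occupancy, depths):
    """Marginal probability that each voxel along a ray is the first occupied one.

    occupancy: per-voxel occupancy probabilities, ordered from the camera outwards.
    depths:    distance of each voxel centre from the camera along the ray.
    Returns (per-voxel depth marginals, expected depth). Simplified illustration only.
    """
    occupancy = np.asarray(occupancy, dtype=float)
    depths = np.asarray(depths, dtype=float)
    # free_before[i] = probability that all voxels in front of voxel i are empty
    free_before = np.concatenate(([1.0], np.cumprod(1.0 - occupancy)[:-1]))
    first_hit = occupancy * free_before          # P(voxel i is the first occupied one)
    expected_depth = np.sum(first_hit * depths) / np.sum(first_hit)
    return first_hit, expected_depth

# Example: a ray traversing five voxels with rising occupancy evidence.
probs, depth = ray_depth_marginals([0.05, 0.1, 0.2, 0.8, 0.9], [1.0, 2.0, 3.0, 4.0, 5.0])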

YouTube pdf poster suppmat Project Page [BibTex]


Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.
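The core geometric step behind such a transfer can be sketched as follows (a simplified, hypothetical z-buffer illustration in Python; the paper instead uses a probabilistic model over 3D bounding primitives): labelled 3D points are projected into a calibrated camera and, per pixel, the label of the closest point is kept.

import numpy as np

def transfer_labels(points, labels, K, R, t, height, width):
    """Project labelled 3D points into an image and keep the closest label per pixel.

    points: (N, 3) world-space points, labels: (N,) integer class ids,
    K: 3x3 intrinsics, R, t: world-to-camera rotation and translation.
    Returns a (height, width) label map with -1 where no point projects.
    Simplified z-buffer transfer, not the paper's probabilistic model.
    """
    points, labels = np.asarray(points, dtype=float), np.asarray(labels)
    cam = points @ R.T + t                     # world -> camera coordinates
    in_front = cam[:, 2] > 0
    cam, labels = cam[in_front], labels[in_front]
    pix = cam @ K.T
    u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
    v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, labels = u[valid], v[valid], cam[valid, 2], labels[valid]

    label_map = np.full((height, width), -1, dtype=int)
    depth_map = np.full((height, width), np.inf)
    for ui, vi, zi, li in zip(u, v, z, labels):
        if zi < depth_map[vi, ui]:             # keep the nearest point (z-buffer test)
            depth_map[vi, ui] = zi
            label_map[vi, ui] = li
    return label_map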

pdf suppmat Project Page [BibTex]


Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

pdf suppmat Project Page [BibTex]


Sustainable effects of simulator-based training on ecological driving

Lüderitz, C., Wirzberger, M., Karrer-Gauß, K.

In Advances in Ergonomic Design of Systems, Products and Processes. Proceedings of the Annual Meeting of the GfA 2015, pages: 463-475, Springer, 2016 (inbook)

Abstract
Simulation-based driver training offers a promising way to teach ecological driving behavior under controlled, comparable conditions. In a study with 23 professional drivers, we tested the effectiveness of such training. The driving behavior of a training group in a simulated drive with and without instructions was compared. Ten weeks later, a repetition drive tested the long-term effect of the training. Driving data revealed reduced fuel consumption by ecological driving in both the guided and repetition drives. Driving time decreased significantly in the training and did not differ from driving time after 10 weeks. Results did not achieve significance for transfer to test drives in real traffic situations. This may be due to the small sample size and biased data as a result of unusual driving behavior. Finally, recent and promising approaches to support drivers in maintaining eco-driving styles beyond training situations are outlined.

DOI [BibTex]


Examining load-inducing factors in instructional design: An ACT-R approach

Wirzberger, M., Rey, G. D.

In Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016), pages: 223-224, University Park, PA, Penn State, 2016 (inproceedings)

[BibTex]


Helping people make better decisions using optimal gamification

Lieder, F., Griffiths, T. L.

In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016 (inproceedings)

Project Page [BibTex]


CLT meets ACT-R: Modeling load-inducing factors in instructional design

Wirzberger, M., Rey, G. D.

In Abstracts of the 58th Conference of Experimental Psychologists, pages: 377, Pabst Science Publishers, Lengerich, 2016 (inproceedings)

[BibTex]


Modeling load factors in multimedia learning: An ACT-R approach

Wirzberger, M.

In Dagstuhl 2016. Proceedings of the 10th Joint Workshop of the German Research Training Groups in Computer Science, pages: 98, Universitätsverlag Chemnitz, Chemnitz, 2016 (inproceedings)

[BibTex]


Separating cognitive load facets in a working memory updating task: An experimental approach

Wirzberger, M., Beege, M., Schneider, S., Nebel, S., Rey, G. D.

In International Meeting of the Psychonomic Society, Granada – Spain, May 5-8, 2016, Abstract Book, pages: 211-212, 2016 (inproceedings)

[BibTex]


CLT meets WMU: Simultaneous experimental manipulation of load factors in a basal working memory task

Wirzberger, M., Beege, M., Schneider, S., Nebel, S., Rey, G. D.

In 9th International Cognitive Load Theory Conference, June 22nd to 24th, 2016, Bochum, Germany, Abstracts, pages: 19, 2016 (inproceedings)

[BibTex]


Bedingt räumliche Nähe bessere Lernergebnisse? Die Rolle der Distanz und Integration beim Lernen mit multiplen Informationsquellen [Does spatial proximity lead to better learning outcomes? The role of distance and integration in learning with multiple information sources]

Beege, M., Nebel, S., Schneider, S., Wirzberger, M., Schmidt, N., Rey, G. D.

In 50th Conference of the German Psychological Society. Abstracts, pages: 540, Pabst Science Publishers, Lengerich, 2016 (inproceedings)

[BibTex]

2014


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.
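Plane hypothesis extraction by voting can be illustrated with a generic sketch (hypothetical and not specific to the omnidirectional voting scheme proposed in the paper): random point triplets are fitted with a plane and vote into a coarse accumulator over plane orientation and distance, and the strongest bins yield the dominant plane hypotheses.

import numpy as np

def vote_planes(points, num_samples=2000, angle_bins=18, dist_bins=50, max_dist=20.0, seed=0):
    """Hough-style voting for dominant 3D planes in a point cloud.

    Random point triplets are fitted with a plane n.x = d and vote into an
    accumulator over (azimuth, elevation, distance-to-origin) bins.
    Returns the accumulator and the parameters of the strongest hypothesis.
    Generic sketch only; parameter values are illustrative.
    """
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    acc = np.zeros((angle_bins, angle_bins, dist_bins))
    best, best_votes = None, -1
    for _ in range(num_samples):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        n /= norm
        d = np.dot(n, p0)
        if d < 0:                                 # fix the sign so that d >= 0
            n, d = -n, -d
        az = int((np.arctan2(n[1], n[0]) + np.pi) / (2 * np.pi) * angle_bins) % angle_bins
        el = int((np.arcsin(np.clip(n[2], -1, 1)) + np.pi / 2) / np.pi * (angle_bins - 1))
        di = int(min(d, max_dist - 1e-6) / max_dist * dist_bins)
        acc[az, el, di] += 1
        if acc[az, el, di] > best_votes:
            best_votes, best = acc[az, el, di], (n.copy(), d)
    return acc, best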

pdf DOI [BibTex]


Optimizing Average Precision using Weakly Supervised Data

Behl, A., Jawahar, C. V., Kumar, M. P.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014 (conference)

[BibTex]


Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for the simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allows real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
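The kind of physical light attenuation model referred to can be written as I(x) = J(x) exp(-c d(x)) + B (1 - exp(-c d(x))), with scene radiance J, range d, per-channel attenuation c and veiling light B; inverting it gives a distance-adaptive enhancement. The sketch below (hypothetical parameter values, simplified single-scattering model, not the paper's exact formulation) illustrates this inversion per colour channel.

import numpy as np

def enhance_underwater(image, depth, attenuation, backscatter):
    """Invert a simple underwater attenuation model per colour channel.

    image:        observed image, float array in [0, 1], shape (H, W, 3)
    depth:        per-pixel range in metres, shape (H, W)
    attenuation:  per-channel attenuation coefficients c, length 3
    backscatter:  per-channel veiling light B, length 3
    Model (simplified): I = J * exp(-c * d) + B * (1 - exp(-c * d)),
    so the restored radiance is J = (I - B * (1 - t)) / t with t = exp(-c * d).
    """
    t = np.exp(-np.asarray(attenuation)[None, None, :] * depth[..., None])
    t = np.clip(t, 1e-3, 1.0)                    # avoid division blow-up at large range
    J = (image - np.asarray(backscatter) * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)

# Hypothetical coefficients: red attenuates fastest in ocean water.
# restored = enhance_underwater(img, depth_map, [0.6, 0.2, 0.1], [0.1, 0.3, 0.4])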

pdf DOI [BibTex]


Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.
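The remapping idea can be sketched as follows (a simplified illustration assuming a fixed nominal scene distance; the actual calibration pipeline is more involved): each pixel's non-central viewing ray is evaluated at a large nominal distance and the resulting 3D point is re-expressed as a unit direction from a single common viewpoint, so that a central model approximately reproduces the true ray orientation.

import numpy as np

def central_ray_approximation(origins, directions, viewpoint, nominal_distance=50.0):
    """Approximate non-central viewing rays by directions through one viewpoint.

    origins:     (N, 3) per-pixel ray origins of the calibrated non-central model
    directions:  (N, 3) unit ray directions
    viewpoint:   (3,) chosen single viewpoint (e.g. the mean of the origins)
    Each ray is sampled at a large nominal distance and the sample point is
    re-expressed as a unit direction from the single viewpoint. This is accurate
    when scene depth is large compared to the viewpoint spread (sketch only).
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    viewpoint = np.asarray(viewpoint, dtype=float)
    points = origins + nominal_distance * directions       # 3D points on each ray
    central = points - viewpoint[None, :]
    return central / np.linalg.norm(central, axis=1, keepdims=True)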

pdf DOI [BibTex]


Learning to Rank using High-Order Information

Dokania, P. K., Behl, A., Jawahar, C. V., Kumar, M. P.

International Conference on Computer Vision, 2014 (conference)

[BibTex]


Algorithm selection by rational metareasoning as a model of human strategy selection

Lieder, F., Plunkett, D., Hamrick, J. B., Russell, S. J., Hay, N. J., Griffiths, T. L.

In Advances in Neural Information Processing Systems 27, 2014 (inproceedings)

Project Page [BibTex]


"I don’t need it!" – Modeling ad-induced interruption while using a Smartphone-app

Wirzberger, M., Russwinkel, N.

CrossWorlds 2014: Theory, Development and Evaluation of Social Technology, 2014 (conference)

DOI [BibTex]


"Keep green!" – Nachhaltige Förderung ökologischen Fahrens durch Simulatortraining? ["Keep green!" – Promoting ecological driving through simulator training in a sustainable manner?]

Wirzberger, M., Lüderitz, C., Rohrer, S., Karrer-Gauß, K.

In 49th Conference of the German Psychological Society. Abstracts, pages: 570, Pabst Science Publishers, Lengerich, 2014 (inproceedings)

[BibTex]


The high availability of extreme events serves resource-rational decision-making

Lieder, F., Hsu, M., Griffiths, T. L.

In Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014 (inproceedings)

[BibTex]


Layers of Abstraction: (Neuro)computational models of learning local and global statistical regularities

Diaconescu, A., Lieder, F., Mathys, C., Stephan, K. E.

In 20th Annual Meeting of the Organization for Human Brain Mapping, 2014 (inproceedings)

[BibTex]



Alhaija, H. A., Mustikovela, S. K., Geiger, A., Rother, C.

(conference)

Project Page [BibTex]