

2017


Electroencephalographic identifiers of motor adaptation learning

Ozdenizci, O., Yalcin, M., Erdogan, A., Patoglu, V., Grosse-Wentrup, M., Cetin, M.

Journal of Neural Engineering, 14(4):046027, 2017 (article)

ei

link (url) [BibTex]



Detecting distortions of peripherally presented letter stimuli under crowded conditions

Wallis, T. S. A., Tobias, S., Bethge, M., Wichmann, F. A.

Attention, Perception, & Psychophysics, 79(3):850-862, 2017 (article)

ei

DOI Project Page [BibTex]


Temporal evolution of the central fixation bias in scene viewing

Rothkegel, L. O. M., Trukenbrod, H. A., Schütt, H. H., Wichmann, F. A., Engbert, R.

Journal of Vision, 17(13):3, 2017 (article)

ei

DOI Project Page [BibTex]


BundleMAP: Anatomically Localized Classification, Regression, and Hypothesis Testing in Diffusion MRI

Khatami, M., Schmidt-Wilcke, T., Sundgren, P. C., Abbasloo, A., Schölkopf, B., Schultz, T.

Pattern Recognition, 63, pages: 593-600, 2017 (article)

ei

DOI [BibTex]


Data-Driven Physics for Human Soft Tissue Animation

Kim, M., Pons-Moll, G., Pujades, S., Bang, S., Kim, J., Black, M. J., Lee, S.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):54:1-54:12, 2017 (article)

Abstract
Data-driven models of human poses and soft-tissue deformations can produce very realistic results, but they only model the visible surface of the human body and cannot create skin deformation due to interactions with the environment. Physical simulations can generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based external layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft tissue layer consists of a tetrahedral mesh that is driven using the finite element method (FEM). Model parameters, namely the segmentation of the body into layers and the soft tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another when they share the same topology.

ps

video paper link (url) Project Page [BibTex]


A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Wallis, T. S. A., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., Bethge, M.

Journal of Vision, 17(12), 2017 (article)

ei

DOI Project Page [BibTex]


Model Selection for Gaussian Mixture Models

Huang, T., Peng, H., Zhang, K.

Statistica Sinica, 27(1):147-169, 2017 (article)

ei

link (url) [BibTex]


Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

(Best Paper, Eurographics 2017)

Marcard, T. V., Rosenhahn, B., Black, M., Pons-Moll, G.

Computer Graphics Forum, 36(2), Proceedings of the 38th Annual Conference of the European Association for Computer Graphics (Eurographics), pages: 349-360, 2017 (article)

Abstract
We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
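
The core of SIP is a joint optimization over all frames that fits a statistical body model to sparse orientation and acceleration measurements. The toy sketch below illustrates only that multi-frame fitting idea, on a 2-link planar chain with synthetic measurements and a temporal smoothness prior; the chain, the data, and the weights are stand-ins and not the authors' SMPL-based formulation.

```python
# Toy illustration of multi-frame orientation fitting in the spirit of SIP
# (the real method fits the SMPL body model to 6 IMU orientations and
# accelerations; here a 2-link planar chain and synthetic data stand in).
import numpy as np
from scipy.optimize import least_squares

T = 30                                                            # number of frames
rng = np.random.default_rng(1)
true_angles = np.cumsum(0.1 * rng.normal(size=(T, 2)), axis=0)    # joint angles per frame
# "IMU" readings: global orientation of each link = running sum of joint angles.
meas = np.cumsum(true_angles, axis=1) + 0.05 * rng.normal(size=(T, 2))

def residuals(flat_angles):
    angles = flat_angles.reshape(T, 2)
    ori = np.cumsum(angles, axis=1)                       # forward kinematics (orientations only)
    data_term = (ori - meas).ravel()                      # fit the orientation measurements
    smooth_term = 3.0 * np.diff(angles, axis=0).ravel()   # temporal smoothness prior
    return np.concatenate([data_term, smooth_term])

sol = least_squares(residuals, x0=np.zeros(T * 2))        # joint optimization over all frames
est = sol.x.reshape(T, 2)
print("mean abs angle error:", np.mean(np.abs(est - true_angles)))
```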

ps

video pdf Project Page [BibTex]


Efficient 2D and 3D Facade Segmentation using Auto-Context

Gadde, R., Jampani, V., Marlet, R., Gehler, P.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017 (article)

Abstract
This paper introduces a fast and efficient segmentation technique for 2D images and 3D point clouds of building facades. Facades of buildings are highly structured, and consequently most methods that have been proposed for this problem aim to make use of this strong prior information. Contrary to most prior work, we describe a system that is almost domain independent and consists of standard segmentation methods. We train a sequence of boosted decision trees using auto-context features. This is learned using stacked generalization. We find that this technique performs better than, or comparably to, all previously published methods, and we present empirical results on all available 2D and 3D facade benchmark datasets. The proposed method is simple to implement, easy to extend, and very efficient at test-time inference.
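
The pipeline described above, a cascade of boosted decision trees whose later stages also see the class probabilities produced by earlier stages, trained with stacked generalization, can be sketched roughly as follows. This is an illustrative reconstruction with scikit-learn, not the authors' implementation; the feature extraction, the spatial pooling of context, and the data are placeholders.

```python
# Minimal sketch of auto-context stacking with boosted trees (illustrative only;
# real facade features and spatial context pooling are not modeled here).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

def train_auto_context(X, y, n_stages=3):
    """Train a cascade of boosted-tree classifiers; each stage sees the raw
    features plus the cross-validated class probabilities of the previous
    stage (auto-context trained with stacked generalization)."""
    stages, features = [], X
    for _ in range(n_stages):
        clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
        # Cross-validated probabilities keep later stages from training on
        # overfitted outputs of earlier ones.
        proba = cross_val_predict(clf, features, y, cv=3, method="predict_proba")
        clf.fit(features, y)
        stages.append(clf)
        features = np.hstack([X, proba])
    return stages

def predict_auto_context(stages, X):
    """Run the cascade; each stage's probabilities feed the next stage."""
    features = X
    for clf in stages:
        proba = clf.predict_proba(features)
        features = np.hstack([X, proba])
    return proba.argmax(axis=1)

# Toy usage with random per-pixel/superpixel descriptors and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
model = train_auto_context(X, y)
print(predict_auto_context(model, X)[:10])
```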

ps

arXiv Project Page [BibTex]


ClothCap: Seamless 4D Clothing Capture and Retargeting

Pons-Moll, G., Pujades, S., Hu, S., Black, M.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 36(4):73:1-73:15, ACM, New York, NY, USA, 2017, the first two authors contributed equally (article)

Abstract
Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.

ps

video project_page paper link (url) DOI Project Page Project Page [BibTex]


An image-computable psychophysical spatial vision model

Schütt, H. H., Wichmann, F. A.

Journal of Vision, 17(12), 2017 (article)

ei

DOI Project Page [BibTex]


Methods and measurements to compare men against machines

Wichmann, F. A., Janssen, D. H. J., Geirhos, R., Aguilar, G., Schütt, H. H., Maertens, M., Bethge, M.

Electronic Imaging, pages: 36-45(10), 2017 (article)

ei

DOI [BibTex]


Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

ei

[BibTex]


A Comparison of Autoregressive Hidden Markov Models for Multimodal Manipulations With Variable Masses

Kroemer, O., Peters, J.

IEEE Robotics and Automation Letters, 2(2):1101-1108, 2017 (article)

ei

DOI [BibTex]


Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration

Maeda, G., Ewerton, M., Neumann, G., Lioutikov, R., Peters, J.

International Journal of Robotics Research, 36(13-14):1579-1594, 2017, Special Issue on the Seventeenth International Symposium on Robotics Research (article)

ei

DOI Project Page [BibTex]


A Phase-coded Aperture Camera with Programmable Optics

Chen, J., Hirsch, M., Heintzmann, R., Eberhardt, B., Lensch, H. P. A.

Electronic Imaging, 2017(17):70-75, 2017 (article)

ei

DOI [BibTex]


On Maximum Entropy and Inference

Gresele, L., Marsili, M.

Entropy, 19(12):article no. 642, 2017 (article)

ei

link (url) [BibTex]


Towards Engagement Models that Consider Individual Factors in HRI: On the Relation of Extroversion and Negative Attitude Towards Robots to Gaze and Speech During a Human-Robot Assembly Task

Ivaldi, S., Lefort, S., Peters, J., Chetouani, M., Provasi, J., Zibetti, E.

International Journal of Social Robotics, 9(1):63-86, 2017 (article)

ei

DOI [BibTex]


Non-parametric Policy Search with Limited Information Loss

van Hoof, H., Neumann, G., Peters, J.

Journal of Machine Learning Research, 18(73):1-46, 2017 (article)

ei

link (url) Project Page [BibTex]


Stability of Controllers for Gaussian Process Dynamics

Vinogradska, J., Bischoff, B., Nguyen-Tuong, D., Peters, J.

Journal of Machine Learning Research, 18(100):1-37, 2017 (article)

ei

link (url) Project Page [BibTex]


SUV-quantification of physiological lung tissue in an integrated PET/MR-system: Impact of lung density and bone tissue

Seith, F., Schmidt, H., Gatidis, S., Bezrukov, I., Schraml, C., Pfannenberg, C., la Fougère, C., Nikolaou, K., Schwenzer, N.

PLOS ONE, 12(5):1-13, 2017 (article)

ei

DOI [BibTex]

1995


View-Based Cognitive Mapping and Path Planning

Schölkopf, B., Mallot, H.

Adaptive Behavior, 3(3):311-348, January 1995 (article)

Abstract
This article presents a scheme for learning a cognitive map of a maze from a sequence of views and movement decisions. The scheme is based on an intermediate representation called the view graph, whose nodes correspond to the views whereas the labeled edges represent the movements leading from one view to another. By means of a graph theoretical reconstruction method, the view graph is shown to carry complete information on the topological and directional structure of the maze. Path planning can be carried out directly in the view graph without actually performing this reconstruction. A neural network is presented that learns the view graph during a random exploration of the maze. It is based on an unsupervised competitive learning rule translating temporal sequence (rather than similarity) of views into connectedness in the network. The network uses its knowledge of the topological and directional structure of the maze to generate expectations about which views are likely to be encountered next, improving the view-recognition performance. Numerical simulations illustrate the network's ability for path planning and the recognition of views degraded by random noise. The results are compared to findings of behavioral neuroscience.
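
The central data structure in this scheme is the view graph: nodes are views, and labeled edges are the movements that lead from one view to another, so path planning reduces to graph search. The sketch below illustrates that representation and a breadth-first planner on a small hand-written toy graph; the paper itself learns the graph with an unsupervised competitive network rather than specifying it by hand.

```python
# Minimal sketch of a view graph and path planning on it (illustrative only;
# the views and movements below are made-up placeholders).
from collections import deque

# view_graph[view][movement] = view reached by executing that movement.
view_graph = {
    "A": {"forward": "B", "left": "C"},
    "B": {"forward": "D"},
    "C": {"right": "D"},
    "D": {},
}

def plan_path(graph, start, goal):
    """Breadth-first search returning a movement sequence from start to goal."""
    queue, visited = deque([(start, [])]), {start}
    while queue:
        view, moves = queue.popleft()
        if view == goal:
            return moves
        for movement, nxt in graph[view].items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, moves + [movement]))
    return None   # goal not reachable from start

print(plan_path(view_graph, "A", "D"))   # ['forward', 'forward']
```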

ei

Web DOI [BibTex]



Suppression and creation of chaos in a periodically forced Lorenz system

Franz, M. O., Zhang, M. H.

Physical Review E, 52, pages: 3558-3565, 1995 (article)

Abstract
Periodic forcing is introduced into the Lorenz model to study the effects of time-dependent forcing on the behavior of the system. Such a nonautonomous system stays dissipative and has a bounded attracting set which all trajectories finally enter. The possible kinds of attracting sets are restricted to periodic orbits and strange attractors. A large-scale survey of parameter space shows that periodic forcing has mainly three effects in the Lorenz system depending on the forcing frequency: (i) Fixed points are replaced by oscillations around them; (ii) resonant periodic orbits are created both in the stable and the chaotic region; (iii) chaos is created in the stable region near the resonance frequency and in periodic windows. A comparison to other studies shows that part of this behavior has been observed in simulations of higher truncations and real world experiments. Since very small modulations can already have a considerable effect, this suggests that periodic processes such as annual or diurnal cycles should not be omitted even in simple climate models.
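
A minimal way to reproduce the kind of experiment described above is to integrate the Lorenz equations with a time-periodic parameter and sweep the forcing amplitude and frequency. The sketch below assumes the forcing enters as a sinusoidal modulation of the Rayleigh parameter r; this is an illustrative choice only and does not reproduce the paper's exact forcing term or parameter survey.

```python
# Sketch of a periodically forced Lorenz system (illustrative; a sinusoidal
# modulation of the Rayleigh parameter r is assumed, with arbitrary A, omega).
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 10.0, 28.0, 8.0 / 3.0      # classical Lorenz parameters
A, omega = 2.0, 8.0                       # forcing amplitude and frequency (assumed)

def forced_lorenz(t, state):
    x, y, z = state
    r_t = r + A * np.sin(omega * t)       # time-periodic forcing of r
    return [sigma * (y - x), x * (r_t - z) - y, x * y - b * z]

# Integrate one trajectory; sweeping A and omega would map out the regimes
# (suppressed chaos, resonant periodic orbits, created chaos) described above.
sol = solve_ivp(forced_lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
print("final state:", sol.y[:, -1])
```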

ei

[BibTex]


Image segmentation from motion: just the loss of high-spatial-frequency content?

Wichmann, F., Henning, G.

Perception, 24, pages: S19, 1995 (poster)

Abstract
The human contrast sensitivity function (CSF) is bandpass for stimuli of low temporal frequency but, for moving stimuli, results in a low-pass CSF with large high spatial-frequency losses. Thus the high spatial-frequency content of images moving on the retina cannot be seen; motion perception could be facilitated by, or even be based on, the selective loss of high spatial-frequency content. 2-AFC image segmentation experiments were conducted with segmentation based on motion or on form. In the latter condition, the form difference mirrored that produced by moving stimuli. This was accomplished by generating stimulus elements which were spectrally either broadband or low-pass. For the motion used, the spectral difference between static broadband and static low-pass elements matched the spectral difference between moving and static broadband elements. On the hypothesis that segmentation from motion is based on the detection of regions devoid of high spatial-frequencies, both tasks should be similarly difficult for human observers. However, neither image segmentation (nor, incidentally, motion detection) was sensitive to the high spatial-frequency content of the stimuli. Thus changes in perceptual form produced by moving stimuli appear not to be used as a cue for image segmentation.

ei

[BibTex]