

2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.

avg

pdf slides video Project Page [BibTex]



Learning of sub-optimal gait controllers for magnetic walking soft millirobots

Culha, U., Demir, S. O., Trimpe, S., Sitti, M.

In Proceedings of Robotics: Science and Systems, July 2020; Culha and Demir contributed equally (inproceedings)

Abstract
Untethered small-scale soft robots have promising applications in minimally invasive surgery, targeted drug delivery, and bioengineering applications as they can access confined spaces in the human body. However, due to highly nonlinear soft continuum deformation kinematics, inherent stochastic variability during fabrication at the small scale, and lack of accurate models, the conventional control methods cannot be easily applied. Adaptivity of robot control is additionally crucial for medical operations, as operation environments show large variability, and robot materials may degrade or change over time, which would have deteriorating effects on the robot motion and task performance. Therefore, we propose using a probabilistic learning approach for millimeter-scale magnetic walking soft robots using Bayesian optimization (BO) and Gaussian processes (GPs). Our approach provides a data-efficient learning scheme to find controller parameters while optimizing the stride length performance of the walking soft millirobot within a small number of physical experiments. We demonstrate adaptation to fabrication variabilities in three different robots and to walking surfaces with different roughness. We also show an improvement in the learning performance by transferring the learning results of one robot to the others as prior information.
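The following is a minimal, illustrative sketch (not the authors' code) of the kind of BO loop the abstract describes, using a Gaussian-process surrogate with expected improvement. The two controller parameters, their bounds, and the synthetic stand-in for the physical stride-length measurement are all assumptions for demonstration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Controller parameter bounds -- both dimensions are illustrative assumptions.
bounds = np.array([[0.5, 5.0],    # gait frequency (Hz), assumed
                   [5.0, 30.0]])  # field rotation amplitude (deg), assumed

def measure_stride_length(p):
    # Stand-in for one physical experiment on the millirobot; a noisy
    # synthetic response is used here so the sketch runs end to end.
    return -((p[0] - 2.5) ** 2 + 0.05 * (p[1] - 18.0) ** 2) + rng.normal(0, 0.1)

def expected_improvement(X, gp, y_best):
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

# A few random initial experiments, then a small sequential BO budget.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([measure_stride_length(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)  # refit the surrogate to all experiments so far
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, measure_stride_length(x_next))
print("best controller parameters:", X[np.argmax(y)])
```

Transfer across robots, as described in the abstract, could then amount to seeding this loop with another robot's (X, y) experiments as prior evidence.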

pi

link (url) DOI [BibTex]



GENTEL: GENerating Training data Efficiently for Learning to segment medical images

Thakur, R. P., Rocamora, S. P., Goel, L., Pohmann, R., Machann, J., Black, M. J.

Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFAIP), June 2020 (conference)

Abstract
Accurately segmenting MRI images is crucial for many clinical applications. However, manually segmenting images with accurate pixel precision is a tedious and time-consuming task. In this paper we present a simple, yet effective method to improve the efficiency of the image segmentation process. We propose to transform the image annotation task into a binary choice task. We start by using classical image processing algorithms with different parameter values to generate multiple, different segmentation masks for each input MRI image. Then, instead of segmenting the pixels of the images, the user only needs to decide whether a segmentation is acceptable or not. This method allows us to efficiently obtain high quality segmentations with minor human intervention. With the selected segmentations, we train a state-of-the-art neural network model. For the evaluation, we use a second MRI dataset (1.5T Dataset), acquired with a different protocol and containing annotations. We show that the trained network i) is able to automatically segment cases where none of the classical methods obtain a high quality result; ii) generalizes to the second MRI dataset, which was acquired with a different protocol and was never seen at training time; and iii) enables detection of mis-annotations in this second dataset. Quantitatively, the trained network obtains very good results: Dice score (mean 0.98, median 0.99) and Hausdorff distance in pixels (mean 4.7, median 2.0).
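A minimal sketch of the binary-choice idea, assuming grayscale 2-D slices; the percentile-threshold sweep and the console prompt are illustrative stand-ins for the paper's classical algorithms and review interface.

```python
import numpy as np
from scipy import ndimage

dataset = []  # assumed: fill with 2-D MRI slices as float arrays

def candidate_masks(img):
    # Classical segmentations with varied parameters: percentile thresholds
    # followed by simple morphological clean-up.
    masks = []
    for q in (60, 70, 80, 90):
        mask = img > np.percentile(img, q)
        mask = ndimage.binary_opening(mask, iterations=2)  # drop speckle
        masks.append(ndimage.binary_fill_holes(mask))
    return masks

def accept(img, mask):
    # Stand-in for the human binary choice (in practice shown on screen).
    return input("accept this mask? [y/n] ").strip().lower() == "y"

accepted = []
for img in dataset:
    for mask in candidate_masks(img):
        if accept(img, mask):
            accepted.append((img, mask))  # training pair for the network
            break  # one accepted mask per image suffices
```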

ps

[BibTex]



Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.

ps

arxiv project page code [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment (e.g., people sitting on the sofa or cooking near the stove), and (2) the generated human-scene interaction be physically feasible such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g., to generate training data for human pose estimation, in video games and in VR/AR. Our project page for data and code is at: https://vlg.inf.ethz.ch/projects/PSI/

ps

Code PDF [BibTex]



Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.

ps

Paper [BibTex]



VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE
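As a rough sketch of what a sequence-level adversarial term can look like (the GRU architecture, the 72-dimensional SMPL-style pose input, and the least-squares GAN losses are our assumptions, not necessarily the paper's exact choices):

```python
import torch
import torch.nn as nn

class MotionDiscriminator(nn.Module):
    """Scores a pose sequence as real (mocap) or fake (regressed)."""
    def __init__(self, pose_dim=72, hidden=256):  # 72 = SMPL axis-angle pose, assumed
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):            # seq: (batch, frames, pose_dim)
        _, h = self.gru(seq)
        return self.head(h[-1])        # one realism score per sequence

disc = MotionDiscriminator()

def d_loss(real, fake):
    # Discriminator: push mocap sequences toward 1, regressed ones toward 0.
    return ((disc(real) - 1) ** 2).mean() + (disc(fake.detach()) ** 2).mean()

def g_loss(fake):
    # Regressor: produce sequences the discriminator scores as real.
    return ((disc(fake) - 1) ** 2).mean()
```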

ps

arXiv code video supplemental video [BibTex]


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
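A condensed sketch of the recipe described above: a plain autoencoder whose stochastic sampling is replaced by an explicit latent regularizer, followed by an ex-post density fit over the learned codes. Layer sizes, the regularization weight, and the 10-component mixture are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# Stand-in data loader; in practice, MNIST batches of shape (B, 1, 28, 28).
loader = [(torch.rand(128, 1, 28, 28), None) for _ in range(50)]

enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 16))
dec = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for x, _ in loader:  # one pass shown; train for multiple epochs in practice
    z = enc(x)
    x_hat = dec(z)
    # Reconstruction plus an explicit latent regularizer in place of the
    # VAE's noise injection and KL term.
    loss = ((x_hat - x.flatten(1)) ** 2).mean() + 1e-2 * (z ** 2).sum(1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Ex-post density estimation: fit a mixture over the learned codes, then
# sample latents from it and decode them to generate new data.
with torch.no_grad():
    codes = torch.cat([enc(x) for x, _ in loader]).numpy()
    gmm = GaussianMixture(n_components=10).fit(codes)
    z_new = torch.as_tensor(gmm.sample(64)[0], dtype=torch.float32)
    samples = dec(z_new)
```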

ei ps

arXiv [BibTex]



Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from unpaired and unannotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

ps

pdf [BibTex]



Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., van Gool, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]



Towards 5-DoF Control of an Untethered Magnetic Millirobot via MRI Gradient Coils

Onder Erin, D. A. M. E. T., Sitti, M.

In IEEE International Conference on Robotics and Automation (ICRA), 2020 (inproceedings)

pi

[BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
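A minimal sketch of the second idea (a replay buffer that focuses on high-uncertainty states); the ensemble-disagreement uncertainty proxy and the fixed capacity are our assumptions.

```python
import heapq
import numpy as np

class UncertaintyReplayBuffer:
    """Fixed-size buffer that retains the highest-uncertainty samples."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []       # min-heap of (uncertainty, insertion id, sample)
        self._count = 0

    def add(self, sample, uncertainty):
        item = (uncertainty, self._count, sample)
        self._count += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        else:
            heapq.heappushpop(self.heap, item)  # evict the least uncertain

    def batch(self, rng, size):
        idx = rng.choice(len(self.heap), size=size, replace=False)
        return [self.heap[i][2] for i in idx]

def disagreement(policies, obs):
    # One possible uncertainty proxy: spread of an ensemble's predicted actions.
    actions = np.stack([pi(obs) for pi in policies])
    return actions.std(axis=0).mean()
```

In each aggregation round, newly collected on-policy states would be added with their uncertainty scores and training batches drawn from the buffer, so learning progressively concentrates on the poorly modeled regions.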

avg

pdf suppmat Video 2 Project Page Slides Video 1 [BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
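A schematic sketch of a mixture-of-policies model with a learned, situation-dependent gate, in the spirit of the description above; the feature and action dimensionalities and the number of experts are assumptions.

```python
import torch
import torch.nn as nn

class MixturePolicy(nn.Module):
    def __init__(self, feat_dim=512, n_experts=3, act_dim=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, act_dim) for _ in range(n_experts)])
        self.gate = nn.Sequential(nn.Linear(feat_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, feat):              # feat: (batch, feat_dim) image features
        w = self.gate(feat)               # situation-specific mixture weights
        acts = torch.stack([e(feat) for e in self.experts], dim=1)
        return (w.unsqueeze(-1) * acts).sum(dim=1)  # blended control action
```

Such a model can first be trained by behavior cloning and then fine-tuned directly on the task reward, as the abstract describes.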

avg

pdf suppmat Video 2 Project Page Video 1 Slides [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.

avg

pdf Project Page Slides Video Poster [BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
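The stated insight admits a compact restatement (our paraphrase, assuming an implicit field f_theta with surface level tau and a camera ray r(d) = r_0 + d w with predicted surface depth d-hat):

```latex
% Differentiating the surface condition f_\theta(r(\hat d)) = \tau w.r.t. the
% network parameters \theta gives the analytic depth gradient:
\frac{\partial \hat d}{\partial \theta}
  = -\left( \nabla_{p} f_\theta(\hat p) \cdot w \right)^{-1}
    \frac{\partial f_\theta(\hat p)}{\partial \theta},
\qquad \hat p = r_0 + \hat d \, w .
```

This is what allows image-space losses to be backpropagated through the rendered depth without voxelizing or meshing the representation.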

avg

pdf suppmat Video 2 Project Page Video 1 Video 3 Slides Poster [BibTex]



Computer Vision for Autonomous Vehicles: Problems, Datasets and State-of-the-Art

Janai, J., Güney, F., Behl, A., Geiger, A.

Foundations and Trends in Computer Graphics and Vision, 2020 (book)

Abstract
Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles. As with any rapidly growing field, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several survey papers on particular sub-problems have appeared, no comprehensive survey on problems, datasets, and methods in computer vision for autonomous vehicles has been published. This monograph attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques. Our survey includes both the historically most relevant literature as well as the current state of the art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding, and end-to-end learning for autonomous driving. Towards this goal, we analyze the performance of the state of the art on several challenging benchmarking datasets, including KITTI, MOT, and Cityscapes. In addition, we discuss open problems and current research challenges. To ease accessibility and accommodate missing references, we also provide a website that allows navigating topics as well as methods and provides additional information.

avg

pdf Project Page link [BibTex]

2011


Outdoor Human Motion Capture using Inverse Kinematics and von Mises-Fisher Sampling

Pons-Moll, G., Baak, A., Gall, J., Leal-Taixe, L., Mueller, M., Seidel, H., Rosenhahn, B.

In IEEE International Conference on Computer Vision (ICCV), pages: 1243-1250, November 2011 (inproceedings)

ps

project page pdf supplemental [BibTex]



Home 3D body scans from noisy image and range data

Weiss, A., Hirshberg, D., Black, M.

In Int. Conf. on Computer Vision (ICCV), pages: 1951-1958, IEEE, Barcelona, November 2011 (inproceedings)

Abstract
The 3D shape of the human body is useful for applications in fitness, games and apparel. Accurate body scanners, however, are expensive, limiting the availability of 3D body models. We present a method for human shape reconstruction from noisy monocular image and range data using a single inexpensive commodity sensor. The approach combines low-resolution image silhouettes with coarse range data to estimate a parametric model of the body. Accurate 3D shape estimates are obtained by combining multiple monocular views of a person moving in front of the sensor. To cope with varying body pose, we use a SCAPE body model which factors 3D body shape and pose variations. This enables the estimation of a single consistent shape while allowing pose to vary. Additionally, we describe a novel method to minimize the distance between the projected 3D body contour and the image silhouette that uses analytic derivatives of the objective function. We propose a simple method to estimate standard body measurements from the recovered SCAPE model and show that the accuracy of our method is competitive with commercial body scanning systems costing orders of magnitude more.

ps

pdf YouTube poster Project Page [BibTex]



Means in spaces of tree-like shapes

Feragen, A., Hauberg, S., Nielsen, M., Lauze, F.

In Computer Vision (ICCV), 2011 IEEE International Conference on, pages: 736-746, IEEE, November 2011 (inproceedings)

ps

Publisher's site PDF Suppl. material [BibTex]



Everybody needs somebody: modeling social and grouping behavior on a linear programming multiple people tracker

Leal-Taixé, L., Pons-Moll, G., Rosenhahn, B.

In IEEE International Conference on Computer Vision Workshops (ICCVW), November 2011 (inproceedings)

ps

project page pdf [BibTex]



Evaluating the Automated Alignment of 3D Human Body Scans

Hirshberg, D. A., Loper, M., Rachlin, E., Tsoli, A., Weiss, A., Corner, B., Black, M. J.

In 2nd International Conference on 3D Body Scanning Technologies, pages: 76-86, (Editors: D’Apuzzo, Nicola), Hometrica Consulting, Lugano, Switzerland, October 2011 (inproceedings)

Abstract
The statistical analysis of large corpora of human body scans requires that these scans be in alignment, either for a small set of key landmarks or densely for all the vertices in the scan. Existing techniques tend to rely on hand-placed landmarks or algorithms that extract landmarks from scans. The former is time consuming and subjective while the latter is error prone. Here we show that a model-based approach can align meshes automatically, producing alignment accuracy similar to that of previous methods that rely on many landmarks. Specifically, we align a low-resolution, artist-created template body mesh to many high-resolution laser scans. Our alignment procedure employs a robust iterative closest point method with a regularization that promotes smooth and locally rigid deformation of the template mesh. We evaluate our approach on 50 female body models from the CAESAR dataset that vary significantly in body shape. To make the method fully automatic, we define simple feature detectors for the head and ankles, which provide initial landmark locations. We find that, if body poses are fairly similar, as in CAESAR, the fully automated method provides dense alignments that enable statistical analysis and anthropometric measurement.
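A highly simplified sketch of one iteration of the kind of regularized, robust template-to-scan fitting described above; the Gaussian weighting, the Laplacian smoothness pull, and the per-vertex update step are our simplifications of the actual optimization.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(template, scan, neighbors, step=0.5, sigma=0.02, lam=0.1):
    """template: (V, 3) mesh vertices; scan: (S, 3) points;
    neighbors: per-vertex index lists from the template topology."""
    dist, idx = cKDTree(scan).query(template)   # closest scan point per vertex
    w = np.exp(-(dist / sigma) ** 2)            # robust: far matches down-weighted
    data_pull = w[:, None] * (scan[idx] - template)
    # Smoothness: pull each vertex toward the mean of its topological neighbors,
    # resisting distortion of the artist-created template.
    smooth_pull = np.stack([template[n].mean(axis=0) for n in neighbors]) - template
    return template + step * (data_pull + lam * smooth_pull)
```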

ps

pdf slides DOI Project Page [BibTex]



Branch&Rank: Non-Linear Object Detection

(Best Impact Paper Prize)

Lehmann, A., Gehler, P., VanGool, L.

In Proceedings of the British Machine Vision Conference (BMVC), pages: 8.1-8.11, (Editors: Jesse Hoey and Stephen McKenna and Emanuele Trucco), BMVA Press, September 2011, http://dx.doi.org/10.5244/C.25.8 (inproceedings)

ps

video of talk pdf slides supplementary [BibTex]



Efficient and Robust Shape Matching for Model Based Human Motion Capture

Pons-Moll, G., Leal-Taixé, L., Truong, T., Rosenhahn, B.

In German Conference on Pattern Recognition (GCPR), pages: 416-425, September 2011 (inproceedings)

ps

project page pdf [BibTex]



BrainGate pilot clinical trials: Progress in translating neural engineering principles to clinical testing

Hochberg, L., Simeral, J., Black, M., Bacher, D., Barefoot, L., Berhanu, E., Borton, D., Cash, S., Feldman, J., Gallivan, E., Homer, M., Jarosiewicz, B., King, B., Liu, J., Malik, W., Masse, N., Perge, J., Rosler, D., Schmansky, N., Travers, B., Truccolo, W., Nurmikko, A., Donoghue, J.

33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Boston, MA, August 2011 (conference)

ps

[BibTex]



Planning manipulation and grasping tasks with a redundant arm

Gray, S. R., Romano, J. M., Brindza, J., Kim, S., Kuchenbecker, K. J., Kumar, V.

In Proc. ASME International Design Engineering Technical Conferences, Washington, D.C., USA, 2011, DETC2011-47453. Oral presentation given by Gray (inproceedings)

hi

[BibTex]



Learning Output Kernels with Block Coordinate Descent

Dinuzzo, F., Ong, C. S., Gehler, P., Pillonetto, G.

In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages: 49-56, (Editors: Getoor, Lise and Scheffer, Tobias), ACM, New York, NY, USA, June 2011 (inproceedings)

ei ps

data+code pdf [BibTex]



Lessons in Using Vibrotactile Feedback to Guide Fast Arm Motions

Bark, K., Khanna, P., Irwin, R., Kapur, P., Jax, S. A., Buxbaum, L. J., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 355-360, Istanbul, Turkey, June 2011, Poster presentation given by Bark (inproceedings)

hi

[BibTex]



Haptically Assisted Golf Putting Through a Planar Four-Cable System

Huang, P. Y., Kunkel, J. A., Brindza, J., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 191-196, Istanbul, Turkey, June 2011, Poster presentation given by Kuchenbecker (inproceedings)

hi

[BibTex]



Design of Body-Grounded Tactile Actuators for Playback of Human Physical Contact

Stanley, A. A., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 563-568, Istanbul, Turkey, June 2011, Poster presentation given by Stanley (inproceedings)

hi

[BibTex]



Tool Vibration Feedback May Help Expert Robotic Surgeons Apply Less Force During Manipulation Tasks

McMahan, W., Bark, K., Gewirtz, J., Standish, D., Martin, P. D., Kunkel, J. A., Lilavois, M., Wedmid, A., Lee, D. I., Kuchenbecker, K. J.

In Proc. Hamlyn Symposium on Medical Robotics, pages: 37-38, London, England, June 2011, Oral Presentation given by Kuchenbecker (inproceedings)

hi

[BibTex]



Role of expertise and contralateral symmetry in the diagnosis of pneumoconiosis: an experimental study

Jampani, V., Vaidya, V., Sivaswamy, J., Tourani, K. L.

In Proc. SPIE 7966, Medical Imaging: Image Perception, Observer Performance, and Technology Assessment, Florida, March 2011 (inproceedings)

Abstract
Pneumoconiosis, a lung disease caused by the inhalation of dust, is mainly diagnosed using chest radiographs. The effects of using contralateral symmetric (CS) information present in chest radiographs in the diagnosis of pneumoconiosis are studied using an eye tracking experimental study. The role of expertise and the influence of CS information on the performance of readers with different expertise level are also of interest. Experimental subjects ranging from novices and medical students to staff radiologists were presented with 17 double and 16 single lung images, and were asked to give profusion ratings for each lung zone. Eye movements and the time for their diagnosis were also recorded. Kruskal-Wallis test (χ2(6) = 13.38, p = .038) showed that the observer error (average sum of absolute differences) in double lung images differed significantly across the different expertise categories when considering all the participants. Wilcoxon-signed rank test indicated that the observer error was significantly higher for single-lung images (Z = 3.13, p < .001) than for the double-lung images for all the participants. Mann-Whitney test (U = 28, p = .038) showed that the differential error between single and double lung images is significantly higher in doctors [staff & residents] than in non-doctors [others]. Thus, expertise and CS information play a significant role in the diagnosis of pneumoconiosis. CS information helps in diagnosing pneumoconiosis by reducing the general tendency to give lower profusion ratings. Training and experience appear to play important roles in learning to use the CS information present in the chest radiographs.

ps

url link (url) [BibTex]



Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance

Gehler, P., Rother, C., Kiefel, M., Zhang, L., Schölkopf, B.

In Advances in Neural Information Processing Systems 24, pages: 765-773, (Editors: Shawe-Taylor, John and Zemel, Richard S. and Bartlett, Peter L. and Pereira, Fernando C. N. and Weinberger, Kilian Q.), Curran Associates, Inc., Red Hook, NY, USA, Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
We address the challenging task of decoupling material properties from lighting properties given a single image. In the last two decades virtually all works have concentrated on exploiting edge information to address this problem. We take a different route by introducing a new prior on reflectance, that models reflectance values as being drawn from a sparse set of basis colors. This results in a Random Field model with global, latent variables (basis colors) and pixel-accurate output reflectance values. We show that without edge information high-quality results can be achieved, that are on par with methods exploiting this source of information. Finally, we are able to improve on state-of-the-art results by integrating edge information into our model. We believe that our new approach is an excellent starting point for future developments in this field.

ei ps

website + code pdf poster Project Page [BibTex]



OpenBioSafetyLab: A virtual world based biosafety training application for medical students

Nakasone, A., Tang, S., Shigematsu, M., Heinecke, B., Fujimoto, S., Prendinger, H.

In International Conference on Information Technology: New Generations (ITNG), IEEE CPS, 2011 (inproceedings)

ps

PDF [BibTex]



Haptography: Capturing and Recreating the Rich Feel of Real Surfaces

Kuchenbecker, K. J., Romano, J. M., McMahan, W.

In Proceedings of the International Symposium on Robotics Research (ISRR), 70, pages: 245-260, Springer Tracts in Advanced Robotics, Springer, 2011, Oral presentation given by Kuchenbecker in August of 2009 (inproceedings)

hi

[BibTex]



Combining wireless neural recording and video capture for the analysis of natural gait

Foster, J., Freifeld, O., Nuyujukian, P., Ryu, S., Black, M. J., Shenoy, K.

In Proc. 5th Int. IEEE EMBS Conf. on Neural Engineering, pages: 613-616, IEEE, 2011 (inproceedings)

ps

pdf Project Page [BibTex]



Micro-assembly using optically controlled bubbles

Hu, W., Ishii, K. S., Ohta, A. T.

In Optical MEMS and Nanophotonics (OMN), 2011 International Conference on, pages: 53-54, 2011 (inproceedings)

pi

[BibTex]



Automated Control of AFM Based Nanomanipulation

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 237-311, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]



Design and analysis of a magnetically actuated and compliant capsule endoscopic robot

Yim, S., Sitti, M.

In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages: 4810-4815, 2011 (inproceedings)

pi

[BibTex]



Micro-scale propulsion using multiple flexible artificial flagella

Singleton, J., Diller, E., Andersen, T., Regnier, S., Sitti, M.

In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages: 1687-1692, 2011 (inproceedings)

pi

Project Page [BibTex]



Tagged Cardiac MR Image Segmentation Using Boundary & Regional-Support and Graph-based Deformable Priors

Xiang, B., Wang, C., Deux, J., Rahmouni, A., Paragios, N.

In IEEE International Symposium on Biomedical Imaging (ISBI), 2011 (inproceedings)

ps

pdf [BibTex]



Multiview Structure from Motion in Trajectory Space

Zaheer, A., Akhter, I., Baig, M. H., Marzban, S., Khan, S.

In Computer Vision (ICCV), 2011 IEEE International Conference on, pages: 2447-2453, 2011 (inproceedings)

Abstract
Most nonrigid objects exhibit temporal regularities in their deformations. Recently it was proposed that these regularities can be parameterized by assuming that the non-rigid structure lies in a small dimensional trajectory space. In this paper, we propose a factorization approach for 3D reconstruction from multiple static cameras under the compact trajectory subspace representation. The proposed factorization is analogous to the rank-3 factorization of the rigid structure from motion problem, in transformed space. The benefit of our approach is that the 3D trajectory basis can be directly learned from the image observations. This also allows us to impute missing observations and denoise tracking errors without explicit estimation of the 3D structure. In contrast to standard triangulation based methods which require points to be visible in at least two cameras, our approach can reconstruct points, which remain occluded even in all the cameras for quite a long time. This makes our solution especially suitable for occlusion handling in motion capture systems. We demonstrate robustness of our method on challenging real and synthetic scenarios.
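A compact sketch of the trajectory-subspace idea, assuming (as is common for such methods, though not stated in the abstract) a truncated DCT trajectory basis: each 3D point track is fit by least squares in the basis, which simultaneously denoises it and imputes frames where the point is unobserved.

```python
import numpy as np

def dct_basis(F, K):
    """First K DCT-II basis vectors over F frames, as columns of an (F, K) matrix."""
    t = np.arange(F)
    B = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(K)[None, :] / F)
    return B / np.linalg.norm(B, axis=0)

F, K = 120, 10                      # frames and trajectory-basis size (assumed)
B = dct_basis(F, K)

def fit_trajectory(obs, visible):
    """obs: (F, 3) noisy triangulated track; visible: boolean mask of observed frames.
    Returns a smooth, complete (F, 3) trajectory."""
    coeff, *_ = np.linalg.lstsq(B[visible], obs[visible], rcond=None)
    return B @ coeff
```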

ps

pdf project page [BibTex]



Unscented Kalman Filtering for Articulated Human Tracking

Larsen, A. B. L., Hauberg, S., Pedersen, K. S.

In Image Analysis, 6688, pages: 228-237, Lecture Notes in Computer Science, (Editors: Heyden, Anders and Kahl, Fredrik), Springer Berlin Heidelberg, 2011 (inproceedings)

ps

Publisher's site PDF [BibTex]



Adaptation for perception of the human body: Investigations of transfer across viewpoint and pose

Sekunova, A., Black, M. J., Parkinson, L., Barton, J. S.

Vision Sciences Society, 2011 (conference)

ps

[BibTex]



Level Set Segmentation with Robust Image Gradient Energy and Statistical Shape Prior

Yeo, S. Y., Xie, X., Sazonov, I., Nithiarasu, P.

In IEEE International Conference on Image Processing, pages: 3397-3400, 2011 (inproceedings)

Abstract
We propose a new level set segmentation method with statistical shape prior using a variational approach. The image energy is derived from a robust image gradient feature. This gives the active contour a global representation of the geometric configuration, making it more robust to image noise, weak edges and initial configurations. Statistical shape information is incorporated using nonparametric shape density distribution, which allows the model to handle relatively large shape variations. Comparative examples using both synthetic and real images show the robustness and efficiency of the proposed method.

ps

link (url) [BibTex]



Teleoperation Based AFM Manipulation Control

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 145-235, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]



Descriptions and challenges of AFM based nanorobotic systems

Xie, H., Onal, C., Régnier, S., Sitti, M.

In Atomic Force Microscopy Based Nanorobotics, pages: 13-29, Springer Berlin Heidelberg, 2011 (incollection)

pi

[BibTex]



Control of multiple heterogeneous magnetic micro-robots on non-specialized surfaces

Diller, E., Floyd, S., Pawashe, C., Sitti, M.

In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages: 115-120, 2011 (inproceedings)

pi

[BibTex]



Tip based robotic precision micro/nanomanipulation systems

Onal, C., Sumer, B., Ozcan, O., Nain, A., Sitti, M.

In SPIE Defense, Security, and Sensing, pages: 80580M-80580M, 2011 (inproceedings)

pi

[BibTex]



Design of a miniature integrated multi-modal jumping and gliding robot

Woodward, M. A., Sitti, M.

In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages: 556-561, 2011 (inproceedings)

pi

Project Page [BibTex]
