2016
Predictive and Self Triggering for Event-based State Estimation

Trimpe, S.

In Proceedings of the 55th IEEE Conference on Decision and Control (CDC), pages: 3098-3105, Las Vegas, NV, USA, December 2016 (inproceedings)

am ics

arXiv PDF DOI Project Page [BibTex]



Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image

Bogo, F., Kanazawa, A., Lassner, C., Gehler, P., Romero, J., Black, M. J.

In Computer Vision – ECCV 2016, pages: 561-578, Lecture Notes in Computer Science, Springer International Publishing, 14th European Conference on Computer Vision, October 2016 (inproceedings)

Abstract
We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.

ps

pdf Video Sup Mat video Code Project ppt Project Page [BibTex]

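The fitting step described above (project the model's 3D joints and penalize the 2D distance to detections) can be sketched in a few lines. This is a toy illustration only: a rigid four-point "skeleton" and a weak-perspective camera stand in for SMPL and the full SMPLify objective, which additionally uses a robust penalty, pose and shape priors, and an interpenetration term; all data below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the model's 3D joints (hypothetical data).
joints_3d = np.array([[0.0, 0.0, 0.0],
                      [0.2, 0.5, 0.1],
                      [-0.2, 0.5, 0.1],
                      [0.0, 1.0, 0.0]])

def project(joints, params):
    """Weak-perspective projection: scale s and 2D translation (tx, ty)."""
    s, tx, ty = params
    return s * joints[:, :2] + np.array([tx, ty])

# "Detected" 2D joints, synthesized from known parameters plus noise.
true_params = np.array([300.0, 120.0, 80.0])
rng = np.random.default_rng(0)
joints_2d = project(joints_3d, true_params) + rng.normal(0.0, 2.0, (4, 2))

# Minimize the reprojection error between projected and detected joints.
def residuals(params):
    return (project(joints_3d, params) - joints_2d).ravel()

res = least_squares(residuals, x0=np.array([100.0, 0.0, 0.0]))
# res.x recovers the camera parameters up to the joint detection noise.
```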


Superpixel Convolutional Networks using Bilateral Inceptions

Gadde, R., Jampani, V., Kiefel, M., Kappler, D., Gehler, P.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, Springer, 14th European Conference on Computer Vision, October 2016 (inproceedings)

Abstract
In this paper we propose a CNN architecture for semantic image segmentation. We introduce a new “bilateral inception” module that can be inserted in existing CNN architectures and performs bilateral filtering, at multiple feature-scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard backpropagation techniques. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super) pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full resolution segmentation result from the lower resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception modules between the last CNN (1 × 1 convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in time.

am ps

pdf supplementary poster Project Page Project Page [BibTex]



Barrista - Caffe Well-Served

Lassner, C., Kappler, D., Kiefel, M., Gehler, P.

In ACM Multimedia Open Source Software Competition, ACM OSSC16, October 2016 (inproceedings)

Abstract
The caffe framework is one of the leading deep learning toolboxes in the machine learning and computer vision community. While it offers efficiency and configurability, it falls short of a full interface to Python. With increasingly involved procedures for training deep networks and reaching depths of hundreds of layers, creating configuration files and keeping them consistent becomes an error-prone process. We introduce the barrista framework, offering full, pythonic control over caffe. It separates responsibilities and offers code to solve frequently occurring tasks for pre-processing, training and model inspection. It is compatible with all caffe versions since mid-2015 and can import and export .prototxt files. Examples are included, e.g., a deep residual network implemented in only 172 lines (for arbitrary depths), compared to 2,320 lines in the official implementation for the equivalent model.

am ps

pdf link (url) DOI Project Page [BibTex]



Robust Gaussian Filtering using a Pseudo Measurement

Wüthrich, M., Garcia Cifuentes, C., Trimpe, S., Meier, F., Bohg, J., Issac, J., Schaal, S.

In Proceedings of the American Control Conference (ACC), Boston, MA, USA, July 2016 (inproceedings)

Abstract
Most widely-used state estimation algorithms, such as the Extended Kalman Filter and the Unscented Kalman Filter, belong to the family of Gaussian Filters (GF). Unfortunately, GFs fail if the measurement process is modelled by a fat-tailed distribution. This is a severe limitation, because thin-tailed measurement models, such as the analytically-convenient and therefore widely-used Gaussian distribution, are sensitive to outliers. In this paper, we show that mapping the measurements into a specific feature space enables any existing GF algorithm to work with fat-tailed measurement models. We find a feature function which is optimal under certain conditions. Simulation results show that the proposed method allows for robust filtering in both linear and nonlinear systems with measurements contaminated by fat-tailed noise.

am ics

Web link (url) DOI Project Page [BibTex]

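The sensitivity to outliers that motivates this work is easy to reproduce. The sketch below runs a standard 1D Gaussian (Kalman) filter on a random walk, once with thin-tailed noise and once with 5% fat-tailed contamination; it illustrates only the failure mode, not the paper's feature-space remedy, and all noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x_true = np.cumsum(rng.normal(0, 0.1, T))      # random-walk state
y = x_true + rng.normal(0, 0.5, T)             # thin-tailed (Gaussian) noise
outliers = rng.random(T) < 0.05                # 5% contamination
y_fat = np.where(outliers, y + rng.normal(0, 20.0, T), y)

def kalman_1d(ys, q=0.01, r=0.25):
    """Scalar Kalman filter assuming Gaussian measurement noise."""
    m, p, ms = 0.0, 1.0, []
    for yk in ys:
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        m += k * (yk - m)           # update
        p *= (1 - k)
        ms.append(m)
    return np.array(ms)

rmse_clean = np.sqrt(np.mean((kalman_1d(y) - x_true) ** 2))
rmse_fat = np.sqrt(np.mean((kalman_1d(y_fat) - x_true) ** 2))
# The fat-tailed contamination inflates the estimation error substantially.
```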


DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation

Pishchulin, L., Insafutdinov, E., Tang, S., Andres, B., Andriluka, M., Gehler, P., Schiele, B.

In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 4929-4937, IEEE, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
This paper considers the task of articulated human pose estimation of multiple people in real-world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity to each other. This joint formulation is in contrast to previous strategies that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.

ps

code pdf supplementary DOI Project Page [BibTex]



Video segmentation via object flow

Tsai, Y., Yang, M., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Video object segmentation is challenging due to fast moving objects, deforming shapes, and cluttered backgrounds. Optical flow can be used to propagate an object segmentation over time but, unfortunately, flow is often inaccurate, particularly around object boundaries. Such boundaries are precisely where we want our segmentation to be accurate. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multiscale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process object flow and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme. Experiments on the SegTrack v2 and YouTube-Objects datasets show that the proposed algorithm performs favorably against other state-of-the-art methods.

ps

pdf [BibTex]



Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.

avg ps

YouTube pdf poster suppmat Project Page [BibTex]



Optical Flow with Semantic Segmentation and Localized Layers

Sevilla-Lara, L., Sun, D., Jampani, V., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 3889-3898, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Existing optical flow methods make generic, spatially homogeneous, assumptions about the spatial structure of the flow. In reality, optical flow varies across an image depending on object class. Simply put, different objects move differently. Here we exploit recent advances in static semantic scene segmentation to segment the image into objects of different types. We define different models of image motion in these regions depending on the type of object. For example, we model the motion on roads with homographies, vegetation with spatially smooth flow, and independently moving objects like cars and planes with affine motion plus deviations. We then pose the flow estimation problem using a novel formulation of localized layers, which addresses limitations of traditional layered models for dealing with complex scene motion. Our semantic flow method achieves the lowest error of any published monocular method in the KITTI-2015 flow benchmark and produces qualitatively better flow and segmentation than recent top methods on a wide range of natural videos.

ps

video Kitti Precomputed Data (1.6GB) pdf YouTube Sequences Code Project Page Project Page [BibTex]



Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks

Jampani, V., Kiefel, M., Gehler, P. V.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 4452-4461, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Bilateral filters have widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this paper, we will generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation allows us to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications. First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose bilateral neural networks for use with high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the use of general forms of filters.

ps

project page code CVF open-access pdf supplementary poster Project Page Project Page [BibTex]

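As background for the learned generalization above, the hand-parameterized Gaussian bilateral filter (the "common use case" in the abstract) can be written directly. This 1D sketch, on synthetic data, shows the edge-preserving behavior that the paper generalizes to learned, high-dimensional filters; the parameter values are arbitrary.

```python
import numpy as np

def bilateral_1d(signal, sigma_s=3.0, sigma_r=0.2, radius=6):
    """Gaussian bilateral filter: weight by spatial AND range (value) distance."""
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))                    # spatial
             * np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))  # range
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(50), np.ones(50)])   # signal with a sharp edge
noisy = step + rng.normal(0, 0.05, 100)
smoothed = bilateral_1d(noisy)
# The noise is reduced while the step edge at index 50 stays sharp,
# because the range term suppresses averaging across the discontinuity.
```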


Occlusion boundary detection via deep exploration of context

Fu, H., Wang, C., Tao, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Occlusion boundaries contain rich perceptual information about the underlying scene structure. They also provide important cues in many visual perception tasks such as scene understanding, object recognition, and segmentation. In this paper, we improve occlusion boundary detection via enhanced exploration of contextual information (e.g., local structural boundary patterns, observations from surrounding regions, and temporal context), and in doing so develop a novel approach based on convolutional neural networks (CNNs) and conditional random fields (CRFs). Experimental results demonstrate that our detector significantly outperforms the state-of-the-art (e.g., improving the F-measure from 0.62 to 0.71 on the commonly used CMU benchmark). Last but not least, we empirically assess the roles of several important components of the proposed detector, so as to validate the rationale behind this approach.

ps

pdf [BibTex]



Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.

avg ps

pdf suppmat Project Page Project Page [BibTex]



Active Uncertainty Calibration in Bayesian ODE Solvers

Kersting, H., Hennig, P.

Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pages: 309-318, (Editors: Ihler, A. and Janzing, D.), AUAI Press, June 2016 (conference)

Abstract
There is resurging interest, in statistics and machine learning, in solvers for ordinary differential equations (ODEs) that return probability measures instead of point estimates. Recently, Conrad et al. introduced a sampling-based class of methods that are "well-calibrated" in a specific sense. But the computational cost of these methods is significantly above that of classic methods. On the other hand, Schober et al. pointed out a precise connection between classic Runge-Kutta ODE solvers and Gaussian filters, which gives only a rough probabilistic calibration, but at negligible cost overhead. By formulating the solution of ODEs as approximate inference in linear Gaussian SDEs, we investigate a range of probabilistic ODE solvers that bridge the trade-off between computational cost and probabilistic calibration, and identify the inaccurate gradient measurement as the crucial source of uncertainty. We propose the novel filtering-based method Bayesian Quadrature filtering (BQF) which uses Bayesian quadrature to actively learn the imprecision in the gradient measurement by collecting multiple gradient evaluations.

ei pn

link (url) Project Page Project Page [BibTex]

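The Schober et al. connection mentioned above — a Gaussian filter with an integrated-Wiener-process prior acting as an ODE solver, with the vector-field evaluation treated as a "measurement" of the derivative — can be sketched compactly. This is a generic illustration of that filtering view, not the BQF method proposed in the paper; the step size, prior scale, and test problem are arbitrary choices.

```python
import numpy as np

def ode_filter(f, y0, t_end, h, q=0.1, r=1e-10):
    """Gaussian-filter ODE solver with a once-integrated Wiener-process prior.
    State is (y, y'); each step, f evaluated at the predicted y is 'observed'
    as the derivative component of the state."""
    A = np.array([[1.0, h], [0.0, 1.0]])
    Q = q * np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])
    H = np.array([[0.0, 1.0]])
    m, P = np.array([y0, f(y0)]), np.zeros((2, 2))
    ts, ys, stds = [0.0], [y0], [0.0]
    t = 0.0
    while t < t_end - 1e-12:
        m, P = A @ m, A @ P @ A.T + Q            # predict
        z = f(m[0])                              # evaluate the vector field
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                        # Kalman gain
        m = m + (K * (z - m[1])).ravel()         # update
        P = P - K @ H @ P
        t += h
        ts.append(t); ys.append(m[0]); stds.append(np.sqrt(max(P[0, 0], 0.0)))
    return np.array(ts), np.array(ys), np.array(stds)

ts, ys, stds = ode_filter(lambda y: -y, 1.0, 2.0, 0.05)
err = np.abs(ys - np.exp(-ts))
# The posterior mean tracks exp(-t); stds provide a (rough) uncertainty estimate.
```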


Automatic LQR Tuning Based on Gaussian Process Global Optimization

Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 270-277, IEEE, IEEE International Conference on Robotics and Automation, May 2016 (inproceedings)

Abstract
This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework shall yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Results of two- and four-dimensional tuning problems highlight the method's potential for automatic controller tuning on robotic platforms.

am ics pn

Video - Automatic LQR Tuning Based on Gaussian Process Global Optimization - ICRA 2016 Video - Automatic Controller Tuning on a Two-legged Robot PDF DOI Project Page [BibTex]

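The tuning loop has the generic Bayesian-optimization shape: fit a Gaussian process to the measured costs, pick the next gains with an acquisition function, evaluate, repeat. The sketch below substitutes a simple lower-confidence-bound acquisition for Entropy Search and a toy analytic cost for the experimental performance objective; every function and constant here is illustrative, not from the paper.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Zero-mean GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

# Hypothetical "experimental" cost of a scalar controller gain k
# (in the paper this is measured on the real robot).
def cost(k):
    return (k - 0.6) ** 2 + 0.05 * np.sin(15 * k)

grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.1, 0.9])                 # initial gains to try
y = np.array([cost(k) for k in X])
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    lcb = mu - 2.0 * np.sqrt(var)        # lower-confidence-bound acquisition
    k_next = grid[np.argmin(lcb)]
    X, y = np.append(X, k_next), np.append(y, cost(k_next))

best = X[np.argmin(y)]                   # best gain found so far
```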


Depth-based Object Tracking Using a Robust Gaussian Filter

Issac, J., Wüthrich, M., Garcia Cifuentes, C., Bohg, J., Trimpe, S., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2016, IEEE, IEEE International Conference on Robotics and Automation, May 2016 (inproceedings)

Abstract
We consider the problem of model-based 3D tracking of objects given dense depth images as input. Two difficulties preclude the application of a standard Gaussian filter to this problem. First of all, depth sensors are characterized by fat-tailed measurement noise. To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand. Thereby, we avoid using heuristic outlier detection methods that simply reject measurements if they do not match the model. Secondly, the computational cost of the standard Gaussian filter is prohibitive due to the high-dimensional measurement, i.e. the depth image. To address this problem, we propose an approximation to reduce the computational complexity of the filter. In quantitative experiments on real data we show how our method clearly outperforms the standard Gaussian filter. Furthermore, we compare its performance to a particle-filter-based tracking method, and observe comparable computational efficiency and improved accuracy and smoothness of the estimates.

am ics

Video Bayesian Object Tracking Library Bayesian Filtering Framework Object Tracking Dataset link (url) DOI Project Page [BibTex]



Batch Bayesian Optimization via Local Penalization

González, J., Dai, Z., Hennig, P., Lawrence, N.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 648-657, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C.), May 2016 (conference)

ei pn

link (url) Project Page [BibTex]



Communication Rate Analysis for Event-based State Estimation

(Best student paper finalist)

Ebner, S., Trimpe, S.

In Proceedings of the 13th International Workshop on Discrete Event Systems, May 2016 (inproceedings)

am ics

PDF DOI [BibTex]



Probabilistic Approximate Least-Squares

Bartels, S., Hennig, P.

Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), 51, pages: 676-684, JMLR Workshop and Conference Proceedings, (Editors: Gretton, A. and Robert, C. C. ), May 2016 (conference)

Abstract
Least-squares and kernel-ridge / Gaussian process regression are among the foundational algorithms of statistics and machine learning. Famously, the worst-case cost of exact nonparametric regression grows cubically with the data-set size; but a growing number of approximations have been developed that estimate good solutions at lower cost. These algorithms typically return point estimators, without measures of uncertainty. Leveraging recent results casting elementary linear algebra operations as probabilistic inference, we propose a new approximate method for nonparametric least-squares that affords a probabilistic uncertainty estimate over the error between the approximate and exact least-squares solution (this is not the same as the posterior variance of the associated Gaussian process regressor). This allows estimating the error of the least-squares solution on a subset of the data relative to the full-data solution. The uncertainty can be used to control the computational effort invested in the approximation. Our algorithm has linear cost in the data-set size, and a simple formal form, so that it can be implemented with a few lines of code in programming languages with linear algebra functionality.

ei pn

link (url) Project Page Project Page [BibTex]



Appealing female avatars from 3D body scans: Perceptual effects of stylization

Fleming, R., Mohler, B., Romero, J., Black, M. J., Breidt, M.

In 11th Int. Conf. on Computer Graphics Theory and Applications (GRAPP), February 2016 (inproceedings)

Abstract
Advances in 3D scanning technology allow us to create realistic virtual avatars from full body 3D scan data. However, negative reactions to some realistic computer generated humans suggest that this approach might not always provide the most appealing results. Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%.

ps

pdf Project Page [BibTex]



Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted

avg ps

pdf suppmat Project Page [BibTex]



Reconstructing Articulated Rigged Models from RGB-D Videos

Tzionas, D., Gall, J.

In European Conference on Computer Vision Workshops 2016 (ECCVW’16) - Workshop on Recovering 6D Object Pose (R6D’16), pages: 620-633, Springer International Publishing, 2016 (inproceedings)

Abstract
Although commercial and open-source software exist to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, embedded skeleton, and skinning weights.

ps

pdf suppl Project's Website YouTube link (url) DOI [BibTex]



On the Effects of Measurement Uncertainty in Optimal Control of Contact Interactions

Ponton, B., Schaal, S., Righetti, L.

In The 12th International Workshop on the Algorithmic Foundations of Robotics WAFR, Berkeley, USA, 2016 (inproceedings)

Abstract
Stochastic Optimal Control (SOC) typically considers noise only in the process model, i.e. unknown disturbances. However, in many robotic applications involving interaction with the environment, such as locomotion and manipulation, uncertainty also comes from lack of precise knowledge of the world, which is not an actual disturbance. We analyze the effects of also considering noise in the measurement model, by developing a SOC algorithm based on risk-sensitive control, that includes the dynamics of an observer in such a way that the control law explicitly depends on the current measurement uncertainty. In simulation results on a simple 2D manipulator, we have observed that measurement uncertainty leads to low impedance behaviors, a result in contrast with the effects of process noise that creates stiff behaviors. This suggests that taking into account measurement uncertainty could be a potentially very interesting way to approach problems involving uncertain contact interactions.

am mg

link (url) [BibTex]



A Convex Model of Momentum Dynamics for Multi-Contact Motion Generation

Ponton, B., Herzog, A., Schaal, S., Righetti, L.

In 2016 IEEE-RAS 16th International Conference on Humanoid Robots Humanoids, pages: 842-849, IEEE, Cancun, Mexico, 2016 (inproceedings)

Abstract
Linear models for control and motion generation of humanoid robots have received significant attention in the past years, not only due to their well known theoretical guarantees, but also because of practical computational advantages. However, to tackle more challenging tasks and scenarios such as locomotion on uneven terrain, a more expressive model is required. In this paper, we are interested in contact interaction-centered motion optimization based on the momentum dynamics model. This model is non-linear and non-convex; however, we find a relaxation of the problem that allows us to formulate it as a single convex quadratically-constrained quadratic program (QCQP) that can be very efficiently optimized and is useful for multi-contact planning. This convex model is then coupled to the optimization of end-effector contact locations using a mixed integer program, which can also be efficiently solved. This becomes relevant e.g. to recover from external pushes, where a predefined stepping plan is likely to fail and an online adaptation of the contact location is needed. The performance of our algorithm is demonstrated in several multi-contact scenarios for a humanoid robot.

am mg

link (url) DOI [BibTex]



Inertial Sensor-Based Humanoid Joint State Estimation

Rotella, N., Mason, S., Schaal, S., Righetti, L.

In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages: 1825-1831, IEEE, Stockholm, Sweden, 2016 (inproceedings)

Abstract
This work presents methods for the determination of a humanoid robot's joint velocities and accelerations directly from link-mounted Inertial Measurement Units (IMUs) each containing a three-axis gyroscope and a three-axis accelerometer. No information about the global pose of the floating base or its links is required and precise knowledge of the link IMU poses is not necessary due to presented calibration routines. Additionally, a filter is introduced to fuse gyroscope angular velocities with joint position measurements and compensate the computed joint velocities for time-varying gyroscope biases. The resulting joint velocities are subject to less noise and delay than filtered velocities computed from numerical differentiation of joint potentiometer signals, leading to superior performance in joint feedback control as demonstrated in experiments performed on a SARCOS hydraulic humanoid.

am mg

link (url) DOI [BibTex]

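The fusion idea above — combine a rate measurement carrying an unknown bias with an accurate position measurement, and let a filter estimate the bias — can be sketched for a single joint. This is a strong simplification with a constant bias, synthetic data, and a gyroscope that directly measures the joint rate; the paper handles link-mounted IMUs, time-varying biases, and calibration of the IMU poses.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 5000
t = np.arange(T) * dt
theta = 0.5 * np.sin(2 * np.pi * t)              # true joint angle
omega = np.gradient(theta, dt)                   # true joint velocity
gyro = omega + 0.2 + rng.normal(0.0, 0.05, T)    # rate sensor with bias 0.2 rad/s
enc = theta + rng.normal(0.0, 0.002, T)          # joint position sensor

# Kalman filter. State: [angle, gyro bias]; input: gyro rate;
# measurement: encoder angle.
x, P = np.array([0.0, 0.0]), np.eye(2)
A = np.array([[1.0, -dt], [0.0, 1.0]])
B = np.array([dt, 0.0])
Q = np.diag([1e-8, 1e-8])
H = np.array([[1.0, 0.0]])
R = np.array([[0.002 ** 2]])
vel_est = np.empty(T)
for k in range(T):
    x = A @ x + B * gyro[k]                          # predict with gyro rate
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + (K @ (enc[k] - H @ x)).ravel()           # correct with encoder
    P = (np.eye(2) - K @ H) @ P
    vel_est[k] = gyro[k] - x[1]                      # bias-compensated velocity
# x[1] converges toward the true gyro bias, so vel_est tracks the joint
# velocity without differentiating the (noisy) position signal.
```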


Stepping Stabilization Using a Combination of DCM Tracking and Step Adjustment

Khadiv, M., Kleff, S., Herzog, A., Moosavian, S. A. A., Schaal, S., Righetti, L.

In 2016 4th International Conference on Robotics and Mechatronics (ICROM), pages: 130-135, IEEE, Teheran, Iran, 2016 (inproceedings)

Abstract
In this paper, a method for stabilizing the stepping of biped robots through a combination of Divergent Component of Motion (DCM) tracking and step adjustment is proposed. In this method, the DCM trajectory is generated, consistent with the predefined footprints. Furthermore, a swing foot trajectory modification strategy is proposed to adapt the landing point, using the DCM measurement. In order to apply the generated trajectories to the full robot, a Hierarchical Inverse Dynamics (HID) is employed. The HID enables us to use different combinations of the DCM tracking and step adjustment for stabilizing different biped robots. Simulation experiments on two scenarios for two different simulated robots, one with active ankles and the other with passive ankles, are carried out. Simulation results demonstrate the effectiveness of the proposed method for robots with both active and passive ankles.

am mg

link (url) DOI [BibTex]



Structured contact force optimization for kino-dynamic motion generation

Herzog, A., Schaal, S., Righetti, L.

In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 2703-2710, IEEE, Daejeon, South Korea, 2016 (inproceedings)

Abstract
Optimal control approaches in combination with trajectory optimization have recently proven to be a promising control strategy for legged robots. Computationally efficient and robust algorithms were derived using simplified models of the contact interaction between robot and environment, such as the linear inverted pendulum model (LIPM). However, as humanoid robots enter more complex environments, less restrictive models become increasingly important. As we leave the regime of linear models, we need to build dedicated solvers that can compute interaction forces together with consistent kinematic plans for the whole body. In this paper, we address the problem of planning robot motion and interaction forces for legged robots given predefined contact surfaces. The motion generation process is decomposed into two alternating parts computing force and motion plans in coherence. We focus on the properties of the momentum computation leading to sparse optimal control formulations to be exploited by a dedicated solver. In our experiments, we demonstrate that our motion generation algorithm computes consistent contact forces and joint trajectories for our humanoid robot. We also demonstrate the favorable time complexity due to our formulation and composition of the momentum equations.
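The momentum dynamics at the heart of such kino-dynamic formulations are the Newton–Euler equations of the centroidal dynamics; in common (assumed) notation, with CoM position $c$, contact points $p_i$, contact forces $f_i$, and contact torques $\tau_i$:

```latex
\dot{\ell} = m\,g + \sum_i f_i,
\qquad
\dot{k} = \sum_i \big( (p_i - c) \times f_i + \tau_i \big),
\qquad
\ell = m\,\dot{c}.
```

The bilinear cross-product coupling between contact locations and forces is what makes the joint force–motion problem non-convex and motivates alternating between force and motion plans with a structured solver, as the abstract describes.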

am mg

link (url) DOI [BibTex]



Balancing and Walking Using Full Dynamics LQR Control With Contact Constraints

Mason, S., Rotella, N., Schaal, S., Righetti, L.

In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages: 63-68, IEEE, Cancun, Mexico, 2016 (inproceedings)

Abstract
Torque control algorithms which consider robot dynamics and contact constraints are important for creating dynamic behaviors for humanoids. As computational power increases, algorithms tend to also increase in complexity. However, it is not clear how much complexity is really required to create controllers which exhibit good performance. In this paper, we study the capabilities of a simple approach based on contact-consistent LQR controllers designed around key poses to control various tasks on a humanoid robot. We present extensive experimental results on a hydraulic, torque-controlled humanoid performing balancing and stepping tasks. This feedback control approach captures the necessary synergies between the DoFs of the robot to guarantee good control performance. We show that for the considered tasks, it is only necessary to re-linearize the dynamics of the robot at different contact configurations and that increasing the number of LQR controllers along desired trajectories does not improve performance. Our results suggest that very simple controllers can yield good performance competitive with current state-of-the-art, but more complex, optimization-based whole-body controllers. A video of the experiments can be found at https://youtu.be/5T08CNKV1hw.
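The LQR machinery behind this approach can be illustrated on a toy linear system. The sketch below computes an infinite-horizon discrete-time LQR gain by Riccati iteration for a double integrator; this is a generic illustration of the controller synthesis, not the paper's implementation, which linearizes the full contact-constrained robot dynamics around key poses.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=1000):
    """Infinite-horizon discrete-time LQR gain via Riccati iteration.
    Returns K such that u = -K x stabilizes x[k+1] = A x[k] + B u[k]."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)  # (R + B'PB)^-1 B'PA
        P = Q + A.T @ P @ (A - B @ K)              # Riccati update
    return K

# Toy "linearized dynamics": a double integrator discretized at dt = 0.01.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# The closed-loop matrix A - B K has all eigenvalues inside the unit circle.
```

In the paper's setting, one such gain is computed per contact configuration; the result cited above is that re-linearizing only at contact changes already suffices for good performance.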

am mg

link (url) DOI [BibTex]



Step Timing Adjustment: a Step toward Generating Robust Gaits

Khadiv, M., Herzog, A., Moosavian, S. A. A., Righetti, L.

In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages: 35-42, IEEE, Cancun, Mexico, 2016 (inproceedings)

Abstract
Step adjustment for humanoid robots has been shown to improve robustness in gaits. However, step duration adaptation is often neglected in control strategies. In this paper, we propose an approach that combines both step location and timing adjustment for generating robust gaits. In this approach, step location and step timing are decided based on feedback from the current state of the robot. The proposed approach is comprised of two stages. In the first stage, the nominal step location and step duration for the next step or a previewed number of steps are specified. In this stage, which is carried out at the start of each step, the main goal is to specify the best step length and step duration for a desired walking speed. The second stage deals with finding the best landing point and landing time of the swing foot at each control cycle. In this stage, stability of the gait is preserved by specifying a desired offset between the swing foot landing point and the Divergent Component of Motion (DCM) at the end of the current step. After specifying the landing point of the swing foot at a desired time, the swing foot trajectory is regenerated at each control cycle to realize the desired landing properties. Simulations on different scenarios show the robustness of the gaits generated by our proposed approach compared to the case where no timing adjustment is employed.
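The desired-offset rule described above can be sketched as follows. Symbol names and the sign convention u = xi(T) - b_nom are assumptions for illustration, not taken from the paper.

```python
import math

def next_footstep(xi0, p, omega, T, b_nom):
    """Landing point that keeps a nominal offset b_nom between the foothold
    and the DCM predicted at the end of the current step of duration T,
    using the closed-form LIP evolution xi(T) = p + (xi0 - p)*exp(omega*T)."""
    xi_T = p + (xi0 - p) * math.exp(omega * T)  # DCM at the end of the step
    return xi_T - b_nom

# Example with omega chosen so that exp(omega * T) = 2 for T = 1 s.
u = next_footstep(0.1, 0.0, math.log(2.0), 1.0, 0.05)  # 0.2 - 0.05 = 0.15
```

Because xi(T) depends exponentially on T, the step duration is itself a powerful decision variable, which is the paper's central point: adapting T as well as u enlarges the set of disturbances the gait can absorb.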

mg

link (url) DOI [BibTex]


2009


Modelling the interplay of central pattern generation and sensory feedback in the neuromuscular control of running

Daley, M., Righetti, L., Ijspeert, A.

In Comparative Biochemistry and Physiology – Part A: Molecular & Integrative Physiology, 153 (Annual Main Meeting of the Society for Experimental Biology), Glasgow, Scotland, 2009 (inproceedings)

mg

link (url) DOI [BibTex]


2006


Movement generation using dynamical systems: a humanoid robot performing a drumming task

Degallier, S., Santos, C. P., Righetti, L., Ijspeert, A.

In 2006 6th IEEE-RAS International Conference on Humanoid Robots, pages: 512-517, IEEE, Genova, Italy, 2006 (inproceedings)

Abstract
The online generation of trajectories in humanoid robots remains a difficult problem. In this contribution, we present a system that allows the superposition, and the switch between, discrete and rhythmic movements. Our approach uses nonlinear dynamical systems for generating trajectories online and in real time. Our goal is to make use of attractor properties of dynamical systems in order to provide robustness against small perturbations and to enable online modulation of the trajectories. The system is demonstrated on a humanoid robot performing a drumming task.
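The attractor-based trajectory generation described above is commonly built from limit-cycle oscillators. A minimal sketch using a Hopf oscillator, a standard choice in this line of work (not necessarily the paper's exact system):

```python
import math

def hopf_step(x, y, mu, omega, dt):
    """One explicit-Euler step of a Hopf oscillator. Its stable limit cycle
    (radius sqrt(mu), angular frequency omega) attracts nearby trajectories,
    which is the robustness-to-perturbation property exploited for online
    rhythmic trajectory generation."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

# Start far from the limit cycle; the state settles onto radius ~ sqrt(mu).
x, y = 0.1, 0.0
for _ in range(20000):
    x, y = hopf_step(x, y, mu=1.0, omega=2.0 * math.pi, dt=0.001)
radius = math.sqrt(x * x + y * y)
```

Because mu and omega can be changed on the fly while the attractor keeps the state bounded, amplitude and frequency of the generated rhythm can be modulated online, which is what the drumming task demonstrates.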

mg

link (url) DOI [BibTex]



Design methodologies for central pattern generators: an application to crawling humanoids

Righetti, L., Ijspeert, A.

In Proceedings of Robotics: Science and Systems, Philadelphia, USA, August 2006 (inproceedings)

mg

link (url) DOI [BibTex]



Programmable central pattern generators: an application to biped locomotion control

Righetti, L., Ijspeert, A.

In Proceedings of the IEEE International Conference on Robotics and Automation, 2006. ICRA 2006., pages: 1585-1590, IEEE, 2006 (inproceedings)

mg

[BibTex]


2004


Operating system support for interface virtualisation of reconfigurable coprocessors

Vuletic, M., Righetti, L., Pozzi, L., Ienne, P.

In Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, pages: 748-749, IEEE, Paris, France, 2004 (inproceedings)

Abstract
Reconfigurable systems-on-chip (SoCs) consist of large field-programmable gate arrays (FPGAs) and standard processors. The reconfigurable logic can be used for application-specific coprocessors to speed up the execution of applications. Widespread use is limited by the complexity of interfacing software applications with coprocessors. We present a virtualization layer that lowers the interfacing complexity and improves portability. The layer shifts the burden of moving data between processor and coprocessor from the programmer to the operating system (OS). A reconfigurable SoC running Linux is used to prove the concept.

mg

link (url) DOI [BibTex]


2003


Evolution of Fault-tolerant Self-replicating Structures

Righetti, L., Shokur, S., Capcarrère, M.

In Advances in Artificial Life, pages: 278-288, Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2003 (inproceedings)

Abstract
Designed and evolved self-replicating structures in cellular automata have been extensively studied in the past as models of Artificial Life. However, CAs, unlike their biological counterparts, are very brittle: any faulty cell usually leads to the complete destruction of any emerging structures, let alone self-replicating structures. A way to design fault-tolerant structures based on error-correcting codes was presented recently [1], but it required cumbersome work to put into practice. In this paper, we return to the original inspiration for these works, nature, and propose a way to evolve self-replicating structures, faults here being only an idiosyncrasy of the environment.

mg

link (url) DOI [BibTex]
