

2020


Label Efficient Visual Abstractions for Autonomous Driving

Behl, A., Chitta, K., Prakash, A., Ohn-Bar, E., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
It is well known that semantic segmentation can be used as an effective intermediate representation for learning driving policies. However, the task of street scene semantic segmentation requires expensive annotations. Furthermore, segmentation algorithms are often trained irrespective of the actual driving task, using auxiliary image-space loss functions which are not guaranteed to maximize driving metrics such as safety or distance traveled per intervention. In this work, we seek to quantify the impact of reducing segmentation annotation costs on learned behavior cloning agents. We analyze several segmentation-based intermediate representations. We use these visual abstractions to systematically study the trade-off between annotation efficiency and driving performance, i.e., the types of classes labeled, the number of image samples used to learn the visual abstraction model, and their granularity (e.g., object masks vs. 2D bounding boxes). Our analysis uncovers several practical insights into how segmentation-based visual abstractions can be exploited in a more label-efficient manner. Surprisingly, we find that state-of-the-art driving performance can be achieved with orders of magnitude reduction in annotation cost. Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.

pdf slides video Project Page [BibTex]


3D Morphable Face Models - Past, Present and Future

Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., Bolkart, T., Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.

ACM Transactions on Graphics, September 2020 (article)

Abstract
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.

project page pdf preprint [BibTex]


Vision-based Force Estimation for a da Vinci Instrument Using Deep Neural Networks

Lee, Y., Husin, H. M., Forte, M. P., Lee, S., Kuchenbecker, K. J.

Extended abstract presented as an Emerging Technology ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Cleveland, Ohio, USA, August 2020 (misc) Accepted

[BibTex]


Event-triggered Learning

Solowjow, F., Trimpe, S.

Automatica, 117, Elsevier, July 2020 (article)

arXiv PDF DOI Project Page [BibTex]


Learning of sub-optimal gait controllers for magnetic walking soft millirobots

Culha, U., Demir, S. O., Trimpe, S., Sitti, M.

In Proceedings of Robotics: Science and Systems, July 2020, Culha and Demir are equally contributing authors (inproceedings)

Abstract
Untethered small-scale soft robots have promising applications in minimally invasive surgery, targeted drug delivery, and bioengineering, as they can access confined spaces in the human body. However, due to highly nonlinear soft continuum deformation kinematics, inherent stochastic variability during fabrication at the small scale, and lack of accurate models, the conventional control methods cannot be easily applied. Adaptivity of robot control is additionally crucial for medical operations, as operation environments show large variability, and robot materials may degrade or change over time, which would have deteriorating effects on the robot motion and task performance. Therefore, we propose using a probabilistic learning approach for millimeter-scale magnetic walking soft robots using Bayesian optimization (BO) and Gaussian processes (GPs). Our approach provides a data-efficient learning scheme to find controller parameters while optimizing the stride length performance of the walking soft millirobot within a small number of physical experiments. We demonstrate adaptation to fabrication variabilities in three different robots and to walking surfaces with different roughness. We also show an improvement in the learning performance by transferring the learning results of one robot to the others as prior information.

link (url) DOI [BibTex]


Actively Learning Gaussian Process Dynamics

Buisson-Fenet, M., Solowjow, F., Trimpe, S.

2nd Annual Conference on Learning for Dynamics and Control, June 2020 (conference) Accepted

Abstract
Despite the availability of ever more data enabled through modern sensor and computer technology, it still remains an open problem to learn dynamical systems in a sample-efficient way. We propose active learning strategies that leverage information-theoretical properties arising naturally during Gaussian process regression, while respecting constraints on the sampling process imposed by the system dynamics. Sample points are selected in regions with high uncertainty, leading to exploratory behavior and data-efficient training of the model. All results are verified in an extensive numerical benchmark.
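The sampling rule described in the abstract, querying the system where the Gaussian process is most uncertain, can be illustrated in a few lines of NumPy. This is a hypothetical minimal sketch, not the authors' implementation; it assumes a squared-exponential kernel and ignores the constraints that the system dynamics impose on which sample points are reachable:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between the row vectors of A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def posterior_variance(X_train, X_cand, noise=1e-4):
    # GP predictive variance at candidate inputs; note that it depends only
    # on the training inputs, not on the observed targets.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    return 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)

def next_sample(X_train, X_cand):
    # Active-learning rule: query where the model is most uncertain.
    return X_cand[np.argmax(posterior_variance(X_train, X_cand))]
```

A candidate far from all previous observations has higher predictive variance and is therefore selected next, which produces the exploratory, data-efficient behavior the abstract describes.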

ArXiv [BibTex]


GENTEL: GENerating Training data Efficiently for Learning to segment medical images

Thakur, R. P., Rocamora, S. P., Goel, L., Pohmann, R., Machann, J., Black, M. J.

Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFAIP), June 2020 (conference)

Abstract
Accurately segmenting MRI images is crucial for many clinical applications. However, manually segmenting images with accurate pixel precision is a tedious and time-consuming task. In this paper we present a simple, yet effective method to improve the efficiency of the image segmentation process. We propose to transform the image annotation task into a binary choice task. We start by using classical image processing algorithms with different parameter values to generate multiple, different segmentation masks for each input MRI image. Then, instead of segmenting the pixels of the images, the user only needs to decide whether a segmentation is acceptable or not. This method allows us to efficiently obtain high-quality segmentations with minor human intervention. With the selected segmentations, we train a state-of-the-art neural network model. For the evaluation, we use a second MRI dataset (1.5T Dataset), acquired with a different protocol and containing annotations. We show that the trained network i) is able to automatically segment cases where none of the classical methods obtain a high-quality result; ii) generalizes to the second MRI dataset, which was acquired with a different protocol and was never seen at training time; and iii) enables detection of mis-annotations in this second dataset. Quantitatively, the trained network obtains very good results: Dice score (mean 0.98, median 0.99) and Hausdorff distance in pixels (mean 4.7, median 2.0).
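For reference, the Dice score quoted in the evaluation is the standard overlap measure between a predicted and a ground-truth binary mask. The following is a generic NumPy definition, not code from the paper:

```python
import numpy as np

def dice_score(pred, target):
    # Dice coefficient between two binary segmentation masks (1 = foreground):
    # 2 * |pred AND target| / (|pred| + |target|).
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

Identical masks score 1.0, disjoint masks score 0.0, so the reported mean of 0.98 indicates near-perfect overlap with the annotations.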

[BibTex]


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.


arxiv project page code [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment (e.g. people sitting on the sofa or cooking near the stove), and (2) the generated human-scene interaction be physically feasible such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games and in VR/AR. Our project page for data and code can be found at https://vlg.inf.ethz.ch/projects/PSI/

Code PDF [BibTex]


Simultaneous Calibration Method for Magnetic Localization and Actuation Systems

Sitti, M., Son, D., Dong, X.

June 2020, US Patent App. 16/696,605 (misc)

Abstract
The invention relates to a method of simultaneously calibrating magnetic actuation and sensing systems for a workspace, wherein the actuation system comprises a plurality of magnetic actuators and the sensing system comprises a plurality of magnetic sensors, wherein all the measured data is fed into a calibration model, wherein the calibration model is based on a sensor measurement model and a magnetic actuation model, and wherein a solution of the model parameters is found via a numerical solver in order to calibrate both the actuation and sensing systems at the same time.


[BibTex]


Learning Constrained Dynamics with Gauss Principle adhering Gaussian Processes

Geist, A. R., Trimpe, S.

In 2nd Annual Conference on Learning for Dynamics and Control, June 2020 (inproceedings) Accepted

Abstract
The identification of the constrained dynamics of mechanical systems is often challenging. Learning methods promise to ease the analysis, but require considerable amounts of data for training. We propose to combine insights from analytical mechanics with Gaussian process regression to improve the model's data efficiency and constraint integrity. The result is a Gaussian process model that incorporates a priori constraint knowledge such that its predictions adhere to Gauss' principle of least constraint. In return, predictions of the system's acceleration naturally respect potentially non-ideal (non-)holonomic equality constraints. As corollary results, our model makes it possible to infer the acceleration of the unconstrained system from data of the constrained system, and it enables knowledge transfer between differing constraint configurations.

Arxiv preprint [BibTex]


Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.

Paper [BibTex]


VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE


arXiv code video supplemental video [BibTex]


Bayesian Optimization in Robot Learning - Automatic Controller Tuning and Sample-Efficient Methods

Marco-Valle, A.

University of Tübingen, June 2020 (thesis)

Abstract
The problem of designing controllers to regulate dynamical systems has been studied by engineers during the past millennia. Ever since, suboptimal performance lingers in many closed loops as an unavoidable side effect of manually tuning the parameters of the controllers. Nowadays, industrial settings remain skeptical about data-driven methods that allow one to automatically learn controller parameters. In the context of robotics, machine learning (ML) keeps growing its influence on increasing autonomy and adaptability, for example to aid automating controller tuning. However, data-hungry ML methods, such as standard reinforcement learning, require a large number of experimental samples, prohibitive in robotics, as hardware can deteriorate and break. This brings about the following question: Can manual controller tuning, in robotics, be automated by using data-efficient machine learning techniques? In this thesis, we tackle the question above by exploring Bayesian optimization (BO), a data-efficient ML framework, to buffer the human effort and side effects of manual controller tuning, while retaining a low number of experimental samples. We focus this work in the context of robotic systems, providing thorough theoretical results that aim to increase data-efficiency, as well as demonstrations in real robots. Specifically, we present four main contributions. We first consider using BO to replace manual tuning in robotic platforms. To this end, we parametrize the design weights of a linear quadratic regulator (LQR) and learn its parameters using an information-efficient BO algorithm. Such algorithm uses Gaussian processes (GPs) to model the unknown performance objective. The GP model is used by BO to suggest controller parameters that are expected to increment the information about the optimal parameters, measured as a gain in entropy. 
The resulting “automatic LQR tuning” framework is demonstrated on two robotic platforms: A robot arm balancing an inverted pole and a humanoid robot performing a squatting task. In both cases, an existing controller is automatically improved in a handful of experiments without human intervention. BO compensates for data scarcity by means of the GP, which is a probabilistic model that encodes prior assumptions about the unknown performance objective. Usually, incorrect or non-informed assumptions have negative consequences, such as a higher number of robot experiments, poor tuning performance, or reduced sample-efficiency. The second to fourth contributions presented herein attempt to alleviate this issue. The second contribution proposes to include the robot simulator into the learning loop as an additional information source for automatic controller tuning. While doing a real robot experiment generally entails high associated costs (e.g., they require preparation and take time), simulations are cheaper to obtain (e.g., they can be computed faster). However, because the simulator is an imperfect model of the robot, its information is biased and could have negative repercussions on the learning performance. To address this problem, we propose “simu-vs-real”, a principled multi-fidelity BO algorithm that trades off cheap but inaccurate information from simulations against expensive but accurate physical experiments in a cost-effective manner. The resulting algorithm is demonstrated on a cart-pole system, where simulations and real experiments are alternated, thus sparing many real evaluations. The third contribution explores how to adapt the expressiveness of the probabilistic prior to the control problem at hand. To this end, the mathematical structure of LQR controllers is leveraged and embedded into the GP, by means of the kernel function. Specifically, we propose two different “LQR kernel” designs that retain the flexibility of Bayesian nonparametric learning. 
Simulated results indicate that the LQR kernel outperforms non-informed kernel choices when used for controller learning with BO. Finally, the fourth contribution specifically addresses the problem of handling controller failures, which are typically unavoidable in practice while learning from data, especially if non-conservative solutions are expected. Although controller failures are generally problematic (e.g., the robot has to be emergency-stopped), they are also a rich information source about what should be avoided. We propose “failures-aware excursion search”, a novel algorithm for Bayesian optimization under black-box constraints, where failures are limited in number. Our results in numerical benchmarks indicate that by allowing a confined number of failures, better optima are revealed as compared with state-of-the-art methods. The first contribution of this thesis, “automatic LQR tuning”, is among the first to apply BO to real robots. While it demonstrated automatic controller learning from few experimental samples, it also revealed several important challenges, such as the need for higher sample efficiency, which opened relevant research directions that we addressed through several methodological contributions. Summarizing, we proposed “simu-vs-real”, a novel BO algorithm that includes the simulator as an additional information source, an “LQR kernel” design that learns faster than standard choices, and “failures-aware excursion search”, a new BO algorithm for constrained black-box optimization problems, where the number of failures is limited.


Repository (Universitätsbibliothek) - University of Tübingen PDF DOI [BibTex]


Statistical reprogramming of macroscopic self-assembly with dynamic boundaries

Culha, U., Davidson, Z. S., Mastrangeli, M., Sitti, M.

Proceedings of the National Academy of Sciences, 117(21):11306-11313, May 2020 (article)

Abstract
Self-assembly is a ubiquitous process that can generate complex and functional structures via local interactions among a large set of simpler components. The ability to program the self-assembly pathway of component sets elucidates fundamental physics and enables alternative competitive fabrication technologies. Reprogrammability offers further opportunities for tuning structural and material properties but requires reversible selection from multistable self-assembling patterns, which remains a challenge. Here, we show statistical reprogramming of two-dimensional (2D), noncompact self-assembled structures by the dynamic confinement of orbitally shaken and magnetically repulsive millimeter-scale particles. Under a constant shaking regime, we control the rate of radius change of an assembly arena via moving hard boundaries and select among a finite set of self-assembled patterns repeatably and reversibly. By temporarily trapping particles in topologically identified stable states, we also demonstrate 2D reprogrammable stiffness and three-dimensional (3D) magnetic clutching of the self-assembled structures. Our reprogrammable system has prospective implications for the design of granular materials in a multitude of physical scales where out-of-equilibrium self-assembly can be realized with different numbers or types of particles. Our dynamic boundary regulation may also enable robust bottom-up control strategies for novel robotic assembly applications by designing more complex spatiotemporal interactions using mobile robots.

DOI [BibTex]


Data-efficient Auto-tuning with Bayesian Optimization: An Industrial Control Study

Neumann-Brosig, M., Marco, A., Schwarzmann, D., Trimpe, S.

IEEE Transactions on Control Systems Technology, 28(3):730-740, May 2020 (article)

Abstract
Bayesian optimization is proposed for automatic learning of optimal controller parameters from experimental data. A probabilistic description (a Gaussian process) is used to model the unknown function from controller parameters to a user-defined cost. The probabilistic model is updated with data, which is obtained by testing a set of parameters on the physical system and evaluating the cost. In order to learn fast, the Bayesian optimization algorithm selects the next parameters to evaluate in a systematic way, for example, by maximizing information gain about the optimum. The algorithm thus iteratively finds the globally optimal parameters with only few experiments. Taking throttle valve control as a representative industrial control example, the proposed auto-tuning method is shown to outperform manual calibration: it consistently achieves better performance with a low number of experiments. The proposed auto-tuning framework is flexible and can handle different control structures and objectives.
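The loop described above (fit a Gaussian process to the costs observed so far, then pick the next controller parameters with an acquisition function) can be sketched as follows. This is a toy illustration under simplifying assumptions: a 1-D parameter grid, a fixed squared-exponential kernel, and a lower-confidence-bound acquisition rather than the information-gain criterion used in the paper:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel for scalar controller parameters.
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_posterior(x, y, xq, noise=1e-6):
    # Zero-mean GP posterior mean and variance at query points xq.
    K = rbf(x, x) + noise * np.eye(len(x))
    Ks = rbf(x, xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 0.0)

def tune(cost, grid, n_iters=15, beta=2.0, seed=0):
    # Bayesian optimization: each call to cost() stands in for one
    # experiment on the physical system.
    rng = np.random.default_rng(seed)
    x = rng.choice(grid, size=2, replace=False)  # two initial experiments
    y = np.array([cost(v) for v in x])
    for _ in range(n_iters):
        mu, var = gp_posterior(x, y, grid)
        x_next = grid[np.argmin(mu - beta * np.sqrt(var))]  # lower confidence bound
        x = np.append(x, x_next)
        y = np.append(y, cost(x_next))
    return x[np.argmin(y)]  # best parameters found so far
```

On a toy quadratic cost, the loop first explores the parameter range and then concentrates its evaluations near the optimum, mirroring the few-experiment behavior reported for throttle valve control.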

arXiv (PDF) DOI Project Page [BibTex]


General Movement Assessment from videos of computed 3D infant body models is equally effective compared to conventional RGB Video rating

Schroeder, S., Hesse, N., Weinberger, R., Tacke, U., Gerstl, L., Hilgendorff, A., Heinen, F., Arens, M., Bodensteiner, C., Dijkstra, L. J., Pujades, S., Black, M., Hadders-Algra, M.

Early Human Development, 144, May 2020 (article)

Abstract
Background: General Movement Assessment (GMA) is a powerful tool to predict Cerebral Palsy (CP). Yet, GMA requires substantial training hampering its implementation in clinical routine. This inspired a world-wide quest for automated GMA. Aim: To test whether a low-cost, marker-less system for three-dimensional motion capture from RGB depth sequences using a whole body infant model may serve as the basis for automated GMA. Study design: Clinical case study at an academic neurodevelopmental outpatient clinic. Subjects: Twenty-nine high-risk infants were recruited and assessed at their clinical follow-up at 2-4 month corrected age (CA). Their neurodevelopmental outcome was assessed regularly up to 12-31 months CA. Outcome measures: GMA according to Hadders-Algra by a masked GMA-expert of conventional and computed 3D body model (“SMIL motion”) videos of the same GMs. Agreement between both GMAs was assessed, and sensitivity and specificity of both methods to predict CP at ≥12 months CA. Results: The agreement of the two GMA ratings was substantial, with κ=0.66 for the classification of definitely abnormal (DA) GMs and an ICC of 0.887 (95% CI 0.762;0.947) for a more detailed GM-scoring. Five children were diagnosed with CP (four bilateral, one unilateral CP). The GMs of the child with unilateral CP were twice rated as mildly abnormal. DA-ratings of both videos predicted bilateral CP well: sensitivity 75% and 100%, specificity 88% and 92% for conventional and SMIL motion videos, respectively. Conclusions: Our computed infant 3D full body model is an attractive starting point for automated GMA in infants at risk of CP.

DOI [BibTex]


Physical Variables Underlying Tactile Stickiness during Fingerpad Detachment

Nam, S., Vardar, Y., Gueorguiev, D., Kuchenbecker, K. J.

Frontiers in Neuroscience, 14(235):1-14, April 2020 (article)

Abstract
One may notice a relatively wide range of tactile sensations even when touching the same hard, flat surface in similar ways. Little is known about the reasons for this variability, so we decided to investigate how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. We conducted a psychophysical experiment in which nine participants actively pressed their finger on a flat glass plate with a normal force close to 1.5 N and detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse.


link (url) DOI Project Page [BibTex]


Learning Multi-Human Optical Flow

Ranjan, A., Hoffmann, D. T., Tzionas, D., Tang, S., Romero, J., Black, M. J.

International Journal of Computer Vision (IJCV), 128:873-890, April 2020 (article)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single-and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.

Paper Publisher Version poster link (url) DOI [BibTex]


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
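The ex-post density estimation step described above admits a compact sketch: after training a deterministic autoencoder, fit a simple density to its latent codes and sample from that density for generation. The fragment below fits a full-covariance Gaussian (the paper also evaluates richer estimators such as mixture models); the encoder and decoder themselves are left abstract, so this is only an illustrative skeleton under those assumptions:

```python
import numpy as np

def fit_latent_gaussian(Z, jitter=1e-6):
    # Ex-post density estimation: fit a full-covariance Gaussian to the
    # latent codes Z (shape: n_samples x latent_dim) produced by an
    # already-trained deterministic autoencoder.
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False) + jitter * np.eye(Z.shape[1])
    return mu, cov

def sample_latents(mu, cov, n, seed=0):
    # Draw new latent codes; passing them through the frozen decoder
    # yields new data samples.
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mu, cov, size=n)
```

Because the density is fit after training, the same step can also be applied to the latent space of an existing VAE to improve its sample quality, as the abstract notes.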

ei ps

arXiv [BibTex]



A Fabric-Based Sensing System for Recognizing Social Touch

Burns, R. B., Lee, H., Seifi, H., Kuchenbecker, K. J.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, Washington, DC, USA, March 2020 (misc)

Abstract
We present a fabric-based piezoresistive tactile sensor system designed to detect social touch gestures on a robot. The unique sensor design utilizes three layers of low-conductivity fabric sewn together on alternating edges to form an accordion pattern and secured between two outer high-conductivity layers. This five-layer design demonstrates a greater resistance range and better low-force sensitivity than previous designs that use one layer of low-conductivity fabric with or without a plastic mesh layer. An individual sensor from our system can presently identify six different communication gestures – squeezing, patting, scratching, poking, hand resting without movement, and no touch – with an average accuracy of 90%. A layer of foam can be added beneath the sensor to make a rigid robot more appealing for humans to touch without inhibiting the system’s ability to register social touch gestures.
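
As a toy illustration of gesture recognition from a single resistive channel (all signals, features, and gesture models below are invented for the example, not the authors' pipeline), simple time-series features plus a nearest-centroid rule already separate synthetic gesture traces:

```python
import numpy as np

# Invented demo: classify touch gestures from a 1-D "resistance" trace
# using hand-crafted features and nearest-centroid matching.

rng = np.random.default_rng(0)

def features(trace):
    # mean level, variability, and low-frequency oscillation energy
    return np.array([trace.mean(), trace.std(),
                     np.abs(np.fft.rfft(trace))[1:6].sum()])

def synth(gesture, n=64):
    t = np.linspace(0, 1, n)
    if gesture == "squeeze":   # one slow, deep press
        return 1.0 - 0.8 * np.sin(np.pi * t) + 0.02 * rng.normal(size=n)
    if gesture == "pat":       # repeated light taps
        return 1.0 - 0.4 * (np.sin(8 * np.pi * t) > 0.5) + 0.02 * rng.normal(size=n)
    return 1.0 + 0.02 * rng.normal(size=n)   # "none": no touch

gestures = ["squeeze", "pat", "none"]
centroids = {g: np.mean([features(synth(g)) for _ in range(30)], axis=0)
             for g in gestures}

def classify(trace):
    f = features(trace)
    return min(gestures, key=lambda g: np.linalg.norm(f - centroids[g]))

print(classify(synth("pat")))
```

A real system, as in the paper, would use calibrated sensor data and a trained classifier; the sketch only shows the feature-then-classify structure.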

hi

Project Page [BibTex]



Do Touch Gestures Affect How Electrovibration Feels?

Vardar, Y., Kuchenbecker, K. J.

Hands-on demonstration (1 page) presented at the IEEE Haptics Symposium, Washington, DC, USA, March 2020 (misc)

hi

[BibTex]



Gripping apparatus and method of producing a gripping apparatus

Song, S., Sitti, M., Drotlef, D., Majidi, C.

Google Patents, February 2020, US Patent App. 16/610,209 (patent)

Abstract
The present invention relates to a gripping apparatus comprising a membrane; a flexible housing; with said membrane being fixedly connected to a periphery of the housing. The invention further relates to a method of producing a gripping apparatus.

pi

[BibTex]



Learning to Predict Perceptual Distributions of Haptic Adjectives

Richardson, B. A., Kuchenbecker, K. J.

Frontiers in Neurorobotics, 13(116):1-16, February 2020 (article)

Abstract
When humans touch an object with their fingertips, they can immediately describe its tactile properties using haptic adjectives, such as hardness and roughness; however, human perception is subjective and noisy, with significant variation across individuals and interactions. Recent research has worked to provide robots with similar haptic intelligence but was focused on identifying binary haptic adjectives, ignoring both attribute intensity and perceptual variability. Combining ordinal haptic adjective labels gathered from human subjects for a set of 60 objects with features automatically extracted from raw multi-modal tactile data collected by a robot repeatedly touching the same objects, we designed a machine-learning method that incorporates partial knowledge of the distribution of object labels into training; then, from a single interaction, it predicts a probability distribution over the set of ordinal labels. In addition to analyzing the collected labels (10 basic haptic adjectives) and demonstrating the quality of our method's predictions, we hold out specific features to determine the influence of individual sensor modalities on the predictive performance for each adjective. Our results demonstrate the feasibility of modeling both the intensity and the variation of haptic perception, two crucial yet previously neglected components of human haptic perception.
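
The central idea of predicting a probability distribution over ordinal labels, rather than a single binary adjective, can be illustrated with a small sketch (the rating data and numbers below are invented, not from the paper):

```python
import numpy as np

# Invented example: several subjects rate one object on a 5-level ordinal
# scale for an adjective such as "rough". The ratings define an empirical
# target distribution; a model's predicted distribution is scored against
# it with cross-entropy.

LEVELS = 5                                 # ordinal levels 0..4
ratings = np.array([2, 3, 3, 4, 3, 2])     # hypothetical subject labels

# Empirical target distribution with add-one smoothing.
counts = np.bincount(ratings, minlength=LEVELS).astype(float)
target = (counts + 1.0) / (counts.sum() + LEVELS)

def cross_entropy(pred, target, eps=1e-12):
    return float(-(target * np.log(pred + eps)).sum())

uniform = np.full(LEVELS, 1.0 / LEVELS)
sharp = np.array([0.05, 0.15, 0.45, 0.25, 0.10])  # peaked near the data

# A prediction that matches the perceptual variability scores better.
print(cross_entropy(sharp, target) < cross_entropy(uniform, target))
```

This captures the two components the abstract emphasizes: attribute intensity (where the distribution peaks) and perceptual variability (how spread out it is).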

hi

DOI Project Page [BibTex]


Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

ps

pdf [BibTex]



Exercising with Baxter: Preliminary Support for Assistive Social-Physical Human-Robot Interaction

Fitter, N. T., Mohan, M., Kuchenbecker, K. J., Johnson, M. J.

Journal of NeuroEngineering and Rehabilitation, 17(19), February 2020 (article)

Abstract
Background: The worldwide population of older adults will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active at home. Methods: Building on related literature as well as guidance from experts in game design, rehabilitation, and physical and occupational therapy, we developed eight human-robot exercise games for the Baxter Research Robot, six of which involve physical human-robot contact. After extensive iteration, these games were tested in an exploratory user study including 20 younger adult and 20 older adult users. Results: Only socially and physically interactive games fell in the highest ranges for pleasantness, enjoyment, engagement, cognitive challenge, and energy level. Our games successfully spanned three different physical, cognitive, and temporal challenge levels. User trust and confidence in Baxter increased significantly between pre- and post-study assessments. Older adults experienced higher exercise, energy, and engagement levels than younger adults, and women rated the robot more highly than men on several survey questions. Conclusions: The results indicate that social-physical exercise with a robot is more pleasant, enjoyable, engaging, cognitively challenging, and energetic than similar interactions that lack physical touch. In addition to this main finding, researchers working in similar areas can build on our design practices, our open-source resources, and the age-group and gender differences that we found.

hi

DOI Project Page [BibTex]



Sliding Mode Control with Gaussian Process Regression for Underwater Robots

Lima, G. S., Trimpe, S., Bessa, W. M.

Journal of Intelligent & Robotic Systems, January 2020 (article)

ics

DOI [BibTex]



Hierarchical Event-triggered Learning for Cyclically Excited Systems with Application to Wireless Sensor Networks

Beuchert, J., Solowjow, F., Raisch, J., Trimpe, S., Seel, T.

IEEE Control Systems Letters, 4(1):103-108, January 2020 (article)

ics

arXiv PDF DOI Project Page [BibTex]



Real Time Trajectory Prediction Using Deep Conditional Generative Models

Gomez-Gonzalez, S., Prokudin, S., Schölkopf, B., Peters, J.

IEEE Robotics and Automation Letters, 5(2):970-976, IEEE, January 2020 (article)

ei ps

arXiv DOI [BibTex]



Method of actuating a shape changeable member, shape changeable member and actuating system

Hu, W., Lum, G. Z., Mastrangeli, M., Sitti, M.

Google Patents, January 2020, US Patent App. 16/477,593 (patent)

Abstract
The present invention relates to a method of actuating a shape changeable member of actuatable material. The invention further relates to a shape changeable member and to a system comprising such a shape changeable member and a magnetic field apparatus.

pi

[BibTex]


Control-guided Communication: Efficient Resource Arbitration and Allocation in Multi-hop Wireless Control Systems

Baumann, D., Mager, F., Zimmerling, M., Trimpe, S.

IEEE Control Systems Letters, 4(1):127-132, January 2020 (article)

ics

arXiv PDF DOI [BibTex]



Self-supervised motion deblurring

Liu, P., Janai, J., Pollefeys, M., Sattler, T., Geiger, A.

IEEE Robotics and Automation Letters, 2020 (article)

Abstract
Motion blurry images challenge many computer vision algorithms, e.g., feature detection, motion estimation, or object recognition. Deep convolutional neural networks are state-of-the-art for image deblurring. However, obtaining training data with corresponding sharp and blurry image pairs can be difficult. In this paper, we present a differentiable reblur model for self-supervised motion deblurring, which enables the network to learn from real-world blurry image sequences without relying on sharp images for supervision. Our key insight is that motion cues obtained from consecutive images yield sufficient information to inform the deblurring task. We therefore formulate deblurring as an inverse rendering problem, taking into account the physical image formation process: we first predict two deblurred images from which we estimate the corresponding optical flow. Using these predictions, we re-render the blurred images and minimize the difference with respect to the original blurry inputs. We use both synthetic and real datasets for experimental evaluations. Our experiments demonstrate that self-supervised single image deblurring is really feasible and leads to visually compelling results.
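
The reblur-consistency idea can be sketched with a toy model in which a blurry image is the average of a sharp image translated along its motion; the pure horizontal shift below stands in for the optical flow estimated in the paper, and none of this is the authors' actual network:

```python
import numpy as np

# Toy reblur model: blur = average of the sharp image warped along a
# (here: purely horizontal) motion trajectory. With it, a self-supervised
# loss compares the re-rendered blur to the observed blurry input.

rng = np.random.default_rng(0)

def reblur(sharp, shift_px, n_steps=5):
    """Average of `sharp` translated along a horizontal motion of shift_px."""
    steps = np.linspace(0, shift_px, n_steps).round().astype(int)
    return np.mean([np.roll(sharp, s, axis=1) for s in steps], axis=0)

sharp_true = rng.random((32, 32))
blurry_obs = reblur(sharp_true, shift_px=4)   # "captured" blurry input

def loss(sharp_pred, shift_pred):
    # Self-supervised objective: no sharp ground truth is ever used.
    return float(np.abs(reblur(sharp_pred, shift_pred) - blurry_obs).mean())

# The correct (sharp image, motion) pair reconstructs the observation.
print(loss(sharp_true, 4) < loss(sharp_true, 0))
```

In the paper the warp comes from estimated optical flow and everything is differentiable so the loss trains the deblurring network end to end.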

avg

pdf Project Page Blog [BibTex]



Thermal Effects on the Crystallization Kinetics and Interfacial Adhesion of Single-Crystal Phase-Change Gallium

Yunusa, M., Lahlou, A., Sitti, M.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Although substrates play an important role upon crystallization of supercooled liquids, the influences of surface temperature and thermal property have remained elusive. Here, the crystallization of supercooled phase‐change gallium (Ga) on substrates with different thermal conductivity is studied. The effect of interfacial temperature on the crystallization kinetics, which dictates thermo‐mechanical stresses between the substrate and the crystallized Ga, is investigated. At an elevated surface temperature, close to the melting point of Ga, an extended single‐crystal growth of Ga on dielectric substrates due to layering effect and annealing is realized without the application of external fields. Adhesive strength at the interfaces depends on the thermal conductivity and initial surface temperature of the substrates. This insight can be applicable to other liquid metals for industrial applications, and sheds more light on phase‐change memory crystallization.

pi

[BibTex]


Nanoerythrosome-functionalized biohybrid microswimmers

Buss, N., Yasa, O., Alapan, Y., Akolpoglu, M. B., Sitti, M.

APL Bioengineering, 4, AIP Publishing LLC, 2020 (article)

pi

[BibTex]



Injectable Nanoelectrodes Enable Wireless Deep Brain Stimulation of Native Tissue in Freely Moving Mice

Kozielski, K. L., Jahanshahi, A., Gilbert, H. B., Yu, Y., Erin, O., Francisco, D., Alosaimi, F., Temel, Y., Sitti, M.

bioRxiv, Cold Spring Harbor Laboratory, 2020 (article)

pi

[BibTex]



Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Biopsy

Son, D., Gilbert, H., Sitti, M.

Soft Robotics, Mary Ann Liebert, 2020 (article)

pi

[BibTex]



Mechanical coupling of puller and pusher active microswimmers influences motility

Singh, A. V., Kishore, V., Santamauro, G., Yasa, O., Bill, J., Sitti, M.

Langmuir, ACS Publications, 2020 (article)

pi

[BibTex]


Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., van Gool, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focuses on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video 2 Project Page Slides Poster Video 1 [BibTex]



Excursion Search for Constrained Bayesian Optimization under a Limited Budget of Failures

Marco, A., Rohr, A. V., Baumann, D., Hernández-Lobato, J. M., Trimpe, S.

2020 (proceedings) In revision

Abstract
When learning to ride a bike, a child falls down a number of times before achieving the first success. As falling down usually has only mild consequences, it can be seen as a tolerable failure in exchange for a faster learning process, as it provides rich information about an undesired behavior. In the context of Bayesian optimization under unknown constraints (BOC), typical strategies for safe learning explore conservatively and avoid failures by all means. On the other side of the spectrum, non-conservative BOC algorithms that allow failing may fail an unbounded number of times before reaching the optimum. In this work, we propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures. Empirical validation shows that our algorithm uses the failures budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods. In addition, we propose an original algorithm for unconstrained Bayesian optimization inspired by the notion of excursion sets in stochastic processes, upon which the failures-aware algorithm is built.

ics am

arXiv code (python) PDF [BibTex]


Towards 5-DoF Control of an Untethered Magnetic Millirobot via MRI Gradient Coils

Onder Erin, D. A. M. E. T., Sitti, M.

In IEEE International Conference on Robotics and Automation (ICRA), 2020 (inproceedings)

pi

[BibTex]



Microribbons composed of directionally self-assembled nanoflakes as highly stretchable ionic neural electrodes

Zhang, M., Guo, R., Chen, K., Wang, Y., Niu, J., Guo, Y., Zhang, Y., Yin, Z., Xia, K., Zhou, B., Wang, H., He, W., Liu, J., Sitti, M., Zhang, Y.

Proceedings of the National Academy of Sciences, National Academy of Sciences, 2020 (article)

pi

link (url) DOI [BibTex]



Controlling two-dimensional collective formation and cooperative behavior of magnetic microrobot swarms

Dong, X., Sitti, M.

The International Journal of Robotics Research, 2020 (article)

Abstract
Magnetically actuated mobile microrobots can access distant, enclosed, and small spaces, such as inside microfluidic channels and the human body, making them appealing for minimally invasive tasks. Despite their simplicity when scaling down, creating collective microrobots that can work closely and cooperatively, as well as reconfigure their formations for different tasks, would significantly enhance their capabilities such as manipulation of objects. However, a challenge of realizing such cooperative magnetic microrobots is to program and reconfigure their formations and collective motions with under-actuated control signals. This article presents a method of controlling 2D static and time-varying formations among collective self-repelling ferromagnetic microrobots (100 μm to 350 μm in diameter, up to 260 in number) by spatially and temporally programming an external magnetic potential energy distribution at the air–water interface or on solid surfaces. A general design method is introduced to program external magnetic potential energy using ferromagnets. A predictive model of the collective system is also presented to predict the formation and guide the design procedure. With the proposed method, versatile complex static formations are experimentally demonstrated and the programmability and scaling effects of formations are analyzed. We also demonstrate the collective mobility of these magnetic microrobots by controlling them to exhibit bio-inspired collective behaviors such as aggregation, directional motion with arbitrary swarm headings, and rotational swarming motion. Finally, the functions of the produced microrobotic swarm are demonstrated by controlling them to navigate through cluttered environments and complete reconfigurable cooperative manipulation tasks.
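
As a toy analogue of the mechanism described above (not the paper's predictive model, and with invented parameters), the interplay between a programmed external potential and pairwise repulsion can be simulated with overdamped gradient descent: agents settle into a formation dictated by the potential's minima while repulsion keeps them from collapsing onto one another.

```python
import numpy as np

# Toy swarm model: self-repelling agents in a single harmonic "magnetic"
# potential well. Overdamped dynamics: positions follow the negative
# gradient of total energy (external well + pairwise log-repulsion).

rng = np.random.default_rng(1)
pos = rng.normal(scale=2.0, size=(12, 2))   # 12 agents in the plane

def step(pos, k_well=1.0, k_rep=0.5, dt=0.05):
    # External potential gradient: pulls every agent toward the origin.
    grad = k_well * pos
    # Pairwise repulsion with force ~ 1/r pushing agents apart.
    diff = pos[:, None, :] - pos[None, :, :]
    dist2 = (diff ** 2).sum(-1) + np.eye(len(pos))   # avoid divide-by-zero
    grad -= k_rep * (diff / dist2[..., None]).sum(axis=1)
    return pos - dt * grad

for _ in range(2000):
    pos = step(pos)

# The swarm forms a compact, non-overlapping pattern around the well center.
print(np.linalg.norm(pos.mean(axis=0)))
```

Reprogramming the external energy term (e.g., several wells instead of one) reshapes the resulting formation, which is the qualitative point of the abstract.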

pi

DOI [BibTex]


Magnetic Resonance Imaging System–Driven Medical Robotics

Erin, O., Boyvat, M., Tiryaki, M. E., Phelan, M., Sitti, M.

Advanced Intelligent Systems, 2, Wiley Online Library, 2020 (article)

Abstract
Magnetic resonance imaging (MRI) system–driven medical robotics is an emerging field that aims to use clinical MRI systems not only for medical imaging but also for actuation, localization, and control of medical robots. Submillimeter scale resolution of MR images for soft tissues combined with the electromagnetic gradient coil–based magnetic actuation available inside MR scanners can enable theranostic applications of medical robots for precise image‐guided minimally invasive interventions. MRI‐driven robotics typically does not introduce new MRI instrumentation for actuation but instead focuses on converting already available instrumentation for robotic purposes. To use the advantages of this technology, various medical devices such as untethered mobile magnetic robots and tethered active catheters have been designed to be powered magnetically inside MRI systems. Herein, the state‐of‐the‐art progress, challenges, and future directions of MRI‐driven medical robotic systems are reviewed.

pi

[BibTex]



Characterization and Thermal Management of a DC Motor-Driven Resonant Actuator for Miniature Mobile Robots with Oscillating Limbs

Colmenares, D., Kania, R., Liu, M., Sitti, M.

arXiv preprint arXiv:2002.00798, 2020 (article)

Abstract
In this paper, we characterize the performance of and develop thermal management solutions for a DC motor-driven resonant actuator developed for flapping wing micro air vehicles. The actuator, a DC micro-gearmotor connected in parallel with a torsional spring, drives reciprocal wing motion. Compared to the gearmotor alone, this design increased torque and power density by 161.1% and 666.8%, respectively, while decreasing the drawn current by 25.8%. Characterization of the actuator, isolated from nonlinear aerodynamic loading, results in standard metrics directly comparable to other actuators. The micro-motor, selected for low weight considerations, operates at high power for limited duration due to thermal effects. To predict system performance, a lumped parameter thermal circuit model was developed. Critical model parameters for this micro-motor, two orders of magnitude smaller than those previously characterized, were identified experimentally. This included the effects of variable winding resistance, bushing friction, speed-dependent forced convection, and the addition of a heatsink. The model was then used to determine a safe operation envelope for the vehicle and to design a weight-optimal heatsink. This actuator design and thermal modeling approach could be applied more generally to improve the performance of any miniature mobile robot or device with motor-driven oscillating limbs or loads.
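
In its simplest single-node form, a lumped-parameter thermal circuit reduces to a thermal RC model. The sketch below uses invented parameter values (not the ones identified in the paper) to show how such a model predicts winding temperature and a safe operating envelope:

```python
import numpy as np

# Single-node thermal RC model: C * dT/dt = P_loss - (T - T_amb) / R.
# Parameter values below are illustrative, not the paper's identified ones.

R = 40.0      # thermal resistance, K/W (assumed)
C = 0.5       # thermal capacitance, J/K (assumed)
T_amb = 25.0  # ambient temperature, deg C
P = 1.2       # dissipated electrical power, W (assumed constant)

def simulate(t_end, dt=0.01):
    """Forward-Euler integration of the winding temperature."""
    T, t = T_amb, 0.0
    while t < t_end:
        T += dt * (P - (T - T_amb) / R) / C
        t += dt
    return T

# Steady state approaches T_amb + P * R; the time constant is R * C = 20 s.
T_final = simulate(200.0)
print(T_final)
```

A safe-operation envelope then follows by solving for the run time at which T crosses the winding's temperature limit; the paper additionally models variable winding resistance, friction, forced convection, and a heatsink.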

pi

[BibTex]


Pros and Cons: Magnetic versus Optical Microrobots

Sitti, M., Wiersma, D. S.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Mobile microrobotics has emerged as a new robotics field within the last decade to create untethered tiny robots that can access and operate in unprecedented, dangerous, or hard‐to‐reach small spaces noninvasively toward disruptive medical, biotechnology, desktop manufacturing, environmental remediation, and other potential applications. Magnetic and optical actuation methods are the most widely used actuation methods in mobile microrobotics currently, in addition to acoustic and biological (cell‐driven) actuation approaches. The pros and cons of these actuation methods are reported here, depending on the given context. They can both enable long‐range, fast, and precise actuation of single or a large number of microrobots in diverse environments. Magnetic actuation has unique potential for medical applications of microrobots inside nontransparent tissues at high penetration depths, while optical actuation is suitable for more biotechnology, lab‐/organ‐on‐a‐chip, and desktop manufacturing types of applications with much less surface penetration depth requirements or with transparent environments. Combining both methods in new robot designs can have a strong potential of combining the pros of both methods. There is still much progress needed in both actuation methods to realize the potential disruptive applications of mobile microrobots in real‐world conditions.

pi

[BibTex]



Selectively Controlled Magnetic Microrobots with Opposing Helices

Giltinan, J., Katsamba, P., Wang, W., Lauga, E., Sitti, M.

Applied Physics Letters, 116, AIP Publishing LLC, 2020 (article)

pi

[BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent wrt. changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video 2 Project Page Video 1 Slides Poster [BibTex]



Microscale Polarization Color Pixels from Liquid Crystal Elastomers

Guo, Y., Shahsavan, H., Sitti, M.

Advanced Optical Materials, Wiley Online Library, 2020 (article)

pi

[BibTex]
