

2018


From Deterministic ODEs to Dynamic Structural Causal Models

Rubenstein, P. K., Bongers, S., Schölkopf, B., Mooij, J. M.

Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI), August 2018 (conference)

Arxiv link (url) [BibTex]


Generalized Score Functions for Causal Discovery

Huang, B., Zhang, K., Lin, Y., Schölkopf, B., Glymour, C.

Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages: 1551-1560, (Editors: Yike Guo and Faisal Farooq), ACM, August 2018 (conference)

link (url) DOI [BibTex]


Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios

Tallamraju, R., Rajappa, S., Black, M. J., Karlapalem, K., Ahmad, A.

2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages: 1-8, IEEE, August 2018 (conference)

Abstract
In this work, we consider the problem of decentralized multi-robot target tracking and obstacle avoidance in dynamic environments. Each robot executes a local motion planning algorithm based on model predictive control (MPC). The planner is designed as a quadratic program, subject to constraints on robot dynamics and obstacle avoidance. Repulsive potential field functions are employed to avoid obstacles. The novelty of our approach lies in embedding these non-linear potential field functions as constraints within a convex optimization framework. Our method convexifies non-convex constraints and dependencies by replacing them with pre-computed external input forces in the robot dynamics. The proposed algorithm additionally incorporates methods to avoid the local minima problems associated with using potential field functions in planning. The motion planner does not enforce predefined trajectories or any formation geometry on the robots and is a comprehensive solution for cooperative obstacle avoidance in the context of multi-robot target tracking. We perform simulation studies for different scenarios to showcase the convergence and efficacy of the proposed algorithm.
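The repulsive potential-field idea in the abstract above can be illustrated with a minimal sketch (hypothetical parameter names and gains, not the authors' implementation): the force is the negative gradient of U(d) = ½k(1/d − 1/d0)², active only within an influence distance d0, and can be pre-computed at each planning step and fed into the robot dynamics as an external input so that the per-step optimization stays convex.

```python
import numpy as np

def repulsive_force(p, obstacle, d0=2.0, k=1.0):
    """Force = -grad of U(d) = 0.5*k*(1/d - 1/d0)^2, active for d < d0.

    Illustrative sketch only: d0 (influence distance) and k (gain) are
    made-up names, not parameters from the paper.
    """
    diff = p - obstacle
    d = np.linalg.norm(diff)
    if d >= d0:
        return np.zeros_like(p)          # outside the influence region
    # -dU/dp = k*(1/d - 1/d0)*(1/d^2)*(diff/d), pointing away from obstacle
    return k * (1.0 / d - 1.0 / d0) * (1.0 / d ** 2) * (diff / d)

# Robot 1 m away from an obstacle at the origin: pushed along +x
f = repulsive_force(np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```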

Published Version link (url) DOI [BibTex]


A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming

Yurtsever, A., Fercoq, O., Locatello, F., Cevher, V.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 5713-5722, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page [BibTex]


Blind Justice: Fairness with Encrypted Sensitive Attributes

Kilbertus, N., Gascon, A., Kusner, M., Veale, M., Gummadi, K., Weller, A.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 2635-2644, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page [BibTex]


Detecting non-causal artifacts in multivariate linear regression models

Janzing, D., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 2250-2258, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) [BibTex]


Learning-based solution to phase error correction in T2*-weighted GRE scans

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

1st International Conference on Medical Imaging with Deep Learning (MIDL), July 2018 (conference)

link (url) [BibTex]


The Mirage of Action-Dependent Baselines in Reinforcement Learning

Tucker, G., Bhupatiraju, S., Gu, S., Turner, R., Ghahramani, Z., Levine, S.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 5022-5031, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

PDF link (url) Project Page [BibTex]


Intrinsic disentanglement: an invariance view for deep generative models

Besserve, M., Sun, R., Schölkopf, B.

Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML, July 2018 (conference)

PDF [BibTex]


Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M. P.

Politecnico di Milano, Milan, Italy, July 2018, Department of Electronic, Information, and Biomedical Engineering (mastersthesis)

Abstract
The broader research objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We attempted to overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.

Project Page [BibTex]


Assessing Generative Models via Precision and Recall

Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.

Workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM) at the 35th International Conference on Machine Learning (ICML), July 2018 (conference)

arXiv [BibTex]


Task-Driven PCA-Based Design Optimization of Wearable Cutaneous Devices

Pacchierotti, C., Young, E. M., Kuchenbecker, K. J.

IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)

Abstract
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.

link (url) DOI [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 4448-4456, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

arXiv link (url) Project Page [BibTex]


PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos

Parmas, P., Rasmussen, C., Peters, J., Doya, K.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 4065-4074, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page [BibTex]


Learning Independent Causal Mechanisms

Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 4033-4041, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page [BibTex]


Robust Physics-based Motion Retargeting with Realistic Body Shapes

Borno, M. A., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.

pdf video Project Page Project Page [BibTex]


Nonstationary GANs: Analysis as Nonautonomous Dynamical Systems

Mehrjou, A., Schölkopf, B.

Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML, July 2018 (conference)

PDF [BibTex]


Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.

DOI [BibTex]


Differentially Private Database Release via Kernel Mean Embeddings

Balog, M., Tolstikhin, I., Schölkopf, B.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 423-431, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page Project Page [BibTex]


On Matching Pursuit and Coordinate Descent

Locatello, F., Raj, A., Praneeth Karimireddy, S., Rätsch, G., Schölkopf, B., Stich, S. U., Jaggi, M.

Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 3204-3213, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)

link (url) Project Page [BibTex]


Iterative Model-Fitting and Local Controller Optimization - Towards a Better Understanding of Convergence Properties

Wüthrich, M., Schölkopf, B.

Workshop on Prediction and Generative Modeling in Reinforcement Learning at ICML, July 2018 (conference)

PDF link (url) [BibTex]


Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference

Muandet, K., Kanagawa, M., Saengkyongam, S., Marukata, S.

Workshop on Machine Learning for Causal Inference, Counterfactual Prediction, and Autonomous Action (CausalML) at ICML, July 2018 (conference)

[BibTex]


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
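One of the conversion methods mentioned above, principal component analysis, can be sketched in a few lines: project the recorded 3-axis accelerations onto their first principal component to obtain a single vibration signal that preserves most of the energy. This is an illustrative sketch with synthetic example data, not the demo's actual real-time pipeline.

```python
import numpy as np

def pca_3d_to_1d(accel):
    """Project Nx3 acceleration samples onto their first principal
    component, yielding a 1D vibration signal (illustrative sketch)."""
    centered = accel - accel.mean(axis=0)
    # First right-singular vector = direction of maximum variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

# Synthetic example: a 40 Hz vibration along one diagonal axis
t = np.linspace(0.0, 1.0, 500)
axis = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)   # unit direction
accel = np.outer(np.sin(2 * np.pi * 40 * t), axis)  # 500 x 3 samples
signal_1d = pca_3d_to_1d(accel)
print(signal_1d.shape)  # (500,)
```

For this rank-one example the projection captures all of the signal variance; for real tactile data the first component typically captures most, but not all, of it.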

Project Page [BibTex]


Infinite Factorial Finite State Machine for Blind Multiuser Channel Estimation

Ruiz, F. J. R., Valera, I., Svensson, L., Perez-Cruz, F.

IEEE Transactions on Cognitive Communications and Networking, 4(2):177-191, June 2018 (article)

DOI Project Page [BibTex]


Frame-Recurrent Video Super-Resolution

Sajjadi, M. S. M., Vemulapalli, R., Brown, M.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018 (conference)

ArXiv link (url) [BibTex]


Haptipedia: Exploring Haptic Device Design Through Interactive Visualizations

Seifi, H., Fazlollahi, F., Park, G., Kuchenbecker, K. J., MacLean, K. E.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
How many haptic devices have been proposed in the last 30 years? How can we leverage this rich source of design knowledge to inspire future innovations? Our goal is to make historical haptic invention accessible through interactive visualization of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. In this demonstration, participants can explore Haptipedia’s growing library of grounded force feedback devices through several prototype visualizations, interact with 3D simulations of the device mechanisms and movements, and tell us about the attributes and devices that could make Haptipedia a useful resource for the haptic design community.

Project Page [BibTex]


Learning Face Deblurring Fast and Wide

Jin, M., Hirsch, M., Favaro, P.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages: 745-753, June 2018 (conference)

link (url) [BibTex]


Delivering 6-DOF Fingertip Tactile Cues

Young, E., Kuchenbecker, K. J.

Work-in-progress paper (5 pages) presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Project Page [BibTex]


Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the Robotics: Science and Systems Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, Pittsburgh, USA, June 2018 (misc)

Abstract
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.

link (url) Project Page [BibTex]


Soft Multi-Axis Boundary-Electrode Tactile Sensors for Whole-Body Robotic Skin

Lee, H., Kim, J., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the RSS Pioneers Workshop, Pittsburgh, USA, June 2018 (misc)

Project Page [BibTex]


Wasserstein Auto-Encoders

Tolstikhin, I., Bousquet, O., Gelly, S., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Fidelity-Weighted Learning

Dehghani, M., Mehrjou, A., Gouws, S., Kamps, J., Schölkopf, B.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Inducing Probabilistic Context-Free Grammars for the Sequencing of Movement Primitives

Lioutikov, R., Maeda, G., Veiga, F., Kersting, K., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 1-8, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Sobolev GAN

Mroueh, Y., Li*, C., Sercu*, T., Raj*, A., Cheng, Y.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

link (url) Project Page [BibTex]


Assisting Movement Training and Execution With Visual and Haptic Feedback

Ewerton, M., Rother, D., Weimar, J., Kollegger, G., Wiemeyer, J., Peters, J., Maeda, G.

Frontiers in Neurorobotics, 12, May 2018 (article)

DOI Project Page [BibTex]


Temporal Difference Models: Model-Free Deep RL for Model-Based Control

Pong*, V., Gu*, S., Dalal, M., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018, *equal contribution (conference)

link (url) Project Page [BibTex]


Haptics and Haptic Interfaces

Kuchenbecker, K. J.

In Encyclopedia of Robotics, (Editors: Marcelo H. Ang and Oussama Khatib and Bruno Siciliano), Springer, May 2018 (incollection)

Abstract
Haptics is an interdisciplinary field that seeks to both understand and engineer touch-based interaction. Although a wide range of systems and applications are being investigated, haptics researchers often concentrate on perception and manipulation through the human hand. A haptic interface is a mechatronic system that modulates the physical interaction between a human and his or her tangible surroundings. Haptic interfaces typically involve mechanical, electrical, and computational layers that work together to sense user motions or forces, quickly process these inputs with other information, and physically respond by actuating elements of the user’s surroundings, thereby enabling him or her to act on and feel a remote and/or virtual environment.

link (url) DOI [BibTex]


Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning

Eysenbach, B., Gu, S., Ibarz, J., Levine, S.

6th International Conference on Learning Representations (ICLR), May 2018 (conference)

Videos link (url) Project Page [BibTex]


Tempered Adversarial Networks

Sajjadi, M. S. M., Parascandolo, G., Mehrjou, A., Schölkopf, B.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

arXiv [BibTex]


Learning Coupled Forward-Inverse Models with Combined Prediction Errors

Koert, D., Maeda, G., Neumann, G., Peters, J.

IEEE International Conference on Robotics and Automation (ICRA), pages: 2433-2439, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Learning Disentangled Representations with Wasserstein Auto-Encoders

Rubenstein, P. K., Schölkopf, B., Tolstikhin, I.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) Project Page [BibTex]


Automatic Estimation of Modulation Transfer Functions

Bauer, M., Volchkov, V., Hirsch, M., Schölkopf, B.

IEEE International Conference on Computational Photography (ICCP), May 2018 (conference)

DOI [BibTex]


Causal Discovery Using Proxy Variables

Rojas-Carulla, M., Baroni, M., Lopez-Paz, D.

Workshop at the 6th International Conference on Learning Representations (ICLR), May 2018 (conference)

link (url) [BibTex]


Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Pinsler, R., Akrour, R., Osa, T., Peters, J., Neumann, G.

IEEE International Conference on Robotics and Automation (ICRA), pages: 596-601, IEEE, May 2018 (conference)

DOI Project Page [BibTex]


Group invariance principles for causal generative models

Besserve, M., Shajarisales, N., Schölkopf, B., Janzing, D.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 557-565, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

link (url) [BibTex]


Boosting Variational Inference: an Optimization Perspective

Locatello, F., Khanna, R., Ghosh, J., Rätsch, G.

Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), 84, pages: 464-472, Proceedings of Machine Learning Research, (Editors: Amos Storkey and Fernando Perez-Cruz), PMLR, April 2018 (conference)

link (url) Project Page Project Page [BibTex]


Mixture of Attractors: A Novel Movement Primitive Representation for Learning Motor Skills From Demonstrations

Manschitz, S., Gienger, M., Kober, J., Peters, J.

IEEE Robotics and Automation Letters, 3(2):926-933, April 2018 (article)

DOI Project Page [BibTex]


Automatically Rating Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 32(4):1840-1857, April 2018 (article)

DOI [BibTex]


Model-based Optical Flow: Layers, Learning, and Geometry

Wulff, J.

Tuebingen University, April 2018 (phdthesis)

Abstract
The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One reason for these issues is that most optical flow algorithms only reason about the motion of pixels on the image plane, while not taking the image formation pipeline or the 3D structure of the world into account. One approach to address this uses layered models, which represent the occlusion structure of a scene and provide an approximation to the geometry. The goal of this dissertation is to show ways to inject additional knowledge about the scene into layered methods, making them more robust, faster, and more accurate. First, this thesis demonstrates the modeling power of layers using the example of motion blur in videos, which is caused by fast motion relative to the exposure time of the camera. Layers segment the scene into regions that move coherently while preserving their occlusion relationships. The motion of each layer therefore directly determines its motion blur. At the same time, the layered model captures complex blur overlap effects at motion discontinuities. Using layers, we can thus formulate a generative model for blurred video sequences, and use this model to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur. Next, we consider the representation of the motion within layers. Since, in a layered model, important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low-dimensional subspace. We show how this subspace can be learned from training data using principal component analysis (PCA), and that flow estimation using this subspace is computationally efficient. The combination of the layered model and the low-dimensional subspace gives the best of both worlds, sharp motion discontinuities from the layers and computational efficiency from the subspace. Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out moving objects, we consider only static regions, and explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene allows us to better combine information from multiple frames, resulting in high accuracies even in occluded regions. For moving regions, we compute the flow using a generic optical flow method, and combine it with the flow computed for the static regions to obtain a full optical flow field. By combining layered models of the scene with reasoning about the dynamic behavior of the real, three-dimensional world, the methods presented herein push the envelope of optical flow computation in terms of robustness, speed, and accuracy, giving state-of-the-art results on benchmarks and pointing to important future research directions for the estimation of motion in natural scenes.

Official link DOI Project Page [BibTex]