2018
Non-factorised Variational Inference in Dynamical Systems

Ialongo, A. D., van der Wilk, M., Hensman, J., Rasmussen, C. E.

1st Symposium on Advances in Approximate Bayesian Inference, December 2018 (conference)

ei

PDF link (url) [BibTex]

Enhancing the Accuracy and Fairness of Human Decision Making

Valera, I., Singla, A., Gomez Rodriguez, M.

Advances in Neural Information Processing Systems 31, pages: 1774-1783, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

arXiv link (url) Project Page [BibTex]

Consolidating the Meta-Learning Zoo: A Unifying Perspective as Posterior Predictive Inference

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Workshop on Meta-Learning (MetaLearn 2018) at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

link (url) [BibTex]

Versa: Versatile and Efficient Few-shot Learning

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

link (url) [BibTex]

DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning

Harder, F., Köhler, J., Welling, M., Park, M.

Workshop on Privacy Preserving Machine Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Boosting Black Box Variational Inference

Locatello*, F., Dresdner*, G., Khanna, R., Valera, I., Rätsch, G.

Advances in Neural Information Processing Systems 31, pages: 3405-3415, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

arXiv link (url) Project Page [BibTex]

Deep Nonlinear Non-Gaussian Filtering for Dynamical Systems

Mehrjou, A., Schölkopf, B.

Workshop: Infer to Control: Probabilistic Reinforcement Learning and Structured Control at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

PDF link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) [BibTex]

Learning Invariances using the Marginal Likelihood

van der Wilk, M., Bauer, M., John, S. T., Hensman, J.

Advances in Neural Information Processing Systems 31, pages: 9960-9970, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Data-Efficient Hierarchical Reinforcement Learning

Nachum, O., Gu, S., Lee, H., Levine, S.

Advances in Neural Information Processing Systems 31, pages: 3307-3317, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Generalisation in humans and deep neural networks

Geirhos, R., Temme, C. R. M., Rauber, J., Schütt, H., Bethge, M., Wichmann, F. A.

Advances in Neural Information Processing Systems 31, pages: 7549-7561, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) [BibTex]

Parallel and functionally segregated processing of task phase and conscious content in the prefrontal cortex

Kapoor, V., Besserve, M., Logothetis, N. K., Panagiotaropoulos, T. I.

Communications Biology, 1(215):1-12, December 2018 (article)

ei

link (url) DOI Project Page [BibTex]

A Computational Camera with Programmable Optics for Snapshot High Resolution Multispectral Imaging

Chen, J., Hirsch, M., Eberhardt, B., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, December 2018 (conference) Accepted

ei

[BibTex]

Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models

Neitz, A., Parascandolo, G., Bauer, S., Schölkopf, B.

Advances in Neural Information Processing Systems 31, pages: 9838-9848, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

arXiv link (url) [BibTex]

Assessing Generative Models via Precision and Recall

Sajjadi, M. S. M., Bachem, O., Lucic, M., Bousquet, O., Gelly, S.

Advances in Neural Information Processing Systems 31, pages: 5234-5243, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

arXiv link (url) [BibTex]

Efficient Encoding of Dynamical Systems through Local Approximations

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 6073-6079, Miami, FL, USA, December 2018 (inproceedings)

ei ics

arXiv PDF DOI Project Page [BibTex]

Flex-Convolution (Million-Scale Point-Cloud Learning Beyond Grid-Worlds)

Groh*, F., Wieschollek*, P., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, December 2018, *equal contribution (conference) Accepted

ei

[BibTex]

Bayesian Nonparametric Hawkes Processes

Kapoor, J., Vergari, A., Gomez Rodriguez, M., Valera, I.

Bayesian Nonparametrics workshop at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

PDF link (url) [BibTex]

Informative Features for Model Comparison

Jitkrittum, W., Kanagawa, H., Sangkloy, P., Hays, J., Schölkopf, B., Gretton, A.

Advances in Neural Information Processing Systems 31, pages: 816-827, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Hyperbolic Neural Networks

Ganea*, O., Becigneul*, G., Hofmann, T.

Advances in Neural Information Processing Systems 31, pages: 5350-5360, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

link (url) [BibTex]

Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration (4 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
For simple and realistic vibrotactile feedback, 3D accelerations from real contact interactions are usually rendered using a single-axis vibration actuator; this dimensional reduction can be performed in many ways. This demonstration implements a real-time conversion system that simultaneously measures 3D accelerations and renders corresponding 1D vibrations using a two-pen interface. In the demonstration, a user freely interacts with various objects using an In-Pen that contains a 3-axis accelerometer. The captured accelerations are converted to a single-axis signal, and an Out-Pen renders the reduced signal for the user to feel. We prepared seven conversion methods from the simple use of a single-axis signal to applying principal component analysis (PCA) so that users can compare the performance of each conversion method in this demonstration.
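One of the seven conversion methods mentioned above, PCA, reduces the 3-axis acceleration to the single direction of greatest variance. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name and the synthetic test signal are ours.

```python
import numpy as np

def pca_reduce_3d_to_1d(acc):
    """Project (N, 3) accelerometer samples onto their first principal
    component, yielding a single-axis vibration signal (a sketch of the
    PCA-based conversion, not the demonstration's actual code)."""
    centered = acc - acc.mean(axis=0)              # remove per-axis DC offset
    # principal axes via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]                        # scores along the first PC

# Example: a vibration that lives mostly along one axis
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
sig = np.sin(2 * np.pi * 50 * t)
acc = np.stack([3 * sig, 0.1 * sig, 0.05 * rng.normal(size=t.size)], axis=1)
mono = pca_reduce_3d_to_1d(acc)                    # shape (200,)
```

By construction the projection onto the first principal component preserves at least as much variance as any single accelerometer axis, which is why PCA is a natural baseline among the conversion methods.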

hi

Project Page [BibTex]

A Large-Scale Fabric-Based Tactile Sensor Using Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

Hands-on demonstration (3 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
Large-scale tactile sensing is important for household robots and human-robot interaction because contacts can occur all over a robot’s body surface. This paper presents a new fabric-based tactile sensor that is straightforward to manufacture and can cover a large area. The tactile sensor is made of conductive and non-conductive fabric layers, and the electrodes are stitched with conductive thread, so the resulting device is flexible and stretchable. The sensor utilizes internal array electrodes and a reconstruction method called electrical resistance tomography (ERT) to achieve a high spatial resolution with a small number of electrodes. The developed sensor shows that only 16 electrodes can accurately estimate single and multiple contacts over a square that measures 20 cm by 20 cm.

hi

Project Page [BibTex]

Motion-based Object Segmentation based on Dense RGB-D Scene Flow

Shao, L., Shah, P., Dwaracherla, V., Bohg, J.

IEEE Robotics and Automation Letters, 3(4):3797-3804, IEEE, IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2018 (conference)

Abstract
Given two consecutive RGB-D images, we propose a model that estimates a dense 3D motion field, also known as scene flow. We take advantage of the fact that in robot manipulation scenarios, scenes often consist of a set of rigidly moving objects. Our model jointly estimates (i) the segmentation of the scene into an unknown but finite number of objects, (ii) the motion trajectories of these objects and (iii) the object scene flow. We employ an hourglass, deep neural network architecture. In the encoding stage, the RGB and depth images undergo spatial compression and correlation. In the decoding stage, the model outputs three images containing a per-pixel estimate of the corresponding object center as well as object translation and rotation. This forms the basis for inferring the object segmentation and final object scene flow. To evaluate our model, we generated a new and challenging, large-scale, synthetic dataset that is specifically targeted at robotic manipulation: It contains a large number of scenes with a very diverse set of simultaneously moving 3D objects and is recorded with a commonly-used RGB-D camera. In quantitative experiments, we show that we significantly outperform state-of-the-art scene flow and motion-segmentation methods. In qualitative experiments, we show how our learned model transfers to challenging real-world scenes, visually generating significantly better results than existing methods.

am

Project Page arXiv DOI [BibTex]

On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.

avg ps

arXiv DOI [BibTex]

Regularizing Reinforcement Learning with State Abstraction

Akrour, R., Veiga, F., Peters, J., Neumann, G.

Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2018 (conference) Accepted

ei

link (url) Project Page [BibTex]

A Value-Driven Eldercare Robot: Virtual and Physical Instantiations of a Case-Supported Principle-Based Behavior Paradigm

Anderson, M., Anderson, S., Berenz, V.

Proceedings of the IEEE, pages: 1-15, October 2018 (article)

Abstract
In this paper, a case-supported principle-based behavior paradigm is proposed to help ensure ethical behavior of autonomous machines. We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake. We believe that this is the case since we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles that balance these features when they are in conflict. Such principles not only help ensure ethical behavior of complex and dynamic systems but also can serve as a basis for justification of this behavior. The requirements, methods, implementation, and evaluation components of the paradigm are detailed as well as its instantiation in both a simulated and real robot functioning in the domain of eldercare.

am

link (url) DOI [BibTex]


Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In International Conference on Intelligent Robots and Systems (IROS) 2018, International Conference on Intelligent Robots and Systems, October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system and robust operation in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.

avg

pdf Project Page [BibTex]

Learning to Categorize Bug Reports with LSTM Networks

Gondaliya, K., Peters, J., Rueckert, E.

Proceedings of the 10th International Conference on Advances in System Testing and Validation Lifecycle (VALID), pages: 7-12, October 2018 (conference)

ei

link (url) [BibTex]

Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment

Muratore, F., Treede, F., Gienger, M., Peters, J.

2nd Annual Conference on Robot Learning (CoRL), 87, pages: 700-713, Proceedings of Machine Learning Research, PMLR, October 2018 (conference)

ei

link (url) Project Page [BibTex]

Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets

Maeda, G., Koc, O., Morimoto, J.

Proceedings of The 2nd Conference on Robot Learning (CoRL), 87, pages: 630-640, (Editors: Aude Billard, Anca Dragan, Jan Peters, Jun Morimoto ), PMLR, October 2018 (conference)

ei

link (url) Project Page [BibTex]

Control of Musculoskeletal Systems using Learned Dynamics Models

Büchler, D., Calandra, R., Schölkopf, B., Peters, J.

IEEE Robotics and Automation Letters, Robotics and Automation Letters, 3(4):3161-3168, IEEE, 2018 (article)

Abstract
Controlling musculoskeletal systems, especially robots actuated by pneumatic artificial muscles, is a challenging task due to nonlinearities, hysteresis effects, massive actuator delay and unobservable dependencies such as temperature. Despite such difficulties, muscular systems offer many beneficial properties to achieve human-comparable performance in uncertain and fast-changing tasks. For example, muscles are backdrivable and provide variable stiffness while offering high forces to reach high accelerations. In addition, the embodied intelligence deriving from the compliance might reduce the control demands for specific tasks. In this paper, we address the problem of how to accurately control musculoskeletal robots. To address this issue, we propose to learn probabilistic forward dynamics models using Gaussian processes and, subsequently, to employ these models for control. However, Gaussian process dynamics models cannot be set up for our musculoskeletal robot as for traditional motor-driven robots because of unclear state composition etc. We hence empirically study and discuss in detail how to tune these approaches to complex musculoskeletal robots and their specific challenges. Moreover, we show that our model can be used to accurately control an antagonistic pair of pneumatic artificial muscles for a trajectory tracking task while considering only one-step-ahead predictions of the forward model and incorporating model uncertainty.

ei

RAL18final link (url) DOI Project Page [BibTex]

Constraint-Space Projection Direct Policy Search

Akrour, R., Peters, J., Neumann, G.

14th European Workshop on Reinforcement Learning (EWRL), October 2018 (conference)

ei

link (url) Project Page [BibTex]

Softness, Warmth, and Responsiveness Improve Robot Hugs

Block, A. E., Kuchenbecker, K. J.

International Journal of Social Robotics, 11(1):49-64, October 2018 (article)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.

hi

link (url) DOI Project Page [BibTex]

Multi-objective Optimization of Nonconventional Laminated Composite Panels

Serhat, G.

Koc University, October 2018 (phdthesis)

Abstract
Laminated composite panels are extensively used in various industries due to their high stiffness-to-weight ratio and directional properties that allow optimization of stiffness characteristics for specific applications. With the recent improvements in the manufacturing techniques, the technology trend has been shifting towards the development of nonconventional composites. This work aims to develop new methods for the design and optimization of nonconventional laminated composites. Lamination parameters method is used to characterize laminate stiffness matrices in a compact form. An optimization framework based on finite element analysis was developed to calculate the solutions for different panel geometries, boundary conditions and load cases. The first part of the work addresses the multi-objective optimization of composite laminates to maximize dynamic and load-carrying performances simultaneously. Conforming and conflicting behaviors of multiple objective functions are investigated by determining Pareto-optimal solutions, which provide a valuable insight for multi-objective optimization problems. In the second part, design of curved laminated panels for optimal dynamic response is studied in detail. Firstly, the designs yielding maximum fundamental frequency values are computed. Next, optimal designs minimizing equivalent radiated power are obtained for the panels under harmonic pressure excitation, and their effective frequency bands are shown. The relationship between these two design sets is investigated to study the effectiveness of the frequency maximization technique. In the last part, a new method based on lamination parameters is proposed for the design of variable-stiffness composite panels. The results demonstrate that the proposed method provides manufacturable designs with smooth fiber paths that outperform the constant-stiffness laminates, while utilizing the advantages of lamination parameters formulation.

hi

Multi-objective Optimization of Nonconventional Laminated Composite Panels DOI [BibTex]


Spatio-temporal Transformer Network for Video Restoration

Kim, T. H., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

15th European Conference on Computer Vision (ECCV), Part III, 11207, pages: 111-127, Lecture Notes in Computer Science, (Editors: Vittorio Ferrari, Martial Hebert,Cristian Sminchisescu and Yair Weiss), Springer, September 2018 (conference)

ei

DOI [BibTex]

Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

In Computer Vision – ECCV 2018, Springer International Publishing, Cham, September 2018 (inproceedings)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude less iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.

avg

pdf suppmat Project Page Video DOI Project Page [BibTex]

Separating Reflection and Transmission Images in the Wild

Wieschollek, P., Gallo, O., Gu, J., Kautz, J.

European Conference on Computer Vision (ECCV), September 2018 (conference)

ei

link (url) [BibTex]

Risk-Sensitivity in Simulation Based Online Planning

Schmid, K., Belzner, L., Kiermeier, M., Neitz, A., Phan, T., Gabor, T., Linnhoff, C.

KI 2018: Advances in Artificial Intelligence - 41st German Conference on AI, pages: 229-240, (Editors: F. Trollmann and A. Y. Turhan), Springer, Cham, September 2018 (conference)

ei

DOI [BibTex]

Playful: Reactive Programming for Orchestrating Robotic Behavior

Berenz, V., Schaal, S.

IEEE Robotics Automation Magazine, 25(3):49-60, September 2018 (article) In press

Abstract
For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior.

am

playful website playful_IEEE_RAM link (url) DOI [BibTex]


The Unreasonable Effectiveness of Texture Transfer for Single Image Super-resolution

Gondal, M. W., Schölkopf, B., Hirsch, M.

Workshop and Challenge on Perceptual Image Restoration and Manipulation (PIRM) at the 15th European Conference on Computer Vision (ECCV), September 2018 (conference)

ei

arXiv URL [BibTex]

ClusterNet: Instance Segmentation in RGB-D Images

Shao, L., Tian, Y., Bohg, J.

arXiv, September 2018, submitted to ICRA'19 (article)

Abstract
We propose a method for instance-level segmentation that uses RGB-D data as input and provides detailed information about the location, geometry and number of individual objects in the scene. This level of understanding is fundamental for autonomous robots. It enables safe and robust decision-making under the large uncertainty of the real-world. In our model, we propose to use the first and second order moments of the object occupancy function to represent an object instance. We train an hourglass Deep Neural Network (DNN) where each pixel in the output votes for the 3D position of the corresponding object center and for the object's size and pose. The final instance segmentation is achieved through clustering in the space of moments. The object-centric training loss is defined on the output of the clustering. Our method outperforms the state-of-the-art instance segmentation method on our synthesized dataset. We show that our method generalizes well on real-world data achieving visually better segmentation results.
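The first and second order moments of an occupancy function, as used above to represent an instance, can be sketched in 2D as follows. This is a minimal NumPy illustration of the moment encoding only, not the paper's 3D formulation or its voting network; the function name and toy mask are ours.

```python
import numpy as np

def occupancy_moments(mask):
    """First moment (centroid) and second central moment (covariance)
    of a binary occupancy mask -- a 2D sketch of the instance encoding."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)                    # first-order moment
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)         # captures size/orientation
    return centroid, cov

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:9] = True                              # a 4x6 rectangular "object"
centroid, cov = occupancy_moments(mask)
```

Because every pixel of an instance maps to the same centroid and covariance, clustering pixel-wise votes in this moment space groups pixels into instances, which is the intuition behind the paper's clustering step.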

am

link (url) [BibTex]

Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11220, pages: 713-731, Springer, Cham, September 2018 (inproceedings)

avg ps

pdf suppmat Video Project Page DOI Project Page [BibTex]

SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Coors, B., Condurache, A. P., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.

avg

pdf suppmat Project Page [BibTex]


Complexity, Rate, and Scale in Sliding Friction Dynamics Between a Finger and Textured Surface

Khojasteh, B., Janko, M., Visell, Y.

Nature Scientific Reports, 8(13710), September 2018 (article)

Abstract
Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had the greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of the skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
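To give a flavour of the recurrence analysis mentioned above: the simplest recurrence-plot statistic, the recurrence rate, is the fraction of sample pairs closer than a threshold. A minimal NumPy sketch follows; the 1D (unembedded) signal, threshold choice, and toy data are our assumptions, not the study's analysis pipeline.

```python
import numpy as np

def recurrence_rate(signal, eps):
    """Fraction of sample pairs (i, j) with |x_i - x_j| < eps --
    the simplest recurrence-plot statistic, computed on a 1-D signal."""
    d = np.abs(signal[:, None] - signal[None, :])  # pairwise distance matrix
    return float((d < eps).mean())

# A periodic force trace recurs often; white noise recurs much less
t = np.linspace(0.0, 10.0, 500)
periodic = np.sin(2 * np.pi * t)
noise = np.random.default_rng(1).normal(size=t.size)
rr_periodic = recurrence_rate(periodic, 0.1)
rr_noise = recurrence_rate(noise, 0.1)
```

Regularities in measured friction forces (e.g. those locked to the surface geometry) show up as elevated recurrence statistics relative to an irregular signal, which is the kind of comparison such methods enable.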

hi

DOI [BibTex]


Statistical Modelling of Fingertip Deformations and Contact Forces during Tactile Interaction

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Extended abstract presented at the Hand, Brain and Technology conference (HBT), Ascona, Switzerland, August 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger’s deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin’s compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user’s finger from lower-resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observed surface mesh deformations, as previously shown for body tissues.

hi

Project Page [BibTex]


no image
Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. 
Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. We then trained an algorithm to perform the same task as well as infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between expected and actual regression outputs of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces.

hi

Project Page [BibTex]


no image
From Deterministic ODEs to Dynamic Structural Causal Models

Rubenstein, P. K., Bongers, S., Schölkopf, B., Mooij, J. M.

Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI), August 2018 (conference)

ei

arXiv link (url) [BibTex]


no image
Design of curved composite panels for optimal dynamic response using lamination parameters

Serhat, G., Basdogan, I.

Composites Part B: Engineering, 147, pages: 135–146, August 2018 (article)

Abstract
In this paper, dynamic response of composite panels is investigated using lamination parameters as design variables. Finite element analyses are performed to observe the individual and combined effects of different panel aspect ratios, curvatures and boundary conditions on the dynamic responses. Fundamental frequency contours for curved panels are obtained in lamination parameters domain and optimal points yielding maximum values are found. Subsequently, forced dynamic analyses are carried out to calculate equivalent radiated power (ERP) for the panels under harmonic pressure excitation. ERP contours at the maximum fundamental frequency are presented. Optimal lamination parameters providing minimum ERP are determined for different excitation frequencies and their effective frequency bands are shown. The relationship between the designs optimized for maximum fundamental frequency and minimum ERP responses is investigated to study the effectiveness of the frequency maximization technique. The results demonstrate the potential of using lamination parameters technique in the design of curved composite panels for optimal dynamic response and provide valuable insight on the effect of various design parameters.

hi

DOI [BibTex]


no image
Generalized Score Functions for Causal Discovery

Huang, B., Zhang, K., Lin, Y., Schölkopf, B., Glymour, C.

Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), pages: 1551-1560, (Editors: Yike Guo and Faisal Farooq), ACM, August 2018 (conference)

ei

link (url) DOI [BibTex]


no image
A Robust Soft Lens for Tunable Camera Application Using Dielectric Elastomer Actuators

Nam, S., Yun, S., Yoon, J. W., Park, S., Park, S. K., Mun, S., Park, B., Kyung, K.

Soft Robotics, Mary Ann Liebert, Inc., August 2018 (article)

Abstract
In developing tunable lenses, an expansion-based mechanism for dynamic focus adjustment can provide a larger focal-length tuning range than a contraction-based mechanism. Here, we develop an expansion-tunable soft lens module using a disk-type dielectric elastomer actuator (DEA) that creates axially symmetric pulling forces on a soft lens. Adopted from the biological accommodation mechanism of the human eye, a soft lens at the annular center of a disk-type DEA pair is efficiently stretched to change the focal length in a highly reliable manner. A soft lens with a diameter of 3 mm shows a 65.7% change in focal length (14.3–23.7 mm) under a dynamic driving voltage signal control. We confirm a quadratic relation between lens expansion and focal length that leads to the large focal-length tunability obtainable in the proposed approach. The fabricated tunable lens module can be used for soft, lightweight, and compact vision components in robots, drones, vehicles, and so on.

hi

link (url) DOI [BibTex]