

2016


Qualitative User Reactions to a Hand-Clapping Humanoid Robot

Fitter, N. T., Kuchenbecker, K. J.

In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 317-327, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)


[BibTex]



Designing and Assessing Expressive Open-Source Faces for the Baxter Robot

Fitter, N. T., Kuchenbecker, K. J.

In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 340-350, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)


[BibTex]

Rhythmic Timing in Playful Human-Robot Social Motor Coordination

Fitter, N. T., Hawkes, D. T., Kuchenbecker, K. J.

In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 296-305, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)


[BibTex]

An electro-active polymer based lens module for dynamically varying focal system

Yun, S., Park, S., Nam, S., Park, B., Park, S. K., Mun, S., Lim, J. M., Kyung, K.

Applied Physics Letters, 109(14):141908, October 2016 (article)

Abstract
We demonstrate a polymer-based active-lens module allowing a dynamic focus controllable optical system with a wide tunable range. The active-lens module is composed of two parallelized active lenses with a convex and a concave hemispherical lens structure, respectively. Under operation with dynamic input voltage signals, each active lens produces bidirectional translational movement in response to a hybrid driving force that combines the electro-active response of a thin dielectric elastomer membrane with an electrostatic attraction force. Since the proposed active-lens module widely modulates the gap distance between lens elements, an optical system based on the active-lens module provides widely variable focusing for selective imaging of objects at arbitrary positions.


link (url) DOI [BibTex]

Using IMU Data to Demonstrate Hand-Clapping Games to a Robot

Fitter, N. T., Kuchenbecker, K. J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 851 - 856, October 2016, Interactive presentation given by Fitter (inproceedings)


[BibTex]

Wrinkle structures formed by formulating UV-crosslinkable liquid prepolymers

Park, S. K., Kwark, Y., Nam, S., Park, S., Park, B., Yun, S., Moon, J., Lee, J., Yu, B., Kyung, K.

Polymer, 99, pages: 447-452, September 2016 (article)

Abstract
Artificial wrinkles have recently been in the spotlight due to their potential use in high-tech applications. A spontaneously wrinkled film can be fabricated from UV-crosslinkable liquid prepolymers. Here, we controlled the wrinkle formation by simply formulating two UV-crosslinkable liquid prepolymers, tetraethylene glycol bis(4-ethenyl-2,3,5,6-tetrafluorophenyl) ether (TEGDSt) and tetraethylene glycol diacrylate (TEGDA). The wrinkles were formed from the TEGDSt/TEGDA formulated prepolymer layers containing up to 30 wt% of TEGDA. The wrinkle formation depended upon the rate of the photo-crosslinking reaction of the formulated prepolymers. The first-order apparent rate constant, k_app, was between ca. 5.7 × 10⁻³ and 12.2 × 10⁻³ s⁻¹ for the wrinkle formation. The wrinkle structures were modulated within this range of k_app, mainly due to variation in the extent of shrinkage of the formulated prepolymer layers with the content of TEGDA.


link (url) DOI [BibTex]

Numerical Investigation of Frictional Forces Between a Finger and a Textured Surface During Active Touch

Khojasteh, B., Janko, M., Visell, Y.

Extended abstract presented in form of an oral presentation at the 3rd International Conference on BioTribology (ICoBT), London, England, September 2016 (misc)

Abstract
The biomechanics of the human finger pad has been investigated in relation to motor behaviour and sensory function in the upper limb. While the frictional properties of the finger pad are important for grip and grasp function, recent attention has also been given to the roles played by friction when perceiving a surface via sliding contact. Indeed, the mechanics of sliding contact greatly affect stimuli felt by the finger scanning a surface. Past research has shed light on neural mechanisms of haptic texture perception, but the relation with time-resolved frictional contact interactions is unknown. Current biotribological models cannot predict time-resolved frictional forces felt by a finger as it slides on a rough surface. This constitutes a missing link in understanding the mechanical basis of texture perception. To ameliorate this, we developed a two-dimensional finite element numerical simulation of a human finger pad in sliding contact with a textured surface. Our model captures bulk mechanical properties, including hyperelasticity, dissipation, and tissue heterogeneity, and contact dynamics. To validate it, we utilized a database of measurements that we previously captured with a variety of human fingers and surfaces. By designing the simulations to match the measurements, we evaluated the ability of the FEM model to predict time-resolved sliding frictional forces. We varied surface texture wavelength, sliding speed, and normal forces in the experiments. An analysis of the results indicated that both time- and frequency-domain features of forces produced during finger-surface sliding interactions were reproduced, including many of the phenomena that we observed in analyses of real measurements: quasiperiodicity, harmonic distortion and spectral decay in the frequency domain, and their dependence on kinetics and surface properties. The results shed light on frictional signatures of surface texture during active touch, and may inform understanding of the role played by friction in texture discrimination.


[BibTex]

ProtonPack: A Visuo-Haptic Data Acquisition System for Robotic Learning of Surface Properties

Burka, A., Hu, S., Helgeson, S., Krishnan, S., Gao, Y., Hendricks, L. A., Darrell, T., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pages: 58-65, 2016, Oral presentation given by Burka (inproceedings)


Project Page [BibTex]

Equipping the Baxter Robot with Human-Inspired Hand-Clapping Skills

Fitter, N. T., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 105-112, 2016 (inproceedings)


[BibTex]

Behavioral Analysis Automation for Music-Based Robotic Therapy for Children with Autism Spectrum Disorder

Burns, R., Nizambad, S., Park, C. H., Jeon, M., Howard, A.

Workshop paper (5 pages) at the RO-MAN Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics, August 2016 (misc)

Abstract
In this full workshop paper, we discuss the positive impacts of robot, music, and imitation therapies on children with autism. We also discuss the use of Laban Motion Analysis (LMA) to identify emotion through movement and posture cues. We present our preliminary studies of the "Five Senses" game that our two robots, Romo the penguin and Darwin Mini, partake in. Using an LMA-focused approach (enabled by our skeletal tracking Kinect algorithm), we find that our participants show increased frequency of movement and speed when the game has a musical accompaniment. Therefore, participants may have increased engagement with our robots and game if music is present. We also begin exploring motion learning for future works.


link (url) [BibTex]

Reproducing a Laser Pointer Dot on a Secondary Projected Screen

Hu, S., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Advanced Intelligent Mechatronics (AIM), pages: 1645-1650, 2016, Oral presentation given by Hu (inproceedings)


[BibTex]

Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.


YouTube pdf poster suppmat Project Page [BibTex]

Design and evaluation of a novel mechanical device to improve hemiparetic gait: a case report

Fjeld, K., Hu, S., Kuchenbecker, K. J., Vasudevan, E. V.

Extended abstract presented at the Biomechanics and Neural Control of Movement Conference (BANCOM), 2016, Poster presentation given by Fjeld (misc)


Project Page [BibTex]

Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.


pdf suppmat Project Page Project Page [BibTex]

Deep Learning for Tactile Understanding From Visual and Haptic Data

Gao, Y., Hendricks, L. A., Kuchenbecker, K. J., Darrell, T.

In Proceedings of the IEEE International Conference on Robotics and Automation, pages: 536-543, May 2016, Oral presentation given by Gao (inproceedings)


[BibTex]

Robust Tactile Perception of Artificial Tumors Using Pairwise Comparisons of Sensor Array Readings

Hui, J. C. T., Block, A. E., Taylor, C. J., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium, pages: 305-312, Philadelphia, Pennsylvania, USA, April 2016, Oral presentation given by Hui (inproceedings)


[BibTex]

Data-Driven Comparison of Four Cutaneous Displays for Pinching Palpation in Robotic Surgery

Brown, J. D., Ibrahim, M., Chase, E. D. Z., Pacchierotti, C., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium, pages: 147-154, Philadelphia, Pennsylvania, USA, April 2016, Oral presentation given by Brown (inproceedings)


[BibTex]

Multisensory Robotic Therapy through Motion Capture and Imitation for Children with ASD

Burns, R., Nizambad, S., Park, C. H., Jeon, M., Howard, A.

Proceedings of the ASEE Spring 2016 Middle Atlantic Section Conference, April 2016 (conference)

Abstract
It is known that children with autism have difficulty with emotional communication. As the population of children with autism increases, it is crucial that we create effective therapeutic programs that will improve their communication skills. We present an interactive robotic system that delivers emotional and social behaviors for multisensory therapy for children with autism spectrum disorders. Our framework includes emotion-based robotic gestures and facial expressions, as well as tracking and understanding the child's responses through Kinect motion capture.


link (url) [BibTex]

Design and Implementation of a Visuo-Haptic Data Acquisition System for Robotic Learning of Surface Properties

Burka, A., Hu, S., Helgeson, S., Krishnan, S., Gao, Y., Hendricks, L. A., Darrell, T., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium, pages: 350-352, April 2016, Work-in-progress paper. Poster presentation given by Burka (inproceedings)


Project Page [BibTex]

Objective assessment of robotic surgical skill using instrument contact vibrations

Gomez, E. D., Aggarwal, R., McMahan, W., Bark, K., Kuchenbecker, K. J.

Surgical Endoscopy, 30(4):1419-1431, 2016 (article)


[BibTex]

One Sensor, Three Displays: A Comparison of Tactile Rendering from a BioTac Sensor

Brown, J. D., Ibrahim, M., Chase, E. D. Z., Pacchierotti, C., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Philadelphia, Pennsylvania, USA, April 2016 (misc)


[BibTex]

Multisensory robotic therapy to promote natural emotional interaction for children with ASD

Burns, R., Azzi, P., Spadafora, M., Park, C. H., Jeon, M., Kim, H. J., Lee, J., Raihan, K., Howard, A.

Proceedings of the Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI), pages: 571-571, March 2016 (conference)

Abstract
In this video submission, we are introduced to two robots, Romo the penguin and Darwin Mini. We have programmed these robots to perform a variety of emotions through facial expression and body language, respectively. We aim to use these robots with children with autism, to demo safe emotional and social responses in various sensory situations.


link (url) DOI [BibTex]

Interactive Robotic Framework for Multi-Sensory Therapy for Children with Autism Spectrum Disorder

Burns, R., Park, C. H., Kim, H. J., Lee, J., Rennie, A., Jeon, M., Howard, A.

In Proceedings of the Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI), pages: 421-422, March 2016 (inproceedings)

Abstract
In this abstract, we present the overarching goal of our interactive robotic framework - to teach emotional and social behavior to children with autism spectrum disorders via multi-sensory therapy. We introduce our robot characters, Romo and Darwin Mini, and the "Five Senses" scenario they will undergo. This sensory game will develop the children's interest, and will model safe and appropriate reactions to typical sensory overload stimuli.


link (url) DOI [BibTex]

Cutaneous Feedback of Fingertip Deformation and Vibration for Palpation in Robotic Surgery

Pacchierotti, C., Prattichizzo, D., Kuchenbecker, K. J.

IEEE Transactions on Biomedical Engineering, 63(2):278-287, February 2016 (article)


[BibTex]

Structure modulated electrostatic deformable mirror for focus and geometry control

Nam, S., Park, S., Yun, S., Park, B., Park, S. K., Kyung, K.

Optics Express, 24(1):55-66, OSA, January 2016 (article)

Abstract
We suggest a way to electrostatically control the deformed geometry of an electrostatic deformable mirror (EDM) based on geometric modulation of a basement. The EDM is composed of a metal-coated elastomeric membrane (active mirror) and a polymeric basement with an electrode (ground). When an electrical voltage is applied across the components, the active mirror deforms toward the stationary basement in response to the electrostatic attraction force in an air gap. Since the differentiated gap distance can induce change in the electrostatic force distribution between the active mirror and the basement, the EDMs are capable of controlling the deformed geometry of the active mirror with different basement structures (concave, flat, and protrusive). The modulation of the deformed geometry leads to a significant change in the range of the focal length of the EDMs. Even under dynamic operation, the EDM shows deformation that is fairly consistent and large enough to change the focal length over a wide frequency range (1–175 Hz). The geometric modulation of the active mirror with dynamic focus tunability can allow the EDM to serve as an active mirror lens for optical zoom devices as well as an optical component controlling field of view.


link (url) DOI [BibTex]

Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference) Accepted


pdf suppmat Project Page [BibTex]

Designing Human-Robot Exercise Games for Baxter

Fitter, N. T., Hawkes, D. T., Johnson, M. J., Kuchenbecker, K. J.

2016, Late-breaking results report presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (misc)


Project Page [BibTex]

Psychophysical Power Optimization of Friction Modulation for Tactile Interfaces

Sednaoui, T., Vezzoli, E., Gueorguiev, D., Amberg, M., Chappaz, C., Lemaire-Semail, B.

In Haptics: Perception, Devices, Control, and Applications, pages: 354-362, Springer International Publishing, Cham, 2016 (inproceedings)

Abstract
Ultrasonic vibration and electrovibration can modulate the friction between a surface and a sliding finger. The power consumption of these devices is critical to their integration in modern mobile devices such as smartphones. This paper presents a simple control solution that reduces this power consumption by up to 68.8% by taking advantage of the limits of human perception.


[BibTex]

Effect of Waveform in Haptic Perception of Electrovibration on Touchscreens

Vardar, Y., Güçlü, B., Basdogan, C.

In Haptics: Perception, Devices, Control, and Applications, pages: 190-203, Springer International Publishing, Cham, 2016 (inproceedings)

Abstract
The perceived intensity of electrovibration can be altered by modulating the amplitude, frequency, and waveform of the input voltage signal applied to the conductive layer of a touchscreen. Even though the effect of the first two has been already investigated for sinusoidal signals, we are not aware of any detailed study investigating the effect of the waveform on our haptic perception in the domain of electrovibration. This paper investigates how input voltage waveform affects our haptic perception of electrovibration on touchscreens. We conducted absolute detection experiments using square wave and sinusoidal input signals at seven fundamental frequencies (15, 30, 60, 120, 240, 480 and 1920 Hz). Experimental results depicted the well-known U-shaped tactile sensitivity across frequencies. However, the sensory thresholds were lower for the square wave than the sinusoidal wave at fundamental frequencies less than 60 Hz while they were similar at higher frequencies. Using an equivalent circuit model of a finger-touchscreen system, we show that the sensation difference between the waveforms at low fundamental frequencies can be explained by frequency-dependent electrical properties of human skin and the differential sensitivity of mechanoreceptor channels to individual frequency components in the electrostatic force. As a matter of fact, when the electrostatic force waveforms are analyzed in the frequency domain based on human vibrotactile sensitivity data from the literature [15], the electrovibration stimuli caused by square-wave input signals at all the tested frequencies in this study are found to be detected by the Pacinian psychophysical channel.


vardar_eurohaptics_2016 [BibTex]

Probabilistic Duality for Parallel Gibbs Sampling without Graph Coloring

Mescheder, L., Nowozin, S., Geiger, A.

arXiv, 2016 (article)

Abstract
We present a new notion of probabilistic duality for random variables involving mixture distributions. Using this notion, we show how to implement a highly-parallelizable Gibbs sampler for weakly coupled discrete pairwise graphical models with strictly positive factors that requires almost no preprocessing and is easy to implement. Moreover, we show how our method can be combined with blocking to improve mixing. Even though our method leads to inferior mixing times compared to a sequential Gibbs sampler, we argue that our method is still very useful for large dynamic networks, where factors are added and removed on a continuous basis, as it is hard to maintain a graph coloring in this setup. Similarly, our method is useful for parallelizing Gibbs sampling in graphical models that do not allow for graph colorings with a small number of colors such as densely connected graphs.


pdf [BibTex]


Peripheral vs. central determinants of vibrotactile adaptation

Klöcker, A., Gueorguiev, D., Thonnard, J. L., Mouraux, A.

Journal of Neurophysiology, 115(2):685-691, 2016, PMID: 26581868 (article)

Abstract
Long-lasting mechanical vibrations applied to the skin induce a reversible decrease in the perception of vibration at the stimulated skin site. This phenomenon of vibrotactile adaptation has been studied extensively, yet there is still no clear consensus on the mechanisms leading to vibrotactile adaptation. In particular, the respective contributions of 1) changes affecting mechanical skin impedance, 2) peripheral processes, and 3) central processes are largely unknown. Here we used direct electrical stimulation of nerve fibers to bypass mechanical transduction processes and thereby explore the possible contribution of central vs. peripheral processes to vibrotactile adaptation. Three experiments were conducted. In the first, adaptation was induced with mechanical vibration of the fingertip (51- or 251-Hz vibration delivered for 8 min, at 40× detection threshold). In the second, we attempted to induce adaptation with transcutaneous electrical stimulation of the median nerve (51- or 251-Hz constant-current pulses delivered for 8 min, at 1.5× detection threshold). Vibrotactile detection thresholds were measured before and after adaptation. Mechanical stimulation induced a clear increase of vibrotactile detection thresholds. In contrast, thresholds were unaffected by electrical stimulation. In the third experiment, we assessed the effect of mechanical adaptation on the detection thresholds to transcutaneous electrical nerve stimuli, measured before and after adaptation. Electrical detection thresholds were unaffected by the mechanical adaptation. Taken together, our results suggest that vibrotactile adaptation is predominantly the consequence of peripheral mechanoreceptor processes and/or changes in biomechanical properties of the skin.


link (url) DOI [BibTex]

Silent Expectations: Dynamic Causal Modeling of Cortical Prediction and Attention to Sounds That Weren’t

Chennu, S., Noreika, V., Gueorguiev, D., Shtyrov, Y., Bekinschtein, T. A., Henson, R.

Journal of Neuroscience, 36(32):8305-8316, Society for Neuroscience, 2016 (article)

Abstract
There is increasing evidence that human perception is realized by a hierarchy of neural processes in which predictions sent backward from higher levels result in prediction errors that are fed forward from lower levels, to update the current model of the environment. Moreover, the precision of prediction errors is thought to be modulated by attention. Much of this evidence comes from paradigms in which a stimulus differs from that predicted by the recent history of other stimuli (generating a so-called "mismatch response"). There is less evidence from situations where a prediction is not fulfilled by any sensory input (an "omission" response). This situation arguably provides a more direct measure of "top-down" predictions in the absence of confounding "bottom-up" input. We applied Dynamic Causal Modeling of evoked electromagnetic responses recorded by EEG and MEG to an auditory paradigm in which we factorially crossed the presence versus absence of "bottom-up" stimuli with the presence versus absence of "top-down" attention. Model comparison revealed that both mismatch and omission responses were mediated by increased forward and backward connections, differing primarily in the driving input. In both responses, modeling results suggested that the presence of attention selectively modulated backward "prediction" connections. Our results provide new model-driven evidence of the pure top-down prediction signal posited in theories of hierarchical perception, and highlight the role of attentional precision in strengthening this prediction.

SIGNIFICANCE STATEMENT: Human auditory perception is thought to be realized by a network of neurons that maintain a model of and predict future stimuli. Much of the evidence for this comes from experiments where a stimulus unexpectedly differs from previous ones, which generates a well-known "mismatch response." But what happens when a stimulus is unexpectedly omitted altogether? By measuring the brain's electromagnetic activity, we show that it also generates an "omission response" that is contingent on the presence of attention. We model these responses computationally, revealing that mismatch and omission responses only differ in the location of inputs into the same underlying neuronal network. In both cases, we show that attention selectively strengthens the brain's prediction of the future.


link (url) DOI [BibTex]

Touch uses frictional cues to discriminate flat materials

Gueorguiev, D., Bochereau, S., Mouraux, A., Hayward, V., Thonnard, J.

Scientific reports, 6, pages: 25553, Nature Publishing Group, 2016 (article)


[BibTex]

Map-Based Probabilistic Visual Self-Localization

Brubaker, M. A., Geiger, A., Urtasun, R.

IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), 2016 (article)

Abstract
Accurate and efficient self-localization is a critical problem for autonomous systems. This paper describes an affordable solution to vehicle self-localization which uses odometry computed from two video cameras and road maps as the sole inputs. The core of the method is a probabilistic model for which an efficient approximate inference algorithm is derived. The inference algorithm is able to utilize distributed computation in order to meet the real-time requirements of autonomous systems in some instances. Because of the probabilistic nature of the model the method is capable of coping with various sources of uncertainty including noise in the visual odometry and inherent ambiguities in the map (e.g., in a Manhattan world). By exploiting freely available, community developed maps and visual odometry measurements, the proposed method is able to localize a vehicle to 4m on average after 52 seconds of driving on maps which contain more than 2,150km of drivable roads.


pdf Project Page [BibTex]

IMU-Mediated Real-Time Human-Baxter Hand-Clapping Interaction

Fitter, N. T., Huang, Y. E., Mayer, J. P., Kuchenbecker, K. J.

2016, Late-breaking results report presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (misc)


[BibTex]


2015


Exploiting Object Similarity in 3D Reconstruction

Zhou, C., Güney, F., Wang, Y., Geiger, A.

In International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
Despite recent progress, reconstructing outdoor scenes in 3D from movable platforms remains a highly difficult endeavor. Challenges include low frame rates, occlusions, large distortions and difficult lighting conditions. In this paper, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes where buildings and vehicles often suffer from missing texture or reflections, but share similarity in 3D shape. We take advantage of this shape similarity by locating objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces as objects of similar shape benefit from all observations for the respective category. We evaluate our approach with respect to LIDAR ground truth on a novel challenging suburban dataset and show its advantages over the state-of-the-art.


pdf suppmat [BibTex]



FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation

Lenz, P., Geiger, A., Urtasun, R.

In International Conference on Computer Vision (ICCV), International Conference on Computer Vision (ICCV), December 2015 (inproceedings)

Abstract
One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.

pdf suppmat video project [BibTex]


Reducing Student Anonymity and Increasing Engagement

Kuchenbecker, K. J.

University of Pennsylvania Almanac, 62(18):8, November 2015 (article)

[BibTex]


Towards Probabilistic Volumetric Reconstruction using Ray Potentials

(Best Paper Award)

Ulusoy, A. O., Geiger, A., Black, M. J.

In 3D Vision (3DV), 2015 3rd International Conference on, pages: 10-18, Lyon, October 2015 (inproceedings)

Abstract
This paper presents a novel probabilistic foundation for volumetric 3-d reconstruction. We formulate the problem as inference in a Markov random field, which accurately captures the dependencies between the occupancy and appearance of each voxel, given all input images. Our main contribution is an approximate highly parallelized discrete-continuous inference algorithm to compute the marginal distributions of each voxel's occupancy and appearance. In contrast to the MAP solution, marginals encode the underlying uncertainty and ambiguity in the reconstruction. Moreover, the proposed algorithm allows for a Bayes optimal prediction with respect to a natural reconstruction loss. We compare our method to two state-of-the-art volumetric reconstruction algorithms on three challenging aerial datasets with LIDAR ground truth. Our experiments demonstrate that the proposed algorithm compares favorably in terms of reconstruction accuracy and the ability to expose reconstruction uncertainty.
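The per-ray visibility reasoning at the heart of ray-potential models can be sketched as follows (a minimal illustration under independence assumptions, not the paper's inference algorithm): for occupancy probabilities o_1, …, o_n along a ray ordered front to back, the probability that voxel i is the first occupied voxel, and thus the surface the ray observes, is o_i · Π_{j<i}(1 − o_j). The occupancy values below are hypothetical:

```python
# Minimal sketch of the visibility reasoning behind ray potentials:
# given independent occupancy probabilities along a ray (front to back),
# the probability that voxel i is the *first* occupied voxel -- i.e.,
# the surface the ray observes -- is o_i * prod_{j<i} (1 - o_j).

def first_hit_distribution(occ):
    """Return P(voxel i is first occupied) for each i, plus P(ray escapes)."""
    probs = []
    transmittance = 1.0           # probability all previous voxels are empty
    for o in occ:
        probs.append(transmittance * o)
        transmittance *= (1.0 - o)
    return probs, transmittance   # transmittance = escape probability

occ = [0.1, 0.5, 0.8, 0.2]        # hypothetical occupancies, front to back
probs, escape = first_hit_distribution(occ)
print([round(p, 3) for p in probs], round(escape, 3))
# the per-voxel probabilities plus the escape probability sum to 1
```

This distribution over "which voxel the ray hits first" is what couples voxel occupancy and appearance in the Markov random field; the paper's contribution is the approximate discrete-continuous algorithm for computing such marginals at scale.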

code YouTube pdf suppmat DOI Project Page [BibTex]


Surgeons and Non-Surgeons Prefer Haptic Feedback of Instrument Vibrations During Robotic Surgery

Koehn, J. K., Kuchenbecker, K. J.

Surgical Endoscopy, 29(10):2970-2983, October 2015 (article)

[BibTex]


Displaying Sensed Tactile Cues with a Fingertip Haptic Device

Pacchierotti, C., Prattichizzo, D., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 8(4):384-396, October 2015 (article)

[BibTex]


A thin film active-lens with translational control for dynamically programmable optical zoom

Yun, S., Park, S., Park, B., Nam, S., Park, S. K., Kyung, K.

Applied Physics Letters, 107(8):081907, AIP Publishing, August 2015 (article)

Abstract
We demonstrate a thin-film active lens for rapid, dynamically controllable optical zoom. The active lens combines a convex hemispherical polydimethylsiloxane (PDMS) lens structure, which serves as the aperture, with a dielectric elastomer (DE) membrane actuator consisting of a thin PDMS DE layer and a compliant silver-nanowire electrode pattern. The active lens can dynamically shift the focal point of the soft aperture by as much as 18.4% through vertical translational movement in response to the electrically induced bulging deformation of the DE membrane actuator. Under operation with various sinusoidal voltage signals, the measured movement responses agree well with those estimated from numerical simulation. The responses are fast, largely reversible, and highly durable during continuous cyclic operation, and they are large enough to provide dynamic focus tunability for optical zoom in lightweight, ultra-slim microscopic imaging devices.

link (url) DOI [BibTex]


Displets: Resolving Stereo Ambiguities using Object Knowledge

Güney, F., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2015, pages: 4165-4175, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2015 (inproceedings)

Abstract
Stereo techniques have witnessed tremendous progress over the last decades, yet some aspects of the problem still remain challenging today. Striking examples are reflecting and textureless surfaces which cannot easily be recovered using traditional local regularizers. In this paper, we therefore propose to regularize over larger distances using object-category specific disparity proposals (displets) which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The proposed displets encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as non-local regularizer for the challenging object class 'car' into a superpixel based CRF framework and demonstrate its benefits on the KITTI stereo evaluation.

pdf abstract suppmat [BibTex]


Toward a large-scale visuo-haptic dataset for robotic learning

Burka, A., Hu, S., Krishnan, S., Kuchenbecker, K. J., Hendricks, L. A., Gao, Y., Darrell, T.

In Proc. CVPR Workshop on the Future of Datasets in Vision, 2015 (inproceedings)

Project Page [BibTex]


Object Scene Flow for Autonomous Vehicles

Menze, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2015, pages: 3061-3070, IEEE, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2015 (inproceedings)

Abstract
This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.

pdf abstract suppmat DOI [BibTex]


Detecting Lumps in Simulated Tissue via Palpation with a BioTac

Hui, J., Block, A., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, 2015, Work-in-progress paper. Poster presentation given by Hui (inproceedings)

[BibTex]


Analysis of the Instrument Vibrations and Contact Forces Caused by an Expert Robotic Surgeon Doing FRS Tasks

Brown, J. D., O’Brien, C., Miyasaka, K., Dumon, K. R., Kuchenbecker, K. J.

In Proc. Hamlyn Symposium on Medical Robotics, pages: 75-76, London, England, June 2015, Poster presentation given by Brown (inproceedings)

[BibTex]


Should Haptic Texture Vibrations Respond to User Force and Speed?

Culbertson, H., Kuchenbecker, K. J.

In IEEE World Haptics Conference, pages: 106 - 112, Evanston, Illinois, USA, June 2015, Oral presentation given by Culbertson (inproceedings)

[BibTex]


Enabling the Baxter Robot to Play Hand-Clapping Games

Fitter, N. T., Neuburger, M., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, June 2015, Work-in-progress paper. Poster presentation given by Fitter (inproceedings)

[BibTex]


Data-Driven Motion Mappings Improve Transparency in Teleoperation

Khurshid, R. P., Kuchenbecker, K. J.

Presence: Teleoperators and Virtual Environments, 24(2):132-154, May 2015 (article)

[BibTex]