

2024


Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing in the Elephant Trunk

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Skin is a complex biological composite consisting of layers with distinct mechanical properties, morphologies, and mechanosensory capabilities. This work seeks to expand the comparative biomechanics field to comparative haptics, analyzing elephant trunk touch by redesigning a previously published human finger-pad model with morphological parameters measured from an elephant trunk. The dorsal surface of the elephant trunk has a thick, wrinkled epidermis covered with whiskers at the distal tip and deep folds at the proximal base. We hypothesize that this thick dorsal skin protects the trunk from mechanical damage but significantly dulls its tactile sensing ability. To facilitate safe and dexterous motion, the distributed dorsal whiskers might serve as pre-touch antennae, transmitting an amplified version of impending contact to the mechanoreceptors beneath the elephant's armor. We tested these hypotheses by simulating soft tissue deformation through high-fidelity finite element analyses involving representative skin layers and whiskers, modeled based on frozen African elephant (Loxodonta africana) trunk morphology. For a typical contact force, quintupling the stratum corneum thickness to match dorsal trunk skin reduces the von Mises stress communicated to the dermis by 18%. However, adding a whisker offsets this dulled sensing, as hypothesized, amplifying the stress by a factor of more than 15 at the same location. We hope this work will motivate further investigations of mammalian touch using approaches and models from the ample literature on human touch.


[BibTex]

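For reference, the von Mises stress reported in this abstract is the standard scalar summary of a full 3D stress state; in terms of the principal stresses it is

\sigma_v = \sqrt{ \tfrac{1}{2} \left[ (\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2 \right] }

so the 18% reduction and the greater-than-15-fold amplification both describe changes in this scalar at the dermis, not in any single stress component.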


MPI-10: Haptic-Auditory Measurements from Tool-Surface Interactions

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise" in IEEE Transactions on Haptics, January 2024 (misc)


DOI Project Page [BibTex]



Whiskers That Don’t Whisk: Unique Structure From the Absence of Actuation in Elephant Whiskers

Schulz, A., Kaufmann, L., Brecht, M., Richter, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Whiskers are so named because these hairs often actuate circularly, whisking, via collagen wrapping at the root of the hair follicle to increase their sensing volumes. Elephant trunks are a unique case study for whiskers, as the dorsal and lateral sections of the elephant proboscis have scattered sensory hairs that lack individual actuation. We hypothesize that the actuation limitations of these non-whisking whiskers led to anisotropic morphology and non-homogeneous composition to meet the animal's sensory needs. To test these hypotheses, we examined trunk whiskers from a 35-year-old female African savannah elephant (Loxodonta africana). Whisker morphology was evaluated through micro-CT and polarized light microscopy. The whiskers from the distal tip of the trunk were found to be axially asymmetric, with an ovular cross-section at the root, shifting to a near-square cross-section at the point. Nanoindentation and additional microscopy revealed that elephant whiskers have a composition unlike any other mammalian hair ever studied: we recorded an elastic modulus of 3 GPa at the root and 0.05 GPa at the point of a single 4-cm-long whisker. This work challenges the assumption that hairs have circular cross-sections and isotropic mechanical properties. With such striking differences compared to other mammals, including the mouse (Mus musculus), rat (Rattus norvegicus), and cat (Felis catus), we conclude that whisker morphology and composition play distinct and complementary roles in elephant trunk mechanosensing.


[BibTex]



Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
2024 (misc)

Abstract
Humans excel at robust bipedal walking in complex natural environments. In each step, they adequately tune the interaction of biomechanical muscle dynamics and neuronal signals to be robust against uncertainties in ground conditions. However, it is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem considering stability, robustness, and energy efficiency. In computer simulations, energy minimization has been shown to be a successful optimization target, reproducing natural walking with trajectory optimization or reflex-based control methods. However, these methods focus on particular motions at a time and the resulting controllers are limited when compensating for perturbations. In robotics, reinforcement learning (RL) methods recently achieved highly stable (and efficient) locomotion on quadruped systems, but the generation of human-like walking with bipedal biomechanical models has required extensive use of expert data sets. This strong reliance on demonstrations often results in brittle policies and limits the application to new behaviors, especially considering the potential variety of movements for high-dimensional musculoskeletal models in 3D. Achieving natural locomotion with RL without sacrificing its incredible robustness might pave the way for a novel approach to studying human walking in complex natural environments. Videos: https://sites.google.com/view/naturalwalkingrl


link (url) [BibTex]


Discrete Fourier Transform Three-to-One (DFT321): Code

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

MATLAB code of discrete Fourier transform three-to-one (DFT321), 2024 (misc)


Code Project Page [BibTex]

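The released code is in MATLAB; as a rough orientation only, here is a minimal NumPy sketch of the published DFT321 idea (one output signal whose spectral magnitude preserves the total energy of the three input axes, with phase taken from their sum). Names and details are ours; the MATLAB release is the reference implementation.

import numpy as np

def dft321(ax, ay, az):
    # Stack the three acceleration axes and move to the frequency domain.
    X = np.fft.rfft(np.column_stack([ax, ay, az]), axis=0)
    # Energy-preserving magnitude: root of the summed power spectra.
    magnitude = np.sqrt(np.sum(np.abs(X) ** 2, axis=1))
    # Phase taken from the spectrum of the channel sum.
    phase = np.angle(np.sum(X, axis=1))
    # Back to the time domain as a single real vibration signal.
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=len(ax))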

2023


Seeking Causal, Invariant Structures with Kernel Mean Embeddings in Haptic-Auditory Data from Tool-Surface Interaction

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the IROS Workshop on Causality for Robotics: Answering the Question of Why, Detroit, USA, October 2023 (misc)

Abstract
Causal inference could give future learning robots strong generalization and scalability capabilities, which are crucial for safety, fault diagnosis, and error prevention. One application area of interest is the haptic recognition of surfaces. We seek to understand cause and effect during physical surface interaction by examining surface and tool identity, their interplay, and other contact-irrelevant factors. To work toward elucidating the mechanism of surface encoding, we attempt to recognize surfaces from haptic-auditory data captured by previously unseen hemispherical steel tools that differ from the recording tool in diameter and mass. In this context, we leverage ideas from kernel methods to quantify surface similarity through descriptive differences in signal distributions. We find that the effect of the tool is significantly present in higher-order statistical moments of contact data: aligning the means of the distributions being compared somewhat improves recognition but does not fully separate tool identity from surface identity. Our findings shed light on salient aspects of haptic-auditory data from tool-surface interaction and highlight the challenges involved in generalizing artificial surface discrimination capabilities.


Manuscript Project Page [BibTex]



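For readers unfamiliar with the kernel machinery in this abstract: the empirical maximum mean discrepancy (MMD) measures the distance between the kernel mean embeddings of two samples. Below is a minimal sketch with a Gaussian kernel, assuming plain NumPy arrays of contact-feature vectors (function names and the default bandwidth are ours, not the paper's).

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased empirical MMD^2 between samples X (n, d) and Y (m, d).
    return (gaussian_kernel(X, X, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean())

The mean alignment the authors describe corresponds to centering each sample (subtracting its column mean) before computing MMD; that removes first-moment differences, and the abstract's finding is that tool identity still shows up in the higher-order moments the kernel captures.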


Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), pages: 716-720, Extended abstract (5 pages) presented at the ACM International Conference on Multimodal Interaction (ICMI) Doctoral Consortium, Paris, France, October 2023 (misc)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.


DOI [BibTex]



NearContact: Accurate Human Detection using Tomographic Proximity and Contact Sensing with Cross-Modal Attention

Garrofé, G., Schoeffmann, C., Zangl, H., Kuchenbecker, K. J., Lee, H.

Extended abstract (4 pages) presented at the International Workshop on Human-Friendly Robotics (HFR), Munich, Germany, September 2023 (misc)


Project Page [BibTex]



The Role of Kinematics Estimation Accuracy in Learning with Wearable Haptics

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

Abstract presented at the American Society of Biomechanics (ASB), Knoxville, USA, August 2023 (misc)


Project Page [BibTex]



Strap Tightness and Tissue Composition Both Affect the Vibration Created by a Wearable Device

Rokhmanova, N., Faulkner, R., Martus, J., Fiene, J., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
Wearable haptic devices can provide salient real-time feedback (typically vibration) for rehabilitation, sports training, and skill acquisition. Although the body provides many sites for such cues, the influence of the mounting location on vibrotactile mechanics is commonly ignored. This study builds on previous research by quantifying how changes in strap tightness and local tissue composition affect the physical acceleration generated by a typical vibrotactile device.


Project Page [BibTex]



Toward a Device for Reliable Evaluation of Vibrotactile Perception

Ballardini, G., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)


[BibTex]



Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test: Code

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

Code published as a companion to the journal article "Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test" in IEEE Transactions on Automation Science and Engineering, July 2023 (misc)


DOI Project Page [BibTex]



Improving Haptic Rendering Quality by Measuring and Compensating for Undesired Forces

Fazlollahi, F., Taghizadeh, Z., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



Capturing Rich Auditory-Haptic Contact Data for Surface Recognition

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
The sophistication of biological sensing and transduction processes during finger-surface and tool-surface interaction is remarkable, enabling humans to perform ubiquitous tasks such as discriminating and manipulating surfaces. Capturing and processing these rich contact-elicited signals during surface exploration with similar success is an important challenge for artificial systems. Prior research introduced sophisticated mobile surface-sensing systems, but it remains less clear what quality, resolution, and acuity of sensor data are necessary to perform human tasks with the same efficiency and accuracy. To address this gap in our understanding of artificial surface perception, we have designed a novel auditory-haptic test bed. This study aims to inspire new designs for artificial sensing tools in human-machine and robotic applications.


Project Page [BibTex]



AiroTouch: Naturalistic Vibrotactile Feedback for Telerobotic Construction

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



CAPT Motor: A Strong Direct-Drive Haptic Interface

Javot, B., Nguyen, V. H., Ballardini, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



Can Recording Expert Demonstrations with Tool Vibrations Facilitate Teaching of Manual Skills?

Gourishetti, R., Javot, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal for Children with Autism

Burns, R. B.

Workshop paper (4 pages) presented at the RSS Pioneers Workshop, Daegu, South Korea, July 2023 (misc)


link (url) Project Page [BibTex]



The Influence of Amplitude and Sharpness on the Perceived Intensity of Isoenergetic Ultrasonic Signals

Gueorguiev, D., Rohou–Claquin, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



Vibrotactile Playback for Teaching Manual Skills from Expert Recordings

Gourishetti, R., Hughes, A. G., Javot, B., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference, Delft, The Netherlands, July 2023 (misc)


Project Page [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Poster presented at the ICRA Workshop on Future of Construction: Robot Perception, Mapping, Navigation, Control in Unstructured and Cluttered Environments, London, UK, May 2023 (misc)


Project Page [BibTex]



AiroTouch: Naturalistic Vibrotactile Feedback for Telerobotic Construction-Related Tasks

Gong, Y., Tashiro, N., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the ICRA Workshop on Communicating Robot Learning across Human-Robot Interaction, London, UK, May 2023 (misc)


Project Page [BibTex]



3D Reconstruction for Minimally Invasive Surgery: Lidar Versus Learning-Based Stereo Matching

Caccianiga, G., Nubert, J., Hutter, M., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the ICRA Workshop on Robot-Assisted Medical Imaging, London, UK, May 2023 (misc)

Abstract
This work investigates real-time 3D surface reconstruction for minimally invasive surgery. Specifically, we analyze depth sensing through laser-based time-of-flight sensing (lidar) and stereo endoscopy on ex-vivo porcine tissue samples. When compared to modern learning-based stereo matching from endoscopic images, lidar achieves lower processing delay, higher frame rate, and superior robustness against sensor distance and poor illumination. Furthermore, we report on the negative effect of near-infrared light penetration on the accuracy of time-of-flight measurements across different tissue types.


Project Page [BibTex]



Surface Perception through Haptic-Auditory Contact Data

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the ICRA Workshop on Embracing Contacts, London, UK, May 2023 (misc)

Abstract
Sliding a finger or tool along a surface generates rich haptic and auditory contact signals that encode properties crucial for manipulation, such as friction and hardness. To engage in contact-rich manipulation, future robots would benefit from having surface-characterization capabilities similar to humans, but the optimal sensing configuration is not yet known. Thus, we developed a test bed for capturing high-quality measurements as a human touches surfaces with different tools: it includes optical motion capture, a force/torque sensor under the surface sample, high-bandwidth accelerometers on the tool and the fingertip, and a high-fidelity microphone. After recording data from three tool diameters and nine surfaces, we describe a surface-classification pipeline that uses the maximum mean discrepancy (MMD) to compare newly gathered data to each surface in our known library. The results achieved under several pipeline variations are compared, and future investigations are outlined.


link (url) Project Page [BibTex]

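The library-matching step this abstract describes reduces to a few lines. The hypothetical helper below reuses the mmd2 function sketched under the 2023 IROS causality workshop entry above; it is our schematic of the idea, not the paper's actual pipeline.

def classify_surface(query, library, sigma=1.0):
    # query:   (n, d) array of contact features from the new recording
    # library: dict mapping surface name -> (m, d) array of known samples
    # Returns the surface whose sample distribution is closest in MMD^2.
    return min(library, key=lambda name: mmd2(query, library[name], sigma))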


OCRA: An Optimization-Based Customizable Retargeting Algorithm for Teleoperation

Mohan, M., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop Toward Robot Avatars, London, UK, May 2023 (misc)

Abstract
This paper presents a real-time optimization-based algorithm for mapping motion between two kinematically dissimilar serial linkages, such as a human arm and a robot arm. OCRA can be customized based on the target task to weight end-effector orientation versus the configuration of the central line of the arm, which we call the skeleton. A video-watching study (N=70) demonstrated that when this algorithm considers both the hand orientation and the arm skeleton, it creates robot arm motions that users perceive to be highly similar to those of the human operator, indicating OCRA would be suitable for telerobotics and telepresence through avatars.


link (url) Project Page [BibTex]

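The abstract states the orientation-versus-skeleton trade-off but not the objective itself; a plausible schematic form of the per-frame optimization, in our notation rather than the paper's, is

\min_{q} \; w_o \, d_\mathrm{ori}\!\left(R_\mathrm{ee}(q), R_\mathrm{human}\right) + w_s \, d_\mathrm{skel}\!\left(s(q), s_\mathrm{human}\right)

where q is the robot configuration, R_ee(q) its end-effector orientation, s(q) its arm skeleton (the central line of the arm), and the weights w_o and w_s encode the task-dependent emphasis on hand orientation versus skeleton similarity. The actual OCRA formulation may differ.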


Wearable Biofeedback for Knee Joint Health

Rokhmanova, N.

Extended abstract (5 pages) presented at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) Doctoral Consortium, Hamburg, Germany, April 2023 (misc)

Abstract
The human body has the tremendous capacity to learn a new way of walking that reduces its risk of musculoskeletal disease progression. Wearable haptic biofeedback has been used to guide gait retraining in patients with knee osteoarthritis, enabling reductions in pain and improvement in function. However, this promising therapy is not yet a part of standard clinical practice. Here, I propose a two-pronged approach to improving the design and deployment of biofeedback for gait retraining. The first section concerns prescription, with the aim of providing clinicians with an interpretable model of gait retraining outcome in order to best guide their treatment decisions. The second section concerns learning, by examining how internal physiological state and external environmental factors influence the process of learning a therapeutic gait. This work aims to address the challenges keeping a highly promising intervention from being widely used to maintain pain-free mobility throughout the lifespan.


DOI Project Page [BibTex]



A Lasting Impact: Using Second-Order Dynamics to Customize the Continuous Emotional Expression of a Social Robot

Burns, R. B., Kuchenbecker, K. J.

Workshop paper (5 pages) presented at the HRI Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI), Stockholm, Sweden, March 2023 (misc)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots need social skills to interact warmly and effectively with their users, as well as adaptive behavior to maintain user interest. While complex emotion models exist for chat bots and virtual agents, autonomous physical robots often lack a dynamic internal affective state, instead displaying brief, fixed emotion routines to promote or discourage specific user actions. We address this need by creating a mathematical emotion model that can easily be implemented in a social robot to enable it to react intelligently to external stimuli. The robot's affective state is modeled as a second-order dynamic system analogous to a mass connected to ground by a parallel spring and damper. The present position of this imaginary mass shows the robot's valence, which we visualize as the height of its displayed smile (positive) or frown (negative). Associating positive and negative stimuli with appropriately oriented and sized force pulses applied to the mass enables the robot to respond to social touch and other inputs with a valence that evolves over a longer timescale, capturing essential features of approach-avoidance theory. By adjusting the parameters of this emotion model, one can modify three main aspects of the robot's personality, which we term disposition, stoicism, and calmness.


link (url) Project Page [BibTex]

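The mass-spring-damper analogy maps directly onto a discrete-time simulation. Below is a minimal sketch with illustrative parameter values and names of our choosing (the paper's actual constants and pulse shapes may differ): the valence x obeys m x'' + b x' + k x = F(t), where F(t) is a train of stimulus-driven force pulses.

import numpy as np

def simulate_valence(pulses, m=1.0, b=0.8, k=2.0, dt=0.01, duration=30.0):
    # Integrate m*x'' + b*x' + k*x = F(t) with semi-implicit Euler.
    # pulses: list of (start_time, force, width); positive forces model
    # pleasant stimuli (smile), negative forces unpleasant ones (frown).
    n = int(duration / dt)
    t = np.arange(n) * dt
    x = np.zeros(n)  # valence: > 0 smile, < 0 frown
    v = 0.0
    for i in range(1, n):
        F = sum(f for (t0, f, w) in pulses if t0 <= t[i] < t0 + w)
        v += dt * (F - b * v - k * x[i - 1]) / m
        x[i] = x[i - 1] + dt * v
    return t, x

# Example: a friendly pat at t = 2 s, then a hit at t = 10 s.
t, x = simulate_valence([(2.0, 5.0, 0.2), (10.0, -8.0, 0.2)])

Tuning m, b, and k changes how strongly stimuli deflect the mood and how quickly it settles back to neutral, which is the sense in which the authors adjust personality traits such as disposition, stoicism, and calmness.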


Synchronizing Machine Learning Algorithms, Realtime Robotic Control and Simulated Environment with o80

Berenz, V., Widmaier, F., Guist, S., Schölkopf, B., Büchler, D.

Robot Software Architectures Workshop (RSA) 2023, ICRA, 2023 (techreport)

Abstract
Robotic applications require the integration of various modalities, encompassing perception, control of real robots and possibly the control of simulated environments. While state-of-the-art robotic software solutions such as ROS 2 provide most of the required features, flexible synchronization between algorithms, data streams and control loops can be tedious. o80 is a versatile C++ framework for robotics which provides a shared memory model and a command framework for real-time critical systems. It enables expert users to set up complex robotic systems and generate Python bindings for scientists. o80's unique feature is its flexible synchronization between processes, including the traditional blocking commands and the novel "bursting mode", which allows user code to control the execution of the lower process control loop. This makes it particularly useful for setups that mix real and simulated environments.


arxiv poster link (url) [BibTex]


Challenging Common Assumptions in Multi-task Learning

Elich, C., Kirchdorfer, L., Köhler, J. M., Schott, L.

abs/2311.04698, CoRR/arxiv, 2023 (techreport)


paper link (url) [BibTex]


2022


A Sequential Group VAE for Robot Learning of Haptic Representations

Richardson, B. A., Kuchenbecker, K. J., Martius, G.

pages: 1-11, Workshop paper (8 pages) presented at the CoRL Workshop on Aligning Robot Representations with Humans, Auckland, New Zealand, December 2022 (misc)

Abstract
Haptic representation learning is a difficult task in robotics because information can be gathered only by actively exploring the environment over time, and because different actions elicit different object properties. We propose a Sequential Group VAE that leverages object persistence to learn and update latent general representations of multimodal haptic data. As a robot performs sequences of exploratory procedures on an object, the model accumulates data and learns to distinguish between general object properties, such as size and mass, and trial-to-trial variations, such as initial object position. We demonstrate that after very few observations, the general latent representations are sufficiently refined to accurately encode many haptic object properties.


link (url) Project Page [BibTex]



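A toy sketch of the latent split at the heart of this model, in standard PyTorch; the architecture, the sizes, and the simple averaging used to pool content evidence across trials are our simplifications, not the authors' exact design.

import torch
import torch.nn as nn

class ToyGroupVAE(nn.Module):
    # One 'content' latent shared across all observations of an object
    # (size, mass, ...), one 'style' latent per trial (initial position, ...).
    def __init__(self, x_dim=64, c_dim=8, s_dim=4, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.to_c = nn.Linear(h, 2 * c_dim)  # content mean, log-variance
        self.to_s = nn.Linear(h, 2 * s_dim)  # style mean, log-variance
        self.dec = nn.Sequential(nn.Linear(c_dim + s_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def forward(self, x_seq):  # x_seq: (T, x_dim), T trials on one object
        h = self.enc(x_seq)
        # Content: pool evidence over the whole sequence (object persistence).
        c_mu, c_lv = self.to_c(h).mean(dim=0).chunk(2)
        # Style: an independent latent for every trial.
        s_mu, s_lv = self.to_s(h).chunk(2, dim=-1)
        c = c_mu + torch.randn_like(c_mu) * (0.5 * c_lv).exp()
        s = s_mu + torch.randn_like(s_mu) * (0.5 * s_lv).exp()
        return self.dec(torch.cat([c.expand(len(x_seq), -1), s], dim=-1))

Training would maximize the usual VAE evidence lower bound; the sketch only illustrates how accumulating trials sharpens the shared content estimate while per-trial variation is absorbed by the style latent.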


Reconstructing Expressive 3D Humans from RGB Images

Choutas, V.

ETH Zurich, Max Planck Institute for Intelligent Systems and ETH Zurich, December 2022 (thesis)

Abstract
To interact with our environment, we need to adapt our body posture and grasp objects with our hands. During a conversation our facial expressions and hand gestures convey important non-verbal cues about our emotional state and intentions towards our fellow speakers. Thus, modeling and capturing 3D full-body shape and pose, hand articulation and facial expressions are necessary to create realistic human avatars for augmented and virtual reality. This is a complex task, due to the large number of degrees of freedom for articulation, body shape variance, occlusions from objects and self-occlusions from body parts, e.g. crossing our hands, and subject appearance. The community has thus far relied on expensive and cumbersome equipment, such as multi-view cameras or motion capture markers, to capture the 3D human body. While this approach is effective, it is limited to a small number of subjects and indoor scenarios. Using monocular RGB cameras would greatly simplify the avatar creation process, thanks to their lower cost and ease of use. These advantages come at a price though, since RGB capture methods need to deal with occlusions, perspective ambiguity and large variations in subject appearance, in addition to all the challenges posed by full-body capture. In an attempt to simplify the problem, researchers generally adopt a divide-and-conquer strategy, estimating the body, face and hands with distinct methods using part-specific datasets and benchmarks. However, the hands and face constrain the body and vice-versa, e.g. the position of the wrist depends on the elbow, shoulder, etc.; the divide-and-conquer approach can not utilize this constraint.

In this thesis, we aim to reconstruct the full 3D human body, using only readily accessible monocular RGB images. In a first step, we introduce a parametric 3D body model, called SMPL-X, that can represent full-body shape and pose, hand articulation and facial expression. Next, we present an iterative optimization method, named SMPLify-X, that fits SMPL-X to 2D image keypoints. While SMPLify-X can produce plausible results if the 2D observations are sufficiently reliable, it is slow and susceptible to initialization. To overcome these limitations, we introduce ExPose, a neural network regressor, that predicts SMPL-X parameters from an image using body-driven attention, i.e. by zooming in on the hands and face, after predicting the body. From the zoomed-in part images, dedicated part networks predict the hand and face parameters. ExPose combines the independent body, hand, and face estimates by trusting them equally. This approach though does not fully exploit the correlation between parts and fails in the presence of challenges such as occlusion or motion blur. Thus, we need a better mechanism to aggregate information from the full body and part images. PIXIE uses neural networks called moderators that learn to fuse information from these two image sets before predicting the final part parameters. Overall, the addition of the hands and face leads to noticeably more natural and expressive reconstructions.

Creating high fidelity avatars from RGB images requires accurate estimation of 3D body shape. Although existing methods are effective at predicting body pose, they struggle with body shape. We identify the lack of proper training data as the cause. To overcome this obstacle, we propose to collect internet images from fashion models websites, together with anthropometric measurements. At the same time, we ask human annotators to rate images and meshes according to a pre-defined set of linguistic attributes. We then define mappings between measurements, linguistic shape attributes and 3D body shape. Equipped with these mappings, we train a neural network regressor, SHAPY, that predicts accurate 3D body shapes from a single RGB image. We observe that existing 3D shape benchmarks lack subject variety and/or ground-truth shape. Thus, we introduce a new benchmark, Human Bodies in the Wild (HBW), which contains images of humans and their corresponding 3D ground-truth body shape. SHAPY shows how we can overcome the lack of in-the-wild images with 3D shape annotations through easy-to-obtain anthropometric measurements and linguistic shape attributes.

Regressors that estimate 3D model parameters are robust and accurate, but often fail to tightly fit the observations. Optimization-based approaches tightly fit the data, by minimizing an energy function composed of a data term that penalizes deviations from the observations and priors that encode our knowledge of the problem. Finding the balance between these terms and implementing a performant version of the solver is a time-consuming and non-trivial task. Machine-learned continuous optimizers combine the benefits of both regression and optimization approaches. They learn the priors directly from data, avoiding the need for hand-crafted heuristics and loss term balancing, and benefit from optimized neural network frameworks for fast inference. Inspired from the classic Levenberg-Marquardt algorithm, we propose a neural optimizer that outperforms classic optimization, regression and hybrid optimization-regression approaches. Our proposed update rule uses a weighted combination of gradient descent and a network-predicted update. To show the versatility of the proposed method, we apply it on three other problems, namely full body estimation from (i) 2D keypoints, (ii) head and hand location from a head-mounted device and (iii) face tracking from dense 2D landmarks. Our method can easily be applied to new model fitting problems and offers a competitive alternative to well-tuned traditional model fitting pipelines, both in terms of accuracy and speed.

To summarize, we propose a new and richer representation of the human body, SMPL-X, that is able to jointly model the 3D human body pose and shape, facial expressions and hand articulation. We propose methods, SMPLify-X, ExPose and PIXIE that estimate SMPL-X parameters from monocular RGB images, progressively improving the accuracy and realism of the predictions. To further improve reconstruction fidelity, we demonstrate how we can use easy-to-collect internet data and human annotations to overcome the lack of 3D shape data and train a model, SHAPY, that predicts accurate 3D body shape from a single RGB image. Finally, we propose a flexible learnable update rule for parametric human model fitting that outperforms both classic optimization and neural network approaches. This approach is easily applicable to a variety of problems, unlocking new applications in AR/VR scenarios.


pdf [BibTex]

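One concrete detail from this abstract deserves unpacking: the learned optimizer's update rule. Schematically, in our notation (paraphrasing the abstract, not quoting the thesis),

\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta E(\theta_t) + \beta_t \, \Delta_\mathrm{NN}(\theta_t, \mathrm{observations})

i.e., each step blends a gradient-descent direction on the model-fitting energy E with a network-predicted update, echoing how Levenberg-Marquardt blends gradient and Gauss-Newton steps.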


Semi-Automated Robotic Pleural Cavity Access in Space

L’Orsa, R., Lotbiniere-Bassett, M. D., Zareinia, K., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

Poster presented at the Canadian Space Health Research Symposium (CSHRS), Alberta, Canada, November 2022 (misc)

Abstract
Astronauts are at risk for pneumothorax, a medical condition where air accumulating between the chest wall and the lungs impedes breathing and can result in fatality. Treatments include needle decompression (ND) and chest tube insertion (tube thoracostomy, TT). Unfortunately, the literature reports very high failure rates for ND and high complication rates for TT, especially when performed urgently, infrequently, or by inexperienced operators. These statistics are problematic in the context of skill retention for physician astronauts on long-duration exploration-class missions, or for non-medical astronauts if the physician astronaut is the one in need of treatment. We propose reducing the medical risk for exploration-class missions by improving ND/TT outcomes using a robot-based paradigm that automates tool depth control. Our goal is to produce a robotic system that improves the safety of pneumothorax treatments regardless of operator skill and without the use of ground resources. This poster provides an overview of our team's work toward this goal, including robot instrumentation schemes, tool-tissue interaction characterization, and automated puncture detection.


Project Page [BibTex]



Do-It-Yourself Whole-Body Social-Touch Perception for a NAO Robot

Burns, R. B., Rosenthal, R., Garg, K., Kuchenbecker, K. J.

Workshop paper (1 page) presented at the IROS Workshop on Large-Scale Robotic Skin: Perception, Interaction and Control, Kyoto, Japan, October 2022 (misc)


Poster link (url) Project Page [BibTex]



A Soft Vision-Based Tactile Sensor for Robotic Fingertip Manipulation

Andrussow, I., Sun, H., Kuchenbecker, K. J., Martius, G.

Workshop paper (1 page) presented at the IROS Workshop on Large-Scale Robotic Skin: Perception, Interaction and Control, Kyoto, Japan, October 2022 (misc)

Abstract
For robots to become fully dexterous, their hardware needs to provide rich sensory feedback. High-resolution haptic sensing similar to the human fingertip can enable robots to execute delicate manipulation tasks like picking up small objects, inserting a key into a lock, or handing a cup of coffee to a human. Many tactile sensors have emerged in recent years; one especially promising direction is vision-based tactile sensors due to their low cost, low wiring complexity and high-resolution sensing capabilities. In this work, we build on previous findings to create a soft fingertip-sized tactile sensor. It can sense normal and shear contact forces all around its 3D surface with an average prediction error of 0.05 N, and it localizes contact on its shell with an average prediction error of 0.5 mm. The software of this sensor uses a data-efficient machine-learning pipeline to run in real time on hardware with low computational power like a Raspberry Pi. It provides a maximum data frame rate of 60 Hz via USB.


link (url) Project Page [BibTex]



Sensor Patterns Dataset for Endowing a NAO Robot with Practical Social-Touch Perception

Burns, R. B., Lee, H., Seifi, H., Faulkner, R., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Endowing a NAO Robot with Practical Social-Touch Perception" in Frontiers in Robotics and AI, October 2022 (misc)


DOI Project Page [BibTex]



Predicting Knee Adduction Moment Response to Gait Retraining

Rokhmanova, N., Kuchenbecker, K. J., Shull, P. B., Ferber, R., Halilaj, E.

Extended abstract presented at North American Congress of Biomechanics (NACOB), Ottawa, Canada, August 2022 (misc)

Abstract
Personalized gait retraining has shown promise as a conservative intervention for slowing knee osteoarthritis (OA) progression [1,2]. Changing the foot progression angle is an easy-to-learn gait modification that often reduces the knee adduction moment (KAM), a correlate of medial joint loading. Deployment to clinics is challenging, however, because customizing gait retraining still requires gait lab instrumentation. Innovation in wearable sensing and vision-based motion tracking could bring lab-level accuracy to the clinic, but current markerless motion-tracking algorithms cannot accurately assess if gait retraining will reduce someone's KAM by a clinically meaningful margin. To assist clinicians in determining if a patient will benefit from toe-in gait, we built a predictive model to estimate KAM reduction using only measurements that can be easily obtained in the clinic.


Project Page [BibTex]



Data of: Gastrocnemius and Power Amplifier Soleus Spring-Tendons Achieve Fast Human-like Walking in a Bipedal Robot

Kiss, B., Gonen, E. C., Mo, A., Buchmann, A., Renjewski, D., Badri-Spröwitz, A.

July 2022 (misc)

Abstract
Data, code, and CAD for the IROS 2022 publication "Gastrocnemius and Power Amplifier Soleus Spring-Tendons Achieve Fast Human-like Walking in a Bipedal Robot".


link (url) DOI [BibTex]


A Sensorized Needle-Insertion Device for Characterizing Percutaneous Thoracic Tool-Tissue Interactions

L’Orsa, R., Zareinia, K., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

Short paper (2 pages) presented at the Hamlyn Symposium on Medical Robotics (HSMR), London, UK, June 2022 (misc)

Abstract
Serious complications during chest tube insertion are relatively rare, but can have catastrophic repercussions. We propose semi-automating tool insertion to help protect against non-target tissue puncture, and report first steps collecting and characterizing needle-tissue interaction forces in a tissue phantom used for chest tube insertion training.


Project Page [BibTex]



Dense 3D Reconstruction Through Lidar: A New Perspective on Computer-Integrated Surgery

Caccianiga, G., Kuchenbecker, K. J.

Short paper (2 pages) presented at the Hamlyn Symposium on Medical Robotics (HSMR), London, UK, June 2022 (misc)

Abstract
Technical innovations in sensing and computation are quickly advancing the field of computer-integrated surgery. In this fast-evolving panorama, we strongly believe there is still a need for robust geometric reconstruction of the surgical field. 3D reconstruction in surgery has been investigated almost exclusively with mono and stereoscopic visual imaging because surgeons always view the procedure through a clinical endoscope. Meanwhile, lidar (light detection and ranging) has greatly expanded in use, especially in SLAM for robotics, terrestrial vehicles, and drones. In parallel to these developments, the concept of multiple-viewpoint surgical imaging was proposed in the early 2010s in the context of magnetic actuation and micro-invasive surgery. We here propose an approach in which each surgical cannula can potentially hold a miniature lidar. Direct comparison between lidar from different viewpoints and a state-of-the-art 3D reconstruction method on stereoendoscope images showed that lidar-generated point clouds achieve better accuracy and scene coverage. This experiment especially hints at the potential of lidar imaging when deployed in a multiple-viewpoint approach.


Project Page [BibTex]



Comparing Two Grounded Force-Feedback Haptic Devices

Fazlollahi, F., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Hamburg, Germany, May 2022 (misc)

Abstract
Even when they are not powered, grounded force-feedback haptic devices apply forces on the user's hand. These undesired forces stem from gravity, friction, and other nonidealities, and they still exist when the device renders a virtual environment. This demo invites users to compare how the 3D Systems Touch and Touch X devices render the same haptic content. Participants will try both devices in free space and touch a stiff frictionless virtual surface. After reflecting on the differences between the two devices, each person will receive a booklet showing the quantitative performance criteria we measured for both devices using Haptify, our benchmarking system.


Project Page [BibTex]



Finger Contact during Pressing and Sliding on a Glass Plate

Nam, S., Gueorguiev, D., Kuchenbecker, K. J.

Poster presented at the EuroHaptics Workshop on Skin Mechanics and its Role in Manipulation and Perception, Hamburg, Germany, May 2022 (misc)

Abstract
Light contact between the finger and the surface of an object sometimes causes an unanticipated slip. However, conditions causing this slip have not been fully understood, mainly because the biological components interact in complex ways to generate the skin-surface frictional properties. We investigated how the contact area starts slipping in various conditions of moisture, occlusion, and temperature during a lateral motion performed while pressing lightly on the surface.


Project Page [BibTex]



HuggieBot: A Human-Sized Haptic Interface

Block, A. E., Seifi, H., Christen, S., Javot, B., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Hamburg, Germany, May 2022, Award for best hands-on demonstration (misc)

Abstract
How many people have you hugged in these past two years of social distancing? Unfortunately, many people we interviewed exchanged fewer hugs with friends and family since the onset of the COVID-19 pandemic. Hugging has several health benefits, such as improved oxytocin levels, lowered blood pressure, and alleviated stress and anxiety. We created a human-sized haptic interface called HuggieBot to provide the benefits of hugs in situations when receiving a hug from another person is difficult or impossible. In this demonstration, participants of all shapes and sizes can walk up to HuggieBot, enter an embrace, perform several intra-hug gestures (hold still, rub, pat, or squeeze the robot) if desired, feel the robot's response, and leave the hug when they are ready.


Project Page [BibTex]



User Study Dataset for Endowing a NAO Robot with Practical Social-Touch Perception

Burns, R. B., Lee, H., Seifi, H., Faulkner, R., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Endowing a NAO Robot with Practical Social-Touch Perception" in Frontiers in Robotics and AI, April 2022 (misc)


DOI Project Page [BibTex]



Proceedings of the First Conference on Causal Learning and Reasoning (CLeaR 2022)

Schölkopf, B., Uhler, C., Zhang, K.

177, Proceedings of Machine Learning Research, PMLR, April 2022 (proceedings)


link (url) [BibTex]



Observability Analysis of Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models

Li, H., Stueckler, J.

abs/2204.06651, CoRR/arxiv, 2022 (techreport)

Abstract
In this paper, we analyze the observability of visual-inertial odometry (VIO) using stereo cameras with a velocity-control based kinematic motion model. Previous work shows that in the general case the global position and yaw are unobservable in a VIO system; additionally, roll and pitch also become unobservable if there is no rotation. We prove that by integrating a planar motion constraint, roll and pitch become observable. We also show that the parameters of the motion model are observable.


link (url) [BibTex]

2021


Teaching Safe Social Touch Interactions Using a Robot Koala

Burns, R. B.

Workshop paper (1 page) presented at the IROS Workshop on Proximity Perception in Robotics: Increasing Safety for Human-Robot Interaction Using Tactile and Proximity Perception, Prague, Czech Republic, September 2021 (misc)


link (url) Project Page [BibTex]


