

2024


Demonstration: OCRA - A Kinematic Retargeting Algorithm for Expressive Whole-Arm Teleoperation

Mohan, M., Kuchenbecker, K. J.

Hands-on demonstration presented at the Conference on Robot Learning (CoRL), Munich, Germany, November 2024 (misc) Accepted

Abstract
Traditional teleoperation systems focus on controlling the pose of the end-effector (task space), often neglecting the additional degrees of freedom present in human and many robotic arms. This demonstration presents the Optimization-based Customizable Retargeting Algorithm (OCRA), which was designed to map motions from one serial kinematic chain to another in real time. OCRA is versatile, accommodating arbitrary robot joint counts and segment lengths, and it can retarget motions from human arms to kinematically different serial robot arms with revolute joints both expressively and efficiently. One of OCRA's key features is its customizability, allowing the user to adjust the emphasis between hand orientation error and the configuration error of the arm's central line, which we call the arm skeleton. To evaluate the perceptual quality of the motions generated by OCRA, we conducted a video-watching study with 70 participants; the results indicated that the algorithm produces robot motions that closely resemble human movements, with a median rating of 78/100, particularly when the arm-skeleton and hand-orientation error weights are balanced. In this demonstration, the presenter will wear an Xsens MVN Link motion-capture suit and teleoperate the arms of a NAO child-size humanoid robot to highlight OCRA's ability to create intuitive and human-like whole-arm motions.
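
The weighted trade-off described above can be illustrated with a small optimization sketch. The following Python snippet is not OCRA itself: the planar kinematics, joint count, segment lengths, optimizer, and weight value are simplified assumptions chosen only to show how one weight can balance hand-orientation error against arm-skeleton error; the real algorithm additionally handles chains with differing joint counts and segment lengths in real time.

```python
# Minimal sketch of a weighted retargeting objective in the spirit of OCRA
# (illustrative only; not the authors' implementation).
import numpy as np
from scipy.optimize import minimize

LINKS = np.array([0.30, 0.25, 0.20])  # assumed segment lengths of a planar 3-link arm [m]

def forward_kinematics(q):
    """Return the joint positions (the 'arm skeleton') and the hand orientation."""
    pts, angle, p = [np.zeros(2)], 0.0, np.zeros(2)
    for qi, li in zip(q, LINKS):
        angle += qi
        p = p + li * np.array([np.cos(angle), np.sin(angle)])
        pts.append(p)
    return np.array(pts), angle

def retarget(human_skeleton, human_hand_angle, w=0.5, q0=None):
    """Find joint angles that trade off hand-orientation error vs. skeleton error."""
    q0 = np.zeros(len(LINKS)) if q0 is None else q0

    def cost(q):
        skel, hand = forward_kinematics(q)
        e_hand = (hand - human_hand_angle) ** 2          # hand-orientation error
        e_skel = np.sum((skel - human_skeleton) ** 2)    # arm-skeleton configuration error
        return w * e_hand + (1.0 - w) * e_skel

    return minimize(cost, q0, method="L-BFGS-B").x

# Example: retarget a captured pose given as skeleton points plus a hand angle.
human_skel, human_hand = forward_kinematics(np.array([0.4, -0.3, 0.6]))
print(np.round(retarget(human_skel, human_hand, w=0.5), 3))
```

Sweeping w between 0 and 1 shifts the solution from skeleton matching toward hand-orientation matching, which mirrors the customizability evaluated in the video-watching study.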

hi

Project Page [BibTex]

Demonstration: Minsight - A Soft Vision-Based Tactile Sensor for Robotic Fingertips

Andrussow, I., Sun, H., Martius, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the Conference on Robot Learning (CoRL), Munich, Germany, November 2024 (misc) Accepted

Abstract
Beyond vision and hearing, tactile sensing enhances a robot's ability to dexterously manipulate unfamiliar objects and safely interact with humans. Giving touch sensitivity to robots requires compact, robust, affordable, and efficient hardware designs, especially for high-resolution tactile sensing. We present a soft vision-based tactile sensor engineered to meet these requirements. Comparable in size to a human fingertip, Minsight uses machine learning to output high-resolution directional contact force distributions at 60 Hz. Minsight's tactile force maps enable precise sensing of fingertip contacts, which we use in this hands-on demonstration to allow a 3-DoF robot arm to physically track contact with a user's finger. While observing the colorful image captured by Minsight's internal camera, attendees can experience how its ability to detect delicate touches in all directions facilitates real-time robot interaction.
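
As a rough illustration of how a tactile force map can drive the finger-tracking behavior described above, the sketch below finds the contact centroid in a force map and converts its offset from the sensor center into a proportional velocity command. The array layout, contact threshold, gain, and robot interface are assumptions for illustration, not the demonstration's actual code.

```python
# Minimal sketch: use a fingertip force map (H x W x 3) to keep contact centered.
import numpy as np

def contact_centroid(force_map, threshold=0.05):
    """Return the (row, col) centroid of normal force, or None if there is no contact."""
    normal = force_map[..., 2]                  # assume channel 2 holds the normal force
    mask = normal > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = normal[rows, cols]
    return np.array([np.average(rows, weights=weights),
                     np.average(cols, weights=weights)])

def tracking_velocity(force_map, gain=0.002):
    """P-controller: move the arm so the contact centroid stays at the map center."""
    centroid = contact_centroid(force_map)
    if centroid is None:
        return np.zeros(2)                      # no contact: hold still
    center = (np.array(force_map.shape[:2]) - 1) / 2.0
    return gain * (centroid - center)           # in-plane velocity command

# Example with a synthetic 40 x 40 force map touched near one corner.
fm = np.zeros((40, 40, 3))
fm[30:34, 8:12, 2] = 1.0
print(tracking_velocity(fm))
```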

al hi ei

Project Page [BibTex]

Active Haptic Feedback for a Virtual Wrist-Anchored User Interface

Bartels, J. U., Sanchez-Tamayo, N., Sedlmair, M., Kuchenbecker, K. J.

Hands-on demonstration presented at the ACM Symposium on User Interface Software and Technology (UIST), Pittsburgh, USA, October 2024 (misc) Accepted

hi

DOI [BibTex]

Leveraging Unpaired Data for the Creation of Controllable Digital Humans

Sanyal, S.

Max Planck Institute for Intelligent Systems and Eberhard Karls Universität Tübingen, September 2024 (phdthesis) To be published

Abstract
Digital humans have grown increasingly popular, offering transformative potential across various fields such as education, entertainment, and healthcare. They enrich user experiences by providing immersive and personalized interactions. Enhancing these experiences involves making digital humans controllable, allowing for manipulation of aspects like pose and appearance, among others. Learning to create such controllable digital humans necessitates extensive data from diverse sources. This includes 2D human images alongside their corresponding 3D geometry and texture, 2D images showcasing similar appearances across a wide range of body poses, etc., for effective control over pose and appearance. However, the availability of such “paired data” is limited, making its collection both time-consuming and expensive. Despite these challenges, there is an abundance of unpaired 2D images with accessible, inexpensive labels—such as identity, type of clothing, appearance of clothing, etc. This thesis capitalizes on these affordable labels, employing informed observations from “unpaired data” to facilitate the learning of controllable digital humans through reconstruction, transposition, and generation processes. The presented methods—RingNet, SPICE, and SCULPT—each tackles different aspects of controllable digital human modeling. RingNet (Sanyal et al. [2019]) exploits the consistent facial geometry across different images of the same individual to estimate 3D face shapes and poses without 2D-to-3D supervision. This method illustrates how leveraging the inherent properties of unpaired images—such as identity consistency—can circumvent the need for expensive paired datasets. Similarly, SPICE (Sanyal et al. [2021]) employs a self-supervised learning framework that harnesses unpaired images to generate realistic transpositions of human poses by understanding the underlying 3D body structure and maintaining consistency in body shape and appearance features across different poses. Finally, SCULPT (Sanyal et al. [2024]) generates clothed and textured 3D meshes by integrating insights from unpaired 2D images and medium-sized 3D scans. This process employs an unpaired learning approach, conditioning texture and geometry generation on attributes easily derived from data, like the type and appearance of clothing. In conclusion, this thesis highlights how unpaired data and innovative learning techniques can address the challenges of data scarcity and high costs in developing controllable digital humans by advancing reconstruction, transposition, and generation techniques.

ps

[BibTex]

Realistic Digital Human Characters: Challenges, Models and Algorithms

Osman, A. A. A.

University of Tübingen, September 2024 (phdthesis)

Abstract
Statistical models for the body, head, and hands are essential in various computer vision tasks. However, popular models like SMPL, MANO, and FLAME produce unrealistic deformations due to inherent flaws in their modeling assumptions and how they are trained, which have become standard practices in constructing models for the body and its parts. This dissertation addresses these limitations by proposing new modeling and training algorithms to improve the realism and generalization of current models. We introduce a new model, STAR (Sparse Trained Articulated Human Body Regressor), which learns a sparse representation of the human body deformations, significantly reducing the number of model parameters compared to models like SMPL. This approach ensures that deformations are spatially localized, leading to more realistic deformations. STAR also incorporates shape-dependent pose deformations, accounting for variations in body shape to enhance overall model accuracy and realism. Additionally, we present a novel federated training algorithm for developing a comprehensive suite of models for the body and its parts. We train an expressive body model, SUPR (Sparse Unified Part-Based Representation), on a federated dataset of full-body scans, including detailed scans of the head, hands, and feet. We then separate SUPR into a full suite of state-of-the-art models for the head, hands, and foot. The new foot model captures complex foot deformations, addressing challenges related to foot shape, pose, and ground contact dynamics. The dissertation concludes by introducing AVATAR (Articulated Virtual Humans Trained By Bayesian Inference From a Single Scan), a novel, data-efficient training algorithm. AVATAR allows the creation of personalized, high-fidelity body models from a single scan by framing model construction as a Bayesian inference problem, thereby enabling training from small-scale datasets while reducing the risk of overfitting. These advancements push the state of the art in human body modeling and training techniques, making them more accessible for broader research and practical applications.

ps

[BibTex]


Advances in Probabilistic Methods for Deep Learning

Immer, A.

ETH Zurich, Switzerland, September 2024, CLS PhD Program (phdthesis)

ei

[BibTex]

Modeling Shank Tissue Properties and Quantifying Body Composition with a Wearable Actuator-Accelerometer Set

Rokhmanova, N., Martus, J., Faulkner, R., Fiene, J., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Madison, USA, August 2024 (misc)

hi

Project Page [BibTex]

Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Madison, USA, August 2024 (misc)

hi

[BibTex]

Engineering and Evaluating Naturalistic Vibrotactile Feedback for Telerobotic Assembly

Gong, Y.

University of Stuttgart, Stuttgart, Germany, August 2024, Faculty of Design, Production Engineering and Automotive Engineering (phdthesis)

Abstract
Teleoperation allows workers on a construction site to assemble pre-fabricated building components by controlling powerful machines from a safe distance. However, teleoperation's primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, compromising their situational awareness and thus increasing the difficulty of the task; it also makes construction machines more difficult to learn to operate. To bridge this gap, we propose that reliable, economical, and easy-to-implement naturalistic vibrotactile feedback could improve telerobotic control interfaces in construction and other application areas such as surgery. This type of feedback enables the operator to feel the natural vibrations experienced by the robot, which contain crucial information about its motions and its physical interactions with the environment. This dissertation explores how to deliver naturalistic vibrotactile feedback from a robot's end-effector to the hand of an operator performing telerobotic assembly tasks; furthermore, it seeks to understand the effects of such haptic cues. The presented research can be divided into four parts. We first describe the engineering of AiroTouch, a naturalistic vibrotactile feedback system tailored for use on construction sites but suitable for many other applications of telerobotics. Then we evaluate AiroTouch and explore the effects of the naturalistic vibrotactile feedback it delivers in three user studies conducted either in laboratory settings or on a construction site. We begin this dissertation by developing guidelines for creating a haptic feedback system that provides high-quality naturalistic vibrotactile feedback. These guidelines include three sections: component selection, component placement, and system evaluation. We detail each aspect with the parameters that need to be considered. Based on these guidelines, we adapt widely available commercial audio equipment to create our system called AiroTouch, which measures the vibration experienced by each robot tool with a high-bandwidth three-axis accelerometer and enables the user to feel this vibration in real time through a voice-coil actuator. Accurate haptic transmission is achieved by optimizing the positions of the system's off-the-shelf sensors and actuators and is then verified through measurements. The second part of this thesis presents our initial validation of AiroTouch. We explored how adding this naturalistic type of vibrotactile feedback affects the operator during small-scale telerobotic assembly. Due to the limited accessibility of teleoperated robots and to maintain safety, we conducted a user study in lab with a commercial bimanual dexterous teleoperation system developed for surgery (Intuitive da Vinci Si). Thirty participants used this robot equipped with AiroTouch to assemble a small stiff structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that participants learn to take advantage of both tested versions of the haptic feedback in the given tasks, as significantly lower vibrations and forces are observed in the second trial. Subjective responses indicate that naturalistic vibrotactile feedback increases the realism of the interaction and reduces the perceived task duration, task difficulty, and fatigue. 
To test our approach on a real construction site, we enhanced AiroTouch using wireless signal-transmission technologies and waterproofing, and then we adapted it to a mini-crane construction robot. A study was conducted to evaluate how naturalistic vibrotactile feedback affects an observer's understanding of telerobotic assembly performed by this robot on a construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicates that all participants had positive responses to this technology and believed it would be beneficial for construction activities. Finally, we evaluated the effects of naturalistic vibrotactile feedback provided by wireless AiroTouch during live teleoperation of the mini-crane. Twenty-eight participants remotely controlled the mini-crane to complete three large-scale assembly-related tasks in lab, both with and without this type of haptic feedback. Our results show that naturalistic vibrotactile feedback enhances the participants' awareness of both robot motion and contact between the robot and other objects, particularly in scenarios with limited visibility. These effects increase participants' confidence when controlling the robot. Moreover, there is a noticeable trend of reduced vibration magnitude in the conditions where this type of haptic feedback is provided. The primary contribution of this dissertation is the clear explanation of details that are essential for the effective implementation of naturalistic vibrotactile feedback. We demonstrate that our accessible, audio-based approach can enhance user performance and experience during telerobotic assembly in construction and other application domains. These findings lay the foundation for further exploration of the potential benefits of incorporating haptic cues to enhance user experience during teleoperation.

hi

Project Page [BibTex]

A Measure-Theoretic Axiomatisation of Causality and Kernel Regression

Park, J.

University of Tübingen, Germany, July 2024 (phdthesis)

ei

[BibTex]

Modelling Dynamic 3D Human-Object Interactions: From Capture to Synthesis

Taheri, O.

University of Tübingen, July 2024 (phdthesis) To be published

Abstract
Modeling digital humans that move and interact realistically with virtual 3D worlds has emerged as an essential research area recently, with significant applications in computer graphics, virtual and augmented reality, telepresence, the Metaverse, and assistive technologies. In particular, human-object interaction, encompassing full-body motion, hand-object grasping, and object manipulation, lies at the core of how humans execute tasks and represents the complex and diverse nature of human behavior. Therefore, accurate modeling of these interactions would enable us to simulate avatars to perform tasks, enhance animation realism, and develop applications that better perceive and respond to human behavior. Despite its importance, this remains a challenging problem, due to several factors such as the complexity of human motion, the variance of interaction based on the task, and the lack of rich datasets capturing the complexity of real-world interactions. Prior methods have made progress, but limitations persist as they often focus on individual aspects of interaction, such as body, hand, or object motion, without considering the holistic interplay among these components. This Ph.D. thesis addresses these challenges and contributes to the advancement of human-object interaction modeling through the development of novel datasets, methods, and algorithms.

ps

[BibTex]

Errors in Long-Term Robotic Surgical Training

Lev, H. K., Sharon, Y., Geftler, A., Nisky, I.

Work-in-progress paper (3 pages) presented at the EuroHaptics Conference, Lille, France, June 2024 (misc)

Abstract
Robotic surgeries offer many advantages but require surgeons to master complex motor tasks over years. Most motor-control studies focus on simple tasks and span days at most. To help bridge this gap, we followed surgical residents learning complex tasks on a surgical robot over six months. Here, we focus on the task of moving a ring along a curved wire as quickly and accurately as possible. We wrote an image-processing algorithm to locate the errors in the task and computed error metrics and task completion time. We found that participants decreased their completion time and number of errors over the six months; however, the percentage of error time in the task remained constant. This long-term study sheds light on the learning process of the surgeons and opens the possibility of further studying their errors with the aim of minimizing them.

hi

DOI [BibTex]

Advancing Normalising Flows to Model Boltzmann Distributions

Stimper, V.

University of Cambridge, Cambridge, UK, June 2024 (Cambridge-Tübingen-Fellowship-Program) (phdthesis)

ei

[BibTex]

GaitGuide: A Wearable Device for Vibrotactile Motion Guidance

Rokhmanova, N., Martus, J., Faulkner, R., Fiene, J., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop on Advancing Wearable Devices and Applications Through Novel Design, Sensing, Actuation, and AI, Yokohama, Japan, May 2024 (misc)

Abstract
Wearable vibrotactile devices can provide salient sensations that attract the user's attention or guide them to change. The future integration of such feedback into medical or consumer devices would benefit from understanding how vibrotactile cues vary in amplitude and perceived strength across the heterogeneity of human skin. Here, we developed an adhesive vibrotactile device (the GaitGuide) that uses two individually mounted linear resonant actuators to deliver directional motion guidance. By measuring the mechanical vibrations of the actuators via small on-board accelerometers, we compared vibration amplitudes and perceived signal strength across 20 subjects at five signal voltages and four sites around the shank. Vibrations were consistently smallest in amplitude—but perceived to be strongest—at the site located over the tibia. We created a fourth-order linear dynamic model to capture differences in tissue properties across subjects and sites via optimized stiffness and damping parameters. The anterior site had significantly higher skin stiffness and damping; these values also correlate with subject-specific body-fat percentages. Surprisingly, our study shows that the perception of vibrotactile stimuli does not solely depend on the vibration magnitude delivered to the skin. These findings also help to explain the clinical practice of evaluating vibrotactile sensitivity over a bony prominence.
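
For readers curious how stiffness and damping values can be identified from accelerometer data, the sketch below fits a two-mass spring-damper chain, one plausible fourth-order structure, to measured actuator acceleration amplitudes. The topology, masses, and all numbers are assumptions for illustration; the paper's exact model and parameter values are not reproduced here.

```python
# Minimal sketch: fit tissue stiffness and damping in an assumed two-mass (fourth-order)
# vibration model to accelerometer amplitude measurements.
import numpy as np
from scipy.optimize import least_squares

M_ACT, M_SKIN = 0.010, 0.005      # assumed moving masses [kg]
K_DEEP, B_DEEP = 2000.0, 5.0      # assumed deeper-tissue anchoring [N/m], [N*s/m]

def actuator_accel_amplitude(freq_hz, force_amp, k_tissue, b_tissue):
    """Steady-state |acceleration| of the actuator mass at each drive frequency."""
    out = []
    for w in 2 * np.pi * np.asarray(freq_hz):
        c = 1j * w * b_tissue + k_tissue      # tissue coupling impedance term
        A = np.array([[-M_ACT * w**2 + c, -c],
                      [-c, -M_SKIN * w**2 + c + 1j * w * B_DEEP + K_DEEP]])
        x_act, _ = np.linalg.solve(A, np.array([force_amp, 0.0], dtype=complex))
        out.append(abs(-w**2 * x_act))
    return np.array(out)

def fit_tissue_params(freq_hz, measured_accel, force_amp=0.5, guess=(1500.0, 2.0)):
    """Estimate (k_tissue, b_tissue) by matching predicted to measured amplitudes."""
    def residual(p):
        return actuator_accel_amplitude(freq_hz, force_amp, *p) - measured_accel
    return least_squares(residual, guess, bounds=(0.0, np.inf)).x

# Example with synthetic "measurements" generated from known parameters.
freqs = np.linspace(100, 300, 9)
meas = actuator_accel_amplitude(freqs, 0.5, k_tissue=3000.0, b_tissue=4.0)
print(np.round(fit_tissue_params(freqs, meas), 2))
```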

hi zwe-rob

link (url) Project Page [BibTex]

Three-Dimensional Surface Reconstruction of a Soft System via Distributed Magnetic Sensing

Sundaram, V. H., Smith, L., Turin, Z., Rentschler, M. E., Welker, C. G.

Workshop paper (3 pages) presented at the ICRA Workshop on Advancing Wearable Devices and Applications Through Novel Design, Sensing, Actuation, and AI, Yokohama, Japan, May 2024 (misc)

Abstract
This study presents a new method for reconstructing continuous 3D surface deformations for a soft pneumatic actuation system using embedded magnetic sensors. A finite element analysis (FEA) model was developed to quantify the surface deformation given the magnetometer readings, with a relative error between the experimental and the simulated sensor data of 7.8%. Using the FEA simulation solutions and a basic model-based mapping, our method achieves sub-millimeter accuracy in measuring deformation from sensor data with an absolute error between the experimental and simulated sensor data of 13.5%. These results show promise for real-time adjustments to deformation, crucial in environments like prosthetic and orthotic interfaces with human limbs.

hi

[BibTex]

CAPT Motor: A Strong Direct-Drive Rotary Haptic Interface

Javot, B., Nguyen, V. H., Ballardini, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE Haptics Symposium, Long Beach, USA, April 2024 (misc)

Abstract
We have designed and built a new motor named CAPT Motor that delivers continuous and precise torque. It is a brushless ironless motor using a Halbach-magnet ring and a planar axial Lorentz-coil array. This motor is unique as we use a two-phase design allowing for higher fill factor and geometrical accuracy of the coils, as they can all be made separately. This motor outperforms existing Halbach ring and cylinder motors with a torque constant per magnet volume of 9.94 (Nm/A)/dm³, a record in the field. The angular position of the rotor is measured by a high-resolution incremental optical encoder and tracked by a multimodal data acquisition device. The system's control firmware uses this angle measurement to calculate the two-phase motor currents needed to produce the torque commanded by the virtual environment at the rotor's position. The strength and precision of the CAPT Motor's torque and the lack of any mechanical transmission enable unusually high haptic rendering quality, indicating the promise of this new motor design.

hi

link (url) Project Page [BibTex]

Quantifying Haptic Quality: External Measurements Match Expert Assessments of Stiffness Rendering Across Devices

Fazlollahi, F., Seifi, H., Ballardini, G., Taghizadeh, Z., Schulz, A., MacLean, K. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE Haptics Symposium, Long Beach, USA, April 2024 (misc)

hi

Project Page [BibTex]

Cutaneous Electrohydraulic (CUTE) Wearable Devices for Multimodal Haptic Feedback

Sanchez-Tamayo, N., Yoder, Z., Ballardini, G., Rothemund, P., Keplinger, C., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the IEEE RoboSoft Workshop on Multimodal Soft Robots for Multifunctional Manipulation, Locomotion, and Human-Machine Interaction, San Diego, USA, April 2024 (misc)

hi rm

[BibTex]

Cutaneous Electrohydraulic Wearable Devices for Expressive and Salient Haptic Feedback

Sanchez-Tamayo, N., Yoder, Z., Ballardini, G., Rothemund, P., Keplinger, C., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE Haptics Symposium, Long Beach, USA, April 2024 (misc)

hi rm

[BibTex]


Language Models Can Reduce Asymmetry in Information Markets

Rahaman, N., Weiss, M., Wüthrich, M., Bengio, Y., Li, E., Pal, C., Schölkopf, B.

arXiv:2403.14443, March 2024, Published as: Redesigning Information Markets in the Era of Language Models, Conference on Language Modeling (COLM) (techreport)

Abstract
This work addresses the buyer's inspection paradox for information markets. The paradox is that buyers need to access information to determine its value, while sellers need to limit access to prevent theft. To study this, we introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants. The central mechanism enabling this marketplace is the agents' dual capabilities: they not only have the capacity to assess the quality of privileged information but also come equipped with the ability to forget. This ability to induce amnesia allows vendors to grant temporary access to proprietary information, significantly reducing the risk of unauthorized retention while enabling agents to accurately gauge the information's relevance to specific queries or tasks. To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information. Concretely, our experiments (a) uncover biases in language models leading to irrational behavior and evaluate techniques to mitigate these biases, (b) investigate how price affects demand in the context of informational goods, and (c) show that inspection and higher budgets both lead to higher quality outcomes.

ei

link (url) [BibTex]

Identifiable Causal Representation Learning

von Kügelgen, J.

University of Cambridge, Cambridge, UK, February 2024 (Cambridge-Tübingen-Fellowship) (phdthesis)

ei

[BibTex]

Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.

hi

Project Page [BibTex]

Elephants develop wrinkles through both form and function

Kaufmann, L., Schulz, A., Reveyaz, N., Ritter, C., Hildebrandt, T., Brecht, M.

Society for Integrative and Comparative Biology, Seattle, USA, January 2024 (misc) Accepted

Abstract
Elephant trunks have prominent folds and wrinkles from birth, but we have little information on how wrinkle patterns differ between elephant species and how elephant trunks and their wrinkles develop. We assessed wrinkle patterns in Asian (Elephas maximus) and African savanna (Loxodonta africana) elephants. We find that adult Asian elephants have more dorsal trunk wrinkles (~126) than African elephants (~83). In both species, we find more dorsal than ventral trunk wrinkles and a closer spacing of wrinkles in the distal than in the proximal trunk. We also observed slight (10%) differences in wrinkle numbers as a function of trunk lateralization, suggesting that the wrinkle pattern is use-dependent. MicroCT imaging revealed that the outer elephant trunk skin has a relatively constant thickness, whereas the inner skin parts are thicker between folds than in folds. In early fetuses of both elephant species, the trunk shows the greatest length growth of all body parts, and the ventral trunk tip develops before the dorsal trunk finger. In development, trunk wrinkles are added in two distinct phases: an early exponential phase and a later, slower phase. We suggest that wrinkles improve the ability of trunk skin to bend, and that differential flexibility requirements might explain dorsoventral, proximal-distal, left-right side, and species differences in wrinkle distribution.

hi

[BibTex]

Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing in the Elephant Trunk

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Skin is a complex biological composite consisting of layers with distinct mechanical properties, morphologies, and mechanosensory capabilities. This work seeks to expand the comparative biomechanics field to comparative haptics, analyzing elephant trunk touch by redesigning a previously published human finger-pad model with morphological parameters measured from an elephant trunk. The dorsal surface of the elephant trunk has a thick, wrinkled epidermis covered with whiskers at the distal tip and deep folds at the proximal base. We hypothesize that this thick dorsal skin protects the trunk from mechanical damage but significantly dulls its tactile sensing ability. To facilitate safe and dexterous motion, the distributed dorsal whiskers might serve as pre-touch antennae, transmitting an amplified version of impending contact to the mechanoreceptors beneath the elephant's armor. We tested these hypotheses by simulating soft tissue deformation through high-fidelity finite element analyses involving representative skin layers and whiskers, modeled based on frozen African elephant trunk (Loxodonta africana) morphology. For a typical contact force, quintupling the stratum corneum thickness to match dorsal trunk skin reduces the von Mises stress communicated to the dermis by 18%. However, adding a whisker offsets this dulled sensing, as hypothesized, amplifying the stress by more than 15 at the same location. We hope this work will motivate further investigations of mammalian touch using approaches and models from the ample literature on human touch.

hi

[BibTex]

Simplifying the Wrinkled Complexity of Elephant Trunks using Knitted Biomimicry

Singal, K., Schulz, A., Dimitriyev, M., Matsumoto, E.

Society for Integrative and Comparative Biology, Seattle, USA, January 2024 (misc) Accepted

Abstract
The elephant trunk skin’s unique morphology and composition result in prominent wrinkles and folds along the trunk; these features provide the trunk versatility and flexibility and contribute to the functionality of the trunk in different situations. These wrinkled and folded structures change throughout the trunk, giving different functional trade-offs: the tip has skin for gripping, and the proximal base has skin for protection. We attempt to capture these unique properties by manipulating the programmable nature of knitted fabrics and knitting bio-inspired mimics of the wrinkles and folds. Using MicroCT scans of African elephant trunk tissue, we examine the morphology and composition of different sites along the elephant trunk. Knitted fabrics provide a design space in which one can program not only visual characteristics but also directed elasticity. We use mechanical testing experiments to compare the likeness of the knitted mimics with actual elephant trunk skin samples – collected humanely – to test the viability of the mimics and better understand trunk properties. This work paves the way toward understanding wrinkled phenomena in nature and re-creating the complexities of skin using novel material methods.

hi

[BibTex]

Defining Mammalian Climbing Gaits and their influence criteria including morphology and mechanics

Shriver, C., Schulz, A., Scott, D., Elgart, J., Mendelson, J., Hu, D., Chang, Y.

Society for Integrative and Comparative Biology, Seattle, USA, January 2024 (misc) Accepted

Abstract
In comparison to terrestrial locomotion, climbing presents a couple unique challenges. First, organisms must move upwards, meaning they lack a “restoring force” to bring them back into contact with the climbing surface as they continuously overcome gravitational forces. Second, organisms must possess morphology capable of gripping the climbing surface and perform appropriate contact patterns to prevent falling. While recent studies have examined climbing via van der Waals forces and capillary adhesion, these are often limited to non-mammalian species less than 500 grams. Even amongst the studies for mammals, many are focused on primates, which take advantage of highly specialized opposable thumbs, elongated digits, and/or prehensile tails. Despite the phylogenetic diversity of mammalian climbers, basic concepts like climbing gaits are still limited to insects, primates, and robots. In this work, we attempt to translate the foundational descriptions of terrestrial gaits, e.g. horses trotting, to mammalian climbing gaits. We performed kinematics analyses to identify common mammalian climbing gaits and discerned some underlying criteria influencing these gaits. Due to the aforementioned biomechanical constraints specific to climbing, we predict non-primate, mammalian climbing gaits will all fall within and occupy a smaller subspace of the known terrestrial gaits described by Hildebrand.

hi

[BibTex]

Collagen entanglement in elephant skin gives way to strain-stiffening mechanisms

Sordilla, S., Schulz, A., Hu, D., Higgins, C.

Society for Integrative and Comparative Biology, Seattle, USA, January 2024 (misc) Accepted

Abstract
Form-function relationships often have tradeoffs: if a material is tough, it is often inflexible, and vice versa. This is particularly relevant for the elephant trunk, where the skin should be protective yet elastic. To investigate how this is achieved, we used classical histochemical staining and second harmonic generation microscopy to describe the morphology and composition of elephant trunk skin. We report structure at the macro and micro scales, from the thickness of the dermis to the interaction of 10 µm thick collagen fibers. We analyzed several sites along the length of the trunk, to compare and contrast the dorsal-ventral and proximal-distal skin morphologies and compositions. We find the dorsal skin of the elephant trunk can have keratin armor layers over 2mm thick, which is nearly 100 times the thickness of the equivalent layer in human skin. We also found that the structural support layer (the dermis) of elephant trunk contains a distribution of collagen-I (COL1) fibers in both perpendicular and parallel arrangement. The bimodal distribution of collagen is seen across all portions of the trunk, and is dissimilar from that of human skin where one orientation dominates within a body site. We hypothesize that this distribution of COL1 in the elephant trunk allows both flexibility and load-bearing capabilities. Additionally, when viewing individual fiber interaction of 10 µm thick collagen, we find the fiber crossings per unit volume are five times more common than in human skin, suggesting that the fibers are entangled. We surmise that these intriguing structures permit both flexibility and strength in the elephant trunk. The complex nature of the elephant skin may inspire the design of materials that can combine strength and flexibility.

hi

[BibTex]

MPI-10: Haptic-Auditory Measurements from Tool-Surface Interactions

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise" in IEEE Transactions on Haptics, January 2024 (misc)

hi

DOI Project Page [BibTex]

Whiskers That Don’t Whisk: Unique Structure From the Absence of Actuation in Elephant Whiskers

Schulz, A., Kaufmann, L., Brecht, M., Richter, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Whiskers are so named because these hairs often actuate circularly, whisking, via collagen wrapping at the root of the hair follicle to increase their sensing volumes. Elephant trunks are a unique case study for whiskers, as the dorsal and lateral sections of the elephant proboscis have scattered sensory hairs that lack individual actuation. We hypothesize that the actuation limitations of these non-whisking whiskers led to anisotropic morphology and non-homogeneous composition to meet the animal's sensory needs. To test these hypotheses, we examined trunk whiskers from a 35-year-old female African savannah elephant (Loxodonta africana). Whisker morphology was evaluated through micro-CT and polarized light microscopy. The whiskers from the distal tip of the trunk were found to be axially asymmetric, with an ovular cross-section at the root, shifting to a near-square cross-section at the point. Nanoindentation and additional microscopy revealed that elephant whiskers have a composition unlike any other mammalian hair ever studied: we recorded an elastic modulus of 3 GPa at the root and 0.05 GPa at the point of a single 4-cm-long whisker. This work challenges the assumption that hairs have circular cross-sections and isotropic mechanical properties. With such striking differences compared to other mammals, including the mouse (Mus musculus), rat (Rattus norvegicus), and cat (Felis catus), we conclude that whisker morphology and composition play distinct and complementary roles in elephant trunk mechanosensing.

zwe-csfm hi

[BibTex]

Use the 4S (Signal-Safe Speckle Subtraction): Explainable Machine Learning reveals the Giant Exoplanet AF Lep b in High-Contrast Imaging Data from 2011

Bonse, M. J., Gebhard, T. D., Dannert, F. A., Absil, O., Cantalloube, F., Christiaens, V., Cugno, G., Garvin, E. O., Hayoz, J., Kasper, M., Matthews, E., Schölkopf, B., Quanz, S. P.

2024 (misc) Submitted

ei

arXiv [BibTex]

Learning a Terrain- and Robot-Aware Dynamics Model for Autonomous Mobile Robot Navigation

Achterhold, J., Guttikonda, S., Kreber, J. U., Li, H., Stueckler, J.

CoRR abs/2409.11452, 2024, Preprint submitted to Robotics and Autonomous Systems Journal. https://arxiv.org/abs/2409.11452 (techreport) Submitted

Abstract
Mobile robots should be capable of planning cost-efficient paths for autonomous navigation. Typically, the terrain and robot properties are subject to variations. For instance, properties of the terrain such as friction may vary across different locations. Robot properties may also change, such as payload or wear and tear, causing, for example, changing actuator gains or joint friction. Autonomous navigation approaches should thus be able to adapt to such variations. In this article, we propose a novel approach for learning a probabilistic, terrain- and robot-aware forward dynamics model (TRADYN), which can adapt to such variations, and demonstrate its use for navigation. Our learning approach extends recent advances in meta-learning forward dynamics models based on Neural Processes for mobile robot navigation. We evaluate our method in simulation for 2D navigation of a robot with unicycle dynamics with varying properties on terrain with spatially varying friction coefficients. In our experiments, we demonstrate that TRADYN has lower prediction error over long time horizons than model ablations that do not adapt to robot or terrain variations. We also evaluate our model for navigation planning in a model-predictive control framework and under various sources of noise. We demonstrate that our approach yields improved performance in planning control-efficient paths by taking robot and terrain properties into account.
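
The core meta-learning idea, conditioning a learned forward dynamics model on a context vector aggregated from a few calibration interactions, can be sketched as follows. The architecture, dimensions, and aggregation below are illustrative assumptions in the spirit of Neural-Process-style models, not the authors' implementation or training procedure.

```python
# Minimal sketch of a context-conditioned forward dynamics model: calibration
# transitions are encoded and mean-pooled into a context vector that conditions
# next-state prediction. All sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

class ContextDynamics(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, ctx_dim=16, hidden=64):
        super().__init__()
        # Encodes one observed transition (s, a, s') into a latent feature.
        self.encoder = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, ctx_dim))
        # Predicts the state change given (s, a) and the aggregated context.
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def context(self, cal_s, cal_a, cal_s_next):
        feats = self.encoder(torch.cat([cal_s, cal_a, cal_s_next], dim=-1))
        return feats.mean(dim=0)  # permutation-invariant aggregation over calibration data

    def forward(self, s, a, ctx):
        ctx = ctx.expand(s.shape[0], -1)
        return s + self.dynamics(torch.cat([s, a, ctx], dim=-1))  # predicted next state

# Example: adapt to a new terrain/robot from 10 calibration transitions, then predict.
model = ContextDynamics()
cal_s, cal_a, cal_s1 = torch.randn(10, 4), torch.randn(10, 2), torch.randn(10, 4)
ctx = model.context(cal_s, cal_a, cal_s1)
print(model(torch.randn(5, 4), torch.randn(5, 2), ctx).shape)  # torch.Size([5, 4])
```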

ev

preprint [BibTex]

Natural and Robust Walking using Reinforcement Learning without Demonstrations in High-Dimensional Musculoskeletal Models
2024 (misc)

Abstract
Humans excel at robust bipedal walking in complex natural environments. In each step, they adequately tune the interaction of biomechanical muscle dynamics and neuronal signals to be robust against uncertainties in ground conditions. However, it is still not fully understood how the nervous system resolves the musculoskeletal redundancy to solve the multi-objective control problem considering stability, robustness, and energy efficiency. In computer simulations, energy minimization has been shown to be a successful optimization target, reproducing natural walking with trajectory optimization or reflex-based control methods. However, these methods focus on particular motions at a time, and the resulting controllers are limited when compensating for perturbations. In robotics, reinforcement learning (RL) methods recently achieved highly stable (and efficient) locomotion on quadruped systems, but the generation of human-like walking with bipedal biomechanical models has required extensive use of expert data sets. This strong reliance on demonstrations often results in brittle policies and limits the application to new behaviors, especially considering the potential variety of movements for high-dimensional musculoskeletal models in 3D. Achieving natural locomotion with RL without sacrificing its incredible robustness might pave the way for a novel approach to studying human walking in complex natural environments. Videos: https://sites.google.com/view/naturalwalkingrl

al

link (url) [BibTex]


A Pontryagin Perspective on Reinforcement Learning

Eberhard, O., Vernade, C., Muehlebach, M.

Max Planck Institute for Intelligent Systems, 2024 (techreport)

lds

link (url) [BibTex]

Self- and Interpersonal Contact in 3D Human Mesh Reconstruction

Müller, L.

University of Tübingen, Tübingen, 2024 (phdthesis)

Abstract
The ability to perceive tactile stimuli is of substantial importance for human beings in establishing a connection with the surrounding world. Humans rely on the sense of touch to navigate their environment and to engage in interactions with both themselves and other people. The field of computer vision has made great progress in estimating a person’s body pose and shape from an image; however, the investigation of self- and interpersonal contact has received little attention despite its considerable significance. Estimating contact from images is a challenging endeavor because it necessitates methodologies capable of predicting the full 3D human body surface, i.e. an individual’s pose and shape. The limitations of current methods become evident when considering the two primary datasets and labels employed within the community to supervise the task of human pose and shape estimation. First, the widely used 2D joint locations lack crucial information for representing the entire 3D body surface. Second, in datasets of 3D human bodies, e.g. collected from motion capture systems or body scanners, contact is usually avoided, since it naturally leads to occlusion which complicates data cleaning and can break the data processing pipelines. In this thesis, we first address the problem of estimating contact that humans make with themselves from RGB images. To do this, we introduce two novel methods that we use to create new datasets tailored for the task of human mesh estimation for poses with self-contact. We create (1) 3DCP, a dataset of 3D body scan and motion capture data of humans in poses with self-contact and (2) MTP, a dataset of images taken in the wild with accurate 3D reference data using pose mimicking. Next, we observe that 2D joint locations can be readily labeled at scale given an image; however, an equivalent label for self-contact does not exist. Consequently, we introduce (3) discrete self-contact (DSC) annotations indicating the pairwise contact of discrete regions on the human body. We annotate three existing image datasets with discrete self-contact and use these labels during mesh optimization to bring body parts supposed to touch into contact. Then we train TUCH, a human mesh regressor, on our new datasets. When evaluated on the task of human body pose and shape estimation on public benchmarks, our results show that knowing about self-contact not only improves mesh estimates for poses with self-contact, but also for poses without self-contact. Next, we study contact humans make with other individuals during close social interaction. Reconstructing these interactions in 3D is a significant challenge due to the mutual occlusion. Furthermore, the existing datasets of images taken in the wild with ground-truth contact labels are of insufficient size to facilitate the training of a robust human mesh regressor. In this work, we employ a generative model, BUDDI, to learn the joint distribution of 3D pose and shape of two individuals during their interaction and use this model as a prior during an optimization routine. To construct training data we leverage pre-existing datasets, i.e. motion capture data and Flickr images with discrete contact annotations. Similar to discrete self-contact labels, we utilize discrete human-human contact to jointly fit two meshes to detected 2D joint locations. The majority of methods for generating 3D humans focus on the motion of a single person and operate on 3D joint locations.
While these methods can effectively generate motion, their representation of 3D humans is not sufficient for physical contact since they do not model the body surface. Our approach, in contrast, acts on the pose and shape parameters of a human body model, which enables us to sample 3D meshes of two people. We further demonstrate how the knowledge of human proxemics, incorporated in our model, can be used to guide an optimization routine. For this, in each optimization iteration, BUDDI takes the current mesh and proposes a refinement that we subsequently consider in the objective function. This procedure enables us to go beyond the state of the art by forgoing ground-truth discrete human-human contact labels during optimization. Self- and interpersonal contact happen on the surface of the human body; however, the majority of existing art tends to predict bodies with similar, “average” body shape. This is due to a lack of training data of paired images taken in the wild and ground-truth 3D body shape and because 2D joint locations are not sufficient to explain body shape. The most apparent solution would be to collect body scans of people together with their photos. This is, however, a time-consuming and cost-intensive process that lacks scalability. Instead, we leverage the vocabulary humans use to describe body shape. First, we ask annotators to label how much a word like “tall” or “long legs” applies to a human body. We gather these ratings for rendered meshes of various body shapes, for which we have ground-truth body model shape parameters, and for images collected from model agency websites. Using this data, we learn a shape-to-attribute (A2S) model that predicts body shape ratings from body shape parameters. Then we train a human mesh regressor, SHAPY, on the model agency images wherein we supervise body shape via attribute annotations using A2S. Since no suitable test set of diverse 3D ground-truth body shape with images taken in natural settings exists, we introduce Human Bodies in the Wild (HBW). This novel dataset contains photographs of individuals together with their body scan. Our model predicts more realistic body shapes from an image and quantitatively improves body shape estimation on this new benchmark. In summary, we present novel datasets, optimization methods, a generative model, and regressors to advance the field of 3D human pose and shape estimation. Taken together, these methods open up ways to obtain more accurate and realistic 3D mesh estimates from images with multiple people in self- and mutual contact poses and with diverse body shapes. This line of research also enables generative approaches to create more natural, human-like avatars. We believe that knowing about self- and human-human contact through computer vision has wide-ranging implications in other fields such as robotics, fitness, or behavioral science.

ps

[BibTex]

Distributed Event-Based Learning via ADMM

Er, D., Trimpe, S., Muehlebach, M.

Max Planck Institute for Intelligent Systems, 2024 (techreport)

lds

link (url) [BibTex]

Natural Language Control for 3D Human Motion Synthesis

Petrovich, M.

LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, 2024 (phdthesis)

Abstract
3D human motions are at the core of many applications in the film industry, healthcare, augmented reality, virtual reality and video games. However, these applications often rely on expensive and time-consuming motion capture data. The goal of this thesis is to explore generative models as an alternative route to obtain 3D human motions. More specifically, our aim is to allow a natural language interface as a means to control the generation process. To this end, we develop a series of models that synthesize realistic and diverse motions following the semantic inputs. In our first contribution, described in Chapter 3, we address the challenge of generating human motion sequences conditioned on specific action categories. We introduce ACTOR, a conditional variational autoencoder (VAE) that learns an action-aware latent representation for human motions. We show significant gains over existing methods thanks to our new Transformer-based VAE formulation, encoding and decoding SMPL pose sequences through a single motion-level embedding. In our second contribution, described in Chapter 4, we go beyond categorical actions, and dive into the task of synthesizing diverse 3D human motions from textual descriptions allowing a larger vocabulary and potentially more fine-grained control. Our work stands out from previous research by not deterministically generating a single motion sequence, but by synthesizing multiple, varied sequences from a given text. We propose TEMOS, building on our VAE-based ACTOR architecture, but this time integrating a pretrained text encoder to handle large-vocabulary natural language inputs. In our third contribution, described in Chapter 5, we address the adjacent task of text-to-3D human motion retrieval, where the goal is to search in a motion collection by querying via text. We introduce a simple yet effective approach, named TMR, building on our earlier model TEMOS, by integrating a contrastive loss to enhance the structure of the cross-modal latent space. Our findings emphasize the importance of retaining the motion generation loss in conjunction with contrastive training for improved results. We establish a new evaluation benchmark and conduct analyses on several protocols. In our fourth contribution, described in Chapter 6, we introduce a new problem termed as “multi-track timeline control” for text-driven 3D human motion synthesis. Instead of a single textual prompt, users can organize multiple prompts in temporal intervals that may overlap. We introduce STMC, a test-time denoising method that can be integrated with any pre-trained motion diffusion model. Our evaluations demonstrate that our method generates motions that closely match the semantic and temporal aspects of the input timelines. In summary, our contributions in this thesis are as follows: (i) we develop a generative variational autoencoder, ACTOR, for action-conditioned generation of human motion sequences, (ii) we introduce TEMOS, a text-conditioned generative model that synthesizes diverse human motions from textual descriptions, (iii) we present TMR, a new approach for text-to-3D human motion retrieval, (iv) we propose STMC, a method for timeline control in text-driven motion synthesis, enabling the generation of detailed and complex motions.

ps

Thesis [BibTex]

Incremental Few-Shot Adaptation for Non-Prehensile Object Manipulation using Parallelizable Physics Simulators

Baumeister, F., Mack, L., Stueckler, J.

CoRR abs/2409.13228, 2024, Submitted to IEEE International Conference on Robotics and Automation (ICRA) 2025 (techreport) Submitted

Abstract
Few-shot adaptation is an important capability for intelligent robots that perform tasks in open-world settings such as everyday environments or flexible production. In this paper, we propose a novel approach for non-prehensile manipulation which iteratively adapts a physics-based dynamics model for model-predictive control. We adapt the parameters of the model incrementally with a few examples of robot-object interactions. This is achieved by sampling-based optimization of the parameters using a parallelizable rigid-body physics simulation as dynamic world model. In turn, the optimized dynamics model can be used for model-predictive control using efficient sampling-based optimization. We evaluate our few-shot adaptation approach in several object pushing experiments in simulation and with a real robot.
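
A minimal sketch of the underlying idea, sampling-based identification of simulator parameters from a few observed interactions, is shown below. The toy "simulator", parameter set, ranges, and cross-entropy-style update are stand-ins for illustration; the paper uses a parallelizable rigid-body physics engine and its own optimization settings.

```python
# Minimal sketch: search for simulator parameters that reproduce an observed push.
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, actions):
    """Placeholder dynamics: object displacement over time given [friction, mass]."""
    friction, mass = params
    t = np.arange(len(actions))
    return np.cumsum(actions) / mass - 0.05 * friction * t

def identify_params(actions, observed, iters=20, pop=64, elite=8):
    """Cross-entropy-style search over parameters to match the observation."""
    mean, std = np.array([0.5, 1.0]), np.array([0.3, 0.5])
    for _ in range(iters):
        samples = np.clip(rng.normal(mean, std, size=(pop, 2)), 1e-3, None)
        errors = [np.mean((simulate(p, actions) - observed) ** 2) for p in samples]
        best = samples[np.argsort(errors)[:elite]]   # keep the elite parameter sets
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-4
    return mean

# Example: identify parameters that explain one observed interaction (synthetic data).
acts = rng.uniform(0.0, 1.0, size=30)
obs = simulate(np.array([0.3, 2.0]), acts)
print(np.round(identify_params(acts, obs), 2))
```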

ev

preprint supplemental video link (url) [BibTex]

Discrete Fourier Transform Three-to-One (DFT321): Code

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

MATLAB code of the Discrete Fourier Transform Three-to-One (DFT321) algorithm, 2024 (misc)
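
For orientation, the DFT321 idea can be sketched in a few lines: the three axis spectra are collapsed into one spectrum whose per-bin magnitude preserves the combined energy of the axes and whose phase is taken from the sum of the axis spectra. The Python sketch below is a re-implementation for illustration, not the released MATLAB code; consult the publication and code release for the authoritative definition.

```python
# Python sketch of the DFT321 reduction from three-axis acceleration to one signal.
import numpy as np

def dft321(accel_xyz):
    """accel_xyz: (N, 3) acceleration samples; returns a length-N 1-D signal."""
    spectra = np.fft.rfft(accel_xyz, axis=0)                    # per-axis spectra
    magnitude = np.sqrt(np.sum(np.abs(spectra) ** 2, axis=1))   # combined energy per bin
    phase = np.angle(np.sum(spectra, axis=1))                   # phase of the summed spectra
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=accel_xyz.shape[0])

# Example: a 3-axis vibration burst sampled at 1 kHz collapses to one waveform.
t = np.arange(0, 0.1, 0.001)
a = np.stack([np.sin(2 * np.pi * 80 * t),
              0.5 * np.sin(2 * np.pi * 150 * t),
              0.2 * np.random.default_rng(0).normal(size=t.size)], axis=1)
print(dft321(a).shape)  # (100,)
```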

hi

Code Project Page [BibTex]


2023


Fairness and Machine Learning: Limitations and Opportunities

Barocas, S., Hardt, M., Narayanan, A.

MIT Press, December 2023 (book)

Abstract
An introduction to the intellectual foundations and practical utility of the recent work on fairness and machine learning. Fairness and Machine Learning introduces advanced undergraduate and graduate students to the intellectual foundations of this recently emergent field, drawing on a diverse range of disciplinary perspectives to identify the opportunities and hazards of automated decision-making. It surveys the risks in many applications of machine learning and provides a review of an emerging set of proposed solutions, showing how even well-intentioned applications may give rise to objectionable results. It covers the statistical and causal measures used to evaluate the fairness of machine learning models as well as the procedural and substantive aspects of decision-making that are core to debates about fairness, including a review of legal and philosophical perspectives on discrimination. This incisive textbook prepares students of machine learning to do quantitative work on fairness while reflecting critically on its foundations and its practical utility.
• Introduces the technical and normative foundations of fairness in automated decision-making
• Covers the formal and computational methods for characterizing and addressing problems
• Provides a critical assessment of their intellectual foundations and practical utility
• Features rich pedagogy and extensive instructor resources

sf

link (url) [BibTex]

Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

hi

Project Page [BibTex]

Seeking Causal, Invariant, Structures with Kernel Mean Embeddings in Haptic-Auditory Data from Tool-Surface Interaction

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the IROS Workshop on Causality for Robotics: Answering the Question of Why, Detroit, USA, October 2023 (misc)

Abstract
Causal inference could give future learning robots strong generalization and scalability capabilities, which are crucial for safety, fault diagnosis and error prevention. One application area of interest consists of the haptic recognition of surfaces. We seek to understand cause and effect during physical surface interaction by examining surface and tool identity, their interplay, and other contact-irrelevant factors. To work toward elucidating the mechanism of surface encoding, we attempt to recognize surfaces from haptic-auditory data captured by previously unseen hemispherical steel tools that differ from the recording tool in diameter and mass. In this context, we leverage ideas from kernel methods to quantify surface similarity through descriptive differences in signal distributions. We find that the effect of the tool is significantly present in higher-order statistical moments of contact data: aligning the means of the distributions being compared somewhat improves recognition but does not fully separate tool identity from surface identity. Our findings shed light on salient aspects of haptic-auditory data from tool-surface interaction and highlight the challenges involved in generalizing artificial surface discrimination capabilities.
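
The kernel-based similarity measure referenced above is the maximum mean discrepancy (MMD). Below is a minimal sketch of an unbiased MMD^2 estimate with a Gaussian kernel, including the mean-alignment variant mentioned in the abstract; the feature extraction, bandwidth, and data shapes are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch: unbiased MMD^2 with a Gaussian kernel, with optional mean alignment.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0, align_means=False):
    """Unbiased MMD^2 between feature sets X (n, d) and Y (m, d)."""
    if align_means:  # remove first-moment differences, keeping higher-order structure
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
    Kxx = gaussian_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, sigma)
    n, m = len(X), len(Y)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

# Example: contact-signal features recorded with two different tools on one surface.
rng = np.random.default_rng(0)
tool_a = rng.normal(0.0, 1.0, size=(200, 8))
tool_b = rng.normal(0.3, 1.2, size=(200, 8))     # shifted mean and spread
print(mmd2_unbiased(tool_a, tool_b), mmd2_unbiased(tool_a, tool_b, align_means=True))
```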

hi

Manuscript Project Page [BibTex]

NearContact: Accurate Human Detection using Tomographic Proximity and Contact Sensing with Cross-Modal Attention

Garrofé, G., Schoeffmann, C., Zangl, H., Kuchenbecker, K. J., Lee, H.

Extended abstract (4 pages) presented at the International Workshop on Human-Friendly Robotics (HFR), Munich, Germany, September 2023 (misc)

hi

Project Page [BibTex]

The Role of Kinematics Estimation Accuracy in Learning with Wearable Haptics

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

Abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Knoxville, USA, August 2023 (misc)

hi

Project Page [BibTex]

Strap Tightness and Tissue Composition Both Affect the Vibration Created by a Wearable Device

Rokhmanova, N., Faulkner, R., Martus, J., Fiene, J., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
Wearable haptic devices can provide salient real-time feedback (typically vibration) for rehabilitation, sports training, and skill acquisition. Although the body provides many sites for such cues, the influence of the mounting location on vibrotactile mechanics is commonly ignored. This study builds on previous research by quantifying how changes in strap tightness and local tissue composition affect the physical acceleration generated by a typical vibrotactile device.

hi

Project Page [BibTex]

Toward a Device for Reliable Evaluation of Vibrotactile Perception

Ballardini, G., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

hi

[BibTex]

Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test: Code

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

Code published as a companion to the journal article "Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test" in IEEE Transactions on Automation Science and Engineering, July 2023 (misc)

hi

DOI Project Page [BibTex]
