Publications

Haptic Intelligence Ph.D. Thesis Haptify: A Measurement-Based System for Quantifying the Quality of Haptic Interfaces Fazlollahi, F. University of Tübingen, Tübingen, Germany, March 2026, Department of Computer Science (Published)
Grounded force-feedback (GFF) devices, exoskeletons, and other haptic robots modulate human movement through carefully engineered mechanical, electrical, and computational designs. Given their significant societal potential and often high cost, it is essential to fairly and efficiently assess the quality of these intimate cyber-physical interfaces. However, existing device specifications and low-level performance metrics often fail to capture the nuanced qualities that expert users perceive during hands-on experimentation. To address this gap, this thesis introduces Haptify, a comprehensive benchmarking system that can thoroughly, fairly, and noninvasively evaluate GFF haptic devices. Haptify integrates multiple sensing modalities (a seven-camera optical motion-capture system, a custom-built 60-cm-square force plate, and an instrumented end-effector that can be adapted to different devices) to record the interaction between the human hand, the device, and the ground during both passive and active experiments. With this setup, users hold the device end-effector and move it through a series of carefully designed tasks while Haptify measures kinematic and kinetic responses. From this process, we establish six key ways to assess GFF device performance: workspace shape, global free-space forces, global free-space vibrations, local dynamic forces and torques, frictionless surface rendering, and stiffness rendering. These benchmarks enable systematic evaluation and comparison across devices. We first apply Haptify to benchmark two GFF devices produced by 3D Systems: the widely used Touch and the more expensive Touch X. Results reveal that the Touch X offers a slightly smaller workspace than the Touch, but it produces smaller and more predictable free-space forces, reduced vibrations, more consistent dynamic forces and torques, and higher-quality rendering of both frictionless surfaces and stiff virtual objects.
To further validate and extend our approach, we conducted a user study with sixteen expert hapticians who used Haptify to evaluate four commercial GFF devices: Novint Falcon, Force Dimension Omega.3, Touch, and Touch X. Experts tested the devices in unpowered mode and across five representative virtual benchmark environments, providing extensive quantitative ratings and qualitative feedback. We distilled recurring themes from their input and analyzed correlations between expert opinions and sensor-based measurements. Our findings show that expert judgments of fundamental haptic quality indicators align closely with the metrics derived from Haptify. Moreover, a device's performance both unpowered and in active benchmarks can be used to predict its suitability for more complex applications, such as teleoperated surgery. By linking expert assessments with external measurement data, this thesis establishes a combined qualitative-quantitative framework for benchmarking haptic robots. This approach not only enables fair comparison across diverse devices but also draws a direct connection between objective measurements and the subjective expertise of experienced hapticians. In doing so, it lays the foundation for more rigorous, transparent, and application-relevant evaluation of haptic technologies.
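The six benchmark categories named in the abstract lend themselves to simple, reproducible summary statistics. As a rough illustration only (this is not Haptify's actual analysis code; the function name and the choice of mean and maximum as metrics are assumptions), free-space force samples recorded while a device renders empty space could be summarized like this:

```python
import math

# Hypothetical sketch: summarizing global free-space forces, one of the
# six Haptify benchmark categories. Lower magnitudes indicate that the
# device interferes less with free motion.

def free_space_force_stats(forces_n):
    """forces_n: list of (fx, fy, fz) samples in newtons recorded while
    the device renders empty space. Returns mean and peak magnitude."""
    mags = [math.sqrt(fx * fx + fy * fy + fz * fz) for fx, fy, fz in forces_n]
    return {"mean_n": sum(mags) / len(mags), "max_n": max(mags)}

stats = free_space_force_stats([(0.1, 0.0, 0.0), (0.0, 0.3, 0.4)])
```

A real benchmark would aggregate such statistics over the whole workspace and across repeated trials before comparing devices.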

Haptic Intelligence Dynamic Locomotion Ph.D. Thesis The Human Leg Catapult: Biological Mechanisms for Walking Gait Replicated in the EcoWalker Robot Kiss, B. University of Stuttgart, Stuttgart, Germany, March 2026, Faculty of Civil and Environmental Engineering (Published)
Humanoid robots and assistive devices have yet to match the efficiency and adaptability of able-bodied human walking in challenging environments. To bridge this performance gap, my projects explored the underlying mechanisms of human locomotion, focusing on the ankle push-off. Ankle push-off has a prominent role in walking due to its high-power output at the end of the stance phase, and due to the impact of its timing on the adaptability to diverse environments. The human leg catapult analogy provides a framework for the projects to understand and replicate the complex biological mechanisms that govern human walking gait. As a platform for the replication, the human-like bipedal EcoWalker robot was developed from version 1 to 3 in three consecutive projects, with iterative design and control updates tailored to each project's goals. Our findings provide insights into the separate roles of mono- and biarticular muscle-tendon units in the human leg catapult, while we also show functional details of the human leg catapult release mechanism through five distinct release processes on the EcoWalker robot. Utilizing the robot in the projects ensures that our findings are relevant to practical applications, allowing humanoid robot and assistive device developers to build on our insights, potentially reducing the performance gap in efficiency and adaptability between able-bodied human walking and artificial walking.

Haptic Intelligence Ph.D. Thesis Modeling, Fabricating, and Evaluating Synergistic Soft-Rigid Actuators Gertler, I. University of Stuttgart, Stuttgart, Germany, February 2026, Faculty of Engineering Design, Production Engineering and Automotive Engineering (Published)
Soft actuators offer lightweight, compliant, and safe alternatives to traditional mechanisms, but they often incur complicated actuation schemes, bulky support systems, and limited functionality when made solely from soft materials. Soft-rigid designs that integrate rigid elements into primarily soft bodies are common, yet the potential of those rigid parts to shape actuation behavior without compromising the overall softness remains underexplored, and fabrication practices often lack reproducibility. This thesis presents two case studies of synergistic hybrid actuation systems that utilize the complementary roles of soft and rigid components to dictate temporal and spectral behavior in response to simple input commands. Between the soft and hard components, one is typically active, while the other is passive. The first case study implements a soft-active/rigid-passive approach for the medical robotics application of endoluminal locomotion. A thin hyperelastic balloon encased in an inextensible sleeve is coupled with a thicker, non-encased balloon on a single fluid supply to serve as front and rear anchors, respectively. Geometry and material selection reshape the pressure-stretch response so the rear anchor inflates and deflates before the front anchor, enabling asymmetric sequencing useful for peristaltic locomotion inside a lumen. Numerical simulation and experiments validate the characteristic curves of dip-molded balloons and alternating anchoring in rigid tubes. The approach can be extended to generate actuation patterns for sequential haptic feedback and other robotic applications. The second case study applies a soft-passive/rigid-active strategy in the domain of fingertip haptic actuation. A dip-molded silicone sheath with embedded miniature magnets, excited by a single air-core coil, produces localized, rich vibrotactile feedback.
Simulations, mechanical measurements, and user experiments with a single-magnet design show consistent frequency-dependent behavior and strong perceptual salience. In follow-on work, various dual-magnet arrangements were also simulated, fabricated, and thoroughly evaluated. Classification tests indicate that frequency content is more important for perception than magnet orientation, while a realism-rating experiment supports the feasibility of simple audio-driven commands for realistic haptic feedback. The device is demonstrated on the fingertip in virtual reality and could be adapted for other body locations for navigation, rehabilitation, or related applications. Together, these studies provide design rules, a simulation-fabrication-validation workflow, and reproducible fabrication practices for soft-rigid hybrid actuators that realize desired mechanical outputs from minimal actuation commands. The methods and findings generalize to other soft actuators and have potential applications in domains such as medical devices, wearable technologies, and soft sensing.

Haptic Intelligence Perceiving Systems Ph.D. Thesis An Interdisciplinary Approach to Human Pose Estimation: Application to Sign Language Forte, M. University of Tübingen, Tübingen, Germany, November 2025, Department of Computer Science (Published)
Accessibility legislation mandates equal access to information for Deaf communities. While videos of human interpreters provide optimal accessibility, they are costly and impractical for frequently updated content. AI-driven signing avatars offer a promising alternative, but their development is limited by the lack of high-quality 3D motion-capture data at scale. Vision-based motion-capture methods are scalable but struggle with the rapid hand movements, self-occlusion, and self-touch that characterize sign language. To address these limitations, this dissertation develops two complementary solutions. SGNify improves hand pose estimation by incorporating universal linguistic rules that apply to all sign languages as computational priors. Proficient signers recognize the reconstructed signs as accurately as those in the original videos, but depth ambiguities along the camera axis can still produce incorrect reconstructions for signs involving self-touch. To overcome this remaining limitation, BioTUCH integrates electrical bioimpedance sensing between the wrists of the person being captured. Systematic measurements show that skin-to-skin contact produces distinctive bioimpedance reductions at high frequencies (240 kHz to 4.1 MHz), enabling reliable contact detection. BioTUCH uses the timing of these self-touch events to refine arm poses, producing physically plausible arm configurations and significantly reducing reconstruction error. Together, these contributions support the scalable collection of high-quality 3D sign language motion data, facilitating progress toward AI-driven signing avatars.
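The contact-detection idea behind BioTUCH (a distinctive impedance drop at high frequencies signals skin-to-skin contact) can be conveyed with a minimal sketch. This is an assumption-laden toy, not the thesis's pipeline: the function name, the fixed baseline, and the 15% drop threshold are all invented for illustration.

```python
# Toy sketch of self-touch detection from a high-frequency bioimpedance
# magnitude stream; a drop below the no-contact baseline marks contact.
# Names and thresholds are hypothetical, not from BioTUCH.

def detect_self_touch(impedance_ohms, baseline_ohms, drop_ratio=0.15):
    """Return sample indices where impedance falls more than `drop_ratio`
    below the no-contact baseline, indicating skin-to-skin contact."""
    threshold = baseline_ohms * (1.0 - drop_ratio)
    return [i for i, z in enumerate(impedance_ohms) if z < threshold]

# Simulated magnitude trace (ohms) containing one contact event.
trace = [1000, 995, 990, 820, 810, 815, 985, 992]
events = detect_self_touch(trace, baseline_ohms=1000)
```

In the thesis, the timing of such events is then used as a constraint to refine the estimated arm poses.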

Haptic Intelligence Ph.D. Thesis Towards Robust and Flexible Robot State and Motion Estimation through Optimization and Learning Nubert, J. ETH Zurich, Zurich, Switzerland, June 2025, Department of Mechanical and Process Engineering (Published)

Haptic Intelligence Ph.D. Thesis Capturing and Recognizing Multimodal Surface Interactions as Embedded High-Dimensional Distributions Khojasteh, B. University of Stuttgart, Stuttgart, Germany, December 2024, Faculty of Engineering Design, Production Engineering and Automotive Engineering (Published)
Exploring a surface with a handheld tool generates complex contact signals that uniquely encode the surface's properties: a needle hidden in a haystack of data. Humans naturally integrate visual, auditory, and haptic sensory data during these interactions to accurately assess and recognize surfaces. However, enabling artificial systems to perceive and recognize surfaces with human-like proficiency remains a significant challenge. The complexity and dimensionality of multimodal sensor data, particularly in the intricate and dynamic modality of touch, hinder effective sensing and processing. Successfully overcoming these challenges will open up new possibilities in applications such as quality control, material documentation, and robotics. This dissertation addresses these issues at the levels of both the sensing hardware and the processing algorithms by introducing an automated similarity framework for multimodal surface recognition, developing a haptic-auditory test bed for acquiring high-quality surface data, and exploring optimal sensing configurations to improve recognition performance and robustness.

Haptic Intelligence Ph.D. Thesis Precision Haptics in Gait Retraining for Knee Osteoarthritis Rokhmanova, N. Carnegie Mellon University, Pittsburgh, USA, December 2024, Department of Mechanical Engineering (Published)
Gait retraining, or teaching patients to walk in ways that reduce joint loading, shows promise as a conservative intervention for knee osteoarthritis. However, its use in clinical settings remains limited by challenges in prescribing optimal gait patterns and delivering precise, real-time biofeedback. This thesis presents four interconnected studies that aim to address these barriers to clinical adoption: First, a regression model was developed to predict patient-specific biomechanical responses to a gait modification using only simple clinical measures, reducing the need for instrumented gait analysis. Second, we identified how inertial sensor accuracy fundamentally impacts motor learning outcomes during gait retraining, demonstrating the importance of reliable kinematic tracking. Third, we designed and validated an open-source wearable haptic platform called ARIADNE, which delivers precise vibrotactile motion guidance and enables rigorous comparison of feedback strategies for gait retraining. This platform's integrated sensing revealed how anatomical placement and tissue properties influence vibration transmission and perception. Finally, a gait retraining study demonstrated that vibrotactile feedback significantly improves both learning and retention of therapeutic gait patterns compared to verbal instruction alone, highlighting the critical role of precise biofeedback systems in rehabilitation. These contributions help advance the field's understanding of the sensorimotor principles underlying gait retraining while providing practical tools to support future clinical implementation.

Haptic Intelligence Ph.D. Thesis Data-Driven Needle Puncture Detection for the Delivery of Urgent Medical Care in Space L’Orsa, R. University of Calgary, Calgary, Canada, November 2024, Department of Electrical and Computer Engineering (Published)
Needle thoracostomy (NT) is a surgical procedure that treats one of the most preventable causes of trauma-related death: dangerous accumulations of air between the chest wall and the lungs. However, needle-tip overshoot of the target space can result in the inadvertent puncture of critical structures like the heart. This type of complication is fatal without urgent surgical care, which is not available in resource-poor environments like space. Since NT is done blind, operators rely on tool sensations to identify when the needle has reached its target. Needle instrumentation could enable puncture notifications to help operators limit tool-tip overshoot, but such a solution requires reliable puncture detection from manual (i.e., variable-velocity) needle insertion data streams. Data-driven puncture-detection (DDPD) algorithms are appropriate for this application, but their performance has historically been unacceptably low for use in safety-critical applications. This work contributes towards the development of an intelligent device for manual NT assistance by proposing two novel DDPD algorithms. Three data sets are collected that provide needle forces and displacements acquired during insertions into ex vivo porcine tissue analogs for the human chest, and factors affecting DDPD algorithm performance are analyzed in these data. Puncture event features are examined for each sensor, and the suitability of both accelerometer measurements and diffuse reflectance measurements are evaluated within the context of NT. Finally, DDPD ensembles are proposed that yield a 5.1-fold improvement in precision as compared to the traditional force-only DDPD approach. These results lay a foundation for improving the urgent delivery of percutaneous procedures in space and other resource-poor settings.
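The ensemble idea in the abstract (combining detectors so that a puncture is flagged only when independent evidence agrees, trading recall for precision) can be sketched in a few lines. Everything here is illustrative: the detector logic, thresholds, and function names are invented, and the thesis's actual DDPD algorithms are data-driven rather than fixed-threshold.

```python
# Toy ensemble for puncture detection: confirm a sharp force drop
# (membrane release) with a coincident displacement spike (sudden tip
# advance). All names and thresholds are hypothetical.

def force_drop_detector(force_n, drop_thresh=0.5):
    """Flag samples where the axial insertion force drops sharply."""
    return [i for i in range(1, len(force_n))
            if force_n[i - 1] - force_n[i] > drop_thresh]

def displacement_spike_detector(disp_mm, spike_thresh=0.8):
    """Flag samples where tip displacement jumps between samples."""
    return [i for i in range(1, len(disp_mm))
            if disp_mm[i] - disp_mm[i - 1] > spike_thresh]

def ensemble_punctures(force_n, disp_mm, window=1):
    """Keep force-based detections confirmed by a displacement spike
    within `window` samples, suppressing single-sensor false alarms."""
    spikes = set(displacement_spike_detector(disp_mm))
    return [i for i in force_drop_detector(force_n)
            if any(abs(i - j) <= window for j in spikes)]

punctures = ensemble_punctures([1.0, 1.6, 2.2, 1.2, 1.1],
                               [0.0, 0.5, 1.0, 2.1, 2.3])
```

Requiring agreement across sensing modalities is one way an ensemble can raise precision relative to a force-only detector, as the reported 5.1-fold improvement suggests.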

Haptic Intelligence Ph.D. Thesis Engineering and Evaluating Naturalistic Vibrotactile Feedback for Telerobotic Assembly Gong, Y. University of Stuttgart, Stuttgart, Germany, August 2024, Faculty of Engineering Design, Production Engineering and Automotive Engineering (Published)
Teleoperation allows workers on a construction site to assemble pre-fabricated building components by controlling powerful machines from a safe distance. However, teleoperation's primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, compromising their situational awareness and thus increasing the difficulty of the task; it also makes construction machines more difficult to learn to operate. To bridge this gap, we propose that reliable, economical, and easy-to-implement naturalistic vibrotactile feedback could improve telerobotic control interfaces in construction and other application areas such as surgery. This type of feedback enables the operator to feel the natural vibrations experienced by the robot, which contain crucial information about its motions and its physical interactions with the environment. This dissertation explores how to deliver naturalistic vibrotactile feedback from a robot's end-effector to the hand of an operator performing telerobotic assembly tasks; furthermore, it seeks to understand the effects of such haptic cues. The presented research can be divided into four parts. We first describe the engineering of AiroTouch, a naturalistic vibrotactile feedback system tailored for use on construction sites but suitable for many other applications of telerobotics. Then we evaluate AiroTouch and explore the effects of the naturalistic vibrotactile feedback it delivers in three user studies conducted either in laboratory settings or on a construction site. We begin this dissertation by developing guidelines for creating a haptic feedback system that provides high-quality naturalistic vibrotactile feedback. These guidelines include three sections: component selection, component placement, and system evaluation. We detail each aspect with the parameters that need to be considered. 
Based on these guidelines, we adapt widely available commercial audio equipment to create our system called AiroTouch, which measures the vibration experienced by each robot tool with a high-bandwidth three-axis accelerometer and enables the user to feel this vibration in real time through a voice-coil actuator. Accurate haptic transmission is achieved by optimizing the positions of the system's off-the-shelf sensors and actuators and is then verified through measurements. The second part of this thesis presents our initial validation of AiroTouch. We explored how adding this naturalistic type of vibrotactile feedback affects the operator during small-scale telerobotic assembly. Due to the limited accessibility of teleoperated robots and to maintain safety, we conducted a user study in the lab with a commercial bimanual dexterous teleoperation system developed for surgery (Intuitive da Vinci Si). Thirty participants used this robot equipped with AiroTouch to assemble a small stiff structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that participants learn to take advantage of both tested versions of the haptic feedback in the given tasks, as significantly lower vibrations and forces are observed in the second trial. Subjective responses indicate that naturalistic vibrotactile feedback increases the realism of the interaction and reduces the perceived task duration, task difficulty, and fatigue. To test our approach on a real construction site, we enhanced AiroTouch using wireless signal-transmission technologies and waterproofing, and then we adapted it to a mini-crane construction robot. A study was conducted to evaluate how naturalistic vibrotactile feedback affects an observer's understanding of telerobotic assembly performed by this robot on a construction site.
Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicates that all participants had positive responses to this technology and believed it would be beneficial for construction activities. Finally, we evaluated the effects of naturalistic vibrotactile feedback provided by wireless AiroTouch during live teleoperation of the mini-crane. Twenty-eight participants remotely controlled the mini-crane to complete three large-scale assembly-related tasks in the lab, both with and without this type of haptic feedback. Our results show that naturalistic vibrotactile feedback enhances the participants' awareness of both robot motion and contact between the robot and other objects, particularly in scenarios with limited visibility. These effects increase participants' confidence when controlling the robot. Moreover, there is a noticeable trend of reduced vibration magnitude in the conditions where this type of haptic feedback is provided. The primary contribution of this dissertation is the clear explanation of details that are essential for the effective implementation of naturalistic vibrotactile feedback. We demonstrate that our accessible, audio-based approach can enhance user performance and experience during telerobotic assembly in construction and other application domains. These findings lay the foundation for further exploration of the potential benefits of incorporating haptic cues to enhance user experience during teleoperation.
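The "summed three-axis vibrations" condition described above reduces three acceleration streams to the single channel a voice-coil actuator can render. AiroTouch does this with analog audio hardware; the digital sketch below only conveys the signal-path idea, and the function name and gain parameter are assumptions.

```python
# Illustrative sketch: collapse three-axis accelerometer samples into a
# one-dimensional drive signal for a single voice-coil actuator by
# summing the axes. Names and the gain are hypothetical.

def summed_drive_signal(ax, ay, az, gain=1.0):
    """Combine three acceleration streams (same length, same sample
    rate) into one actuator signal, sample by sample."""
    return [gain * (x + y + z) for x, y, z in zip(ax, ay, az)]

signal = summed_drive_signal([0.1, -0.2], [0.0, 0.1], [0.3, 0.0])
```

A direct sum preserves the spectral content of the vibration but lets components of opposite sign cancel, which is one reason the thesis compares it against a one-axis condition.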

Haptic Intelligence Ph.D. Thesis Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion Burns, R. B. University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (Published)
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
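The abstract's "second-order dynamics" approach to long-term robot emotion can be pictured as a damped spring pulling the emotion state toward the current stimulus, so brief touches perturb it only transiently while sustained touch shifts it durably. The sketch below is a generic semi-implicit Euler integration of such a system; the gains and names are illustrative assumptions, not HERA's actual model.

```python
# Minimal sketch of second-order (spring-damper-like) emotion dynamics.
# `stimulus` is the instantaneous emotional valence of incoming touch;
# `level` is the slowly evolving emotion state. Gains are invented.

def step_emotion(level, velocity, stimulus, dt=0.1,
                 stiffness=4.0, damping=4.0):
    """One integration step: the emotion level is pulled toward the
    stimulus with spring-like stiffness and settles via damping."""
    accel = stiffness * (stimulus - level) - damping * velocity
    velocity += accel * dt
    level += velocity * dt
    return level, velocity

level, vel = 0.0, 0.0
for _ in range(200):  # sustained positive touch eventually shifts mood
    level, vel = step_emotion(level, vel, stimulus=1.0)
```

With these gains the state converges smoothly to the stimulus value, giving the inertia-like, history-dependent emotional behavior the thesis aims for.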

Haptic Intelligence Ph.D. Thesis Gesture-Based Nonverbal Interaction for Exercise Robots Mohan, M. University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (Published)
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Haptic Intelligence Ph.D. Thesis Multi-Timescale Representation Learning of Human and Robot Haptic Interactions Richardson, B. University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (Published)
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. 
The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.
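The final contribution above, recursively updating an object representation as exploration evidence accumulates, is easiest to see in a simplified probabilistic form. The thesis uses learned recurrent updates over rich haptic features; this toy log-odds version, with invented names and likelihood ratios, only conveys the accumulate-and-gain-confidence idea.

```python
# Toy recursive evidence accumulation: fold each exploration's evidence
# (a likelihood ratio for "object has this property") into the current
# belief. A purely illustrative stand-in for the learned update.

def update_belief(prob, likelihood_ratio):
    """Bayesian odds update: prior probability in, posterior out."""
    odds = prob / (1.0 - prob) * likelihood_ratio
    return odds / (1.0 + odds)

belief = 0.5                # uninformative prior before any exploration
for lr in [2.0, 3.0, 1.5]:  # three explorations, each favoring the property
    belief = update_belief(belief, lr)
```

As with the robot gripper in the thesis, each additional exploratory procedure moves the belief further from the prior, so confidence grows with sequential exploration rather than resetting at every touch.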

Haptic Intelligence Ph.D. Thesis Understanding the Influence of Moisture on Fingerpad-Surface Interactions Nam, S. University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (Published)
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. 
We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. 
We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue decreases as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies have revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicates that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions combining low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. 
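As a rough illustration of the SLS-Kelvin element described above, the following sketch integrates the model under a ramp-and-hold displacement; the parameter values are illustrative placeholders, not the optimized values from the thesis.

```python
# Sketch of an SLS-Kelvin element: spring k1 in series with a Kelvin-Voigt
# branch (spring k2 in parallel with damper c). Parameter values are
# illustrative, not the thesis's optimized ones.

def sls_kelvin_forces(times, displacement, k1=2.0, k2=1.0, c=0.5):
    """Integrate the model with explicit Euler.

    Total displacement x = x1 + x2, with force F = k1*x1 = k2*x2 + c*dx2/dt.
    """
    forces, x2 = [], 0.0
    for i, t in enumerate(times):
        f = k1 * (displacement(t) - x2)        # force through series spring
        forces.append(f)
        if i + 1 < len(times):
            dt = times[i + 1] - t
            x2 += dt * (f - k2 * x2) / c       # Kelvin-Voigt branch update
    return forces

# Ramp to 1 unit of displacement in 0.1 s, then hold: the force peaks and
# then relaxes toward the long-term series stiffness k1*k2/(k1 + k2).
times = [i * 1e-3 for i in range(2000)]
forces = sls_kelvin_forces(times, lambda t: min(t / 0.1, 1.0))
```

Fitting k1, k2, and c so that simulated forces like these match recorded pressing trials is exactly the kind of objective a particle-swarm optimizer can minimize.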
Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.
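The impedance-based moisture measurement can be illustrated with a minimal equivalent-circuit calculation; the circuit topology and component values below are assumptions for illustration, not the equivalent circuit identified in the thesis.

```python
# Minimal equivalent-circuit illustration for a coated (insulated)
# capacitive moisture sensor: the insulation capacitance C_ins in series
# with a sweat-dependent surface resistance R_sweat. This topology and
# these component values are assumptions for illustration only.
import math

def impedance_magnitude(freq_hz, r_sweat_ohm, c_ins_farad=1e-9):
    omega = 2.0 * math.pi * freq_hz
    z = r_sweat_ohm + 1.0 / (1j * omega * c_ins_farad)   # series R-C
    return abs(z)

# More sweat on the sensor surface -> lower resistance -> lower |Z| at 1 kHz.
z_dry = impedance_magnitude(1e3, r_sweat_ohm=5e6)
z_moist = impedance_magnitude(1e3, r_sweat_ohm=5e5)
```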
DOI BibTeX

Haptic Intelligence Ph.D. Thesis HuggieBot: An Interactive Hugging Robot With Visual and Haptic Perception Block, A. E. ETH Zürich, Zürich, Switzerland, August 2021, Department of Computer Science (Published)
Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe adverse effects on an individual's well-being. Due to the prevalence and health benefits of hugging, roboticists are interested in creating robots that can hug humans as seamlessly as humans hug other humans. However, hugs are complex affective interactions that need to adapt to the height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. This dissertation aims to create a series of hugging robots that use visual and haptic perception to provide enjoyable interactive hugs. Each of the four presented HuggieBot versions is evaluated by measuring how users emotionally and behaviorally respond to hugging it; HuggieBot 4.0 is explicitly compared to a human hugging partner using physiological measures. Building on research both within and outside of human-robot interaction (HRI), this thesis proposes eleven tenets of natural and enjoyable robotic hugging. These tenets were iteratively crafted through a design process combining user feedback and experimenter observation, and they were evaluated through user studies. A good hugging robot should (1) be soft, (2) be warm, (3) be human-sized, (4) autonomously invite the user for a hug when it detects someone in its personal space, and then it should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience. It should also (5) adjust its embrace to the user's size and position, (6) reliably release when the user wants to end the hug, and (7) perceive the user's height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations. 
Finally, a hugging robot should (8) accurately detect and classify gestures applied to its torso in real time, regardless of the user's hand placement, (9) respond quickly to their intra-hug gestures, (10) adopt a gesture paradigm that blends user preferences with slight variety and spontaneity, and (11) occasionally provide unprompted, proactive affective social touch to the user through intra-hug gestures. We believe these eleven tenets are essential to delivering high-quality robot hugs. Their presence results in a hug that pleases the user, and their absence results in a hug that is likely to be inadequate. We present these tenets as guidelines for future hugging robot creators to follow when designing new hugging robots to ensure user acceptance. We tested the four versions of HuggieBot through six user studies. First, we analyzed data collected in a previous study with a modified Willow Garage Personal Robot 2 (PR2) to evaluate human responses to different robot physical characteristics and hugging behaviors. Participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Second, we created an entirely new robotic platform, HuggieBot 2.0, according to our first six tenets. The new platform features a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform compared to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. 
We then refine the original fourth tenet (visually perceive its user) and present the remaining five tenets for designing interactive hugging robots; we validate the full list of eleven tenets through more in-person studies with our custom robot. To enable perceptive and pleasing autonomous robot behavior, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. The microphone and pressure sensor inside the robot's inflated torso collected data from 32 people repeatedly demonstrating these gestures; these data were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. From user preferences, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create a third version of our robot, HuggieBot 3.0. We then validated its gesture perception system and behavior algorithm in a fifth user study with 16 users. Finally, we refined the quality and comfort of the embrace by adjusting the joint torques and joint angles of the closed pose position, we further improved the robot's visual perception to detect changes in user approach, we upgraded the robot's response to users who do not press on its back, and we had the robot respond to all intra-hug gestures with squeezes to create our final version of the robotic platform, HuggieBot 4.0. In our sixth user study, we investigated the emotional and physiological effects of hugging a robot compared to the effects of hugging a friendly but unfamiliar person. We continuously monitored participant heart rate and collected saliva samples at seven time points across the 3.5-hour study to measure the temporal evolution of cortisol and oxytocin. We used an adapted Trier Social Stress Test (TSST) protocol to reliably and ethically induce stress in the participants. They then experienced one of five different hug intervention methods before all interacting with HuggieBot 4.0. 
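A probabilistic response policy of the kind described above might look like the following sketch; the gesture names come from the text, while the preference weighting is an illustrative assumption.

```python
# Sketch of a preference-weighted intra-hug response policy: the robot
# usually answers with the user's preferred gesture response but sometimes
# picks another for variety. Gesture names come from the text above; the
# probabilities are illustrative assumptions.
import random

RESPONSES = ["hold", "rub", "pat", "squeeze"]

def choose_response(preferred, p_preferred=0.8, rng=random):
    """Return the robot's response to a detected intra-hug gesture."""
    if rng.random() < p_preferred:
        return preferred                       # honor the stated preference
    return rng.choice([r for r in RESPONSES if r != preferred])  # variety

rng = random.Random(7)
picks = [choose_response("squeeze", rng=rng) for _ in range(1000)]
```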
The results of these six user studies validated our eleven hugging tenets and informed the iterative design of HuggieBot. We see that users enjoy robot softness, robot warmth, and being physically squeezed by the robot. Users dislike being released too soon from a hug and equally dislike being held by the robot for too long. Adding haptic reactivity definitively improves user perception of a hugging robot; the robot's responses and proactive intra-hug gestures were greatly enjoyed. In our last study, we learned that HuggieBot can positively affect users on a physiological level and is somewhat comparable to hugging a person. Participants have more favorable opinions about hugging robots after prolonged interaction with HuggieBot in all of our research studies.
DOI BibTeX

Haptic Intelligence Ph.D. Thesis Delivering Expressive and Personalized Fingertip Tactile Cues Young, E. M. University of Pennsylvania, Philadelphia, PA, December 2020, Department of Mechanical Engineering and Applied Mechanics (Published)
Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback remains a challenge that is far from solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware- and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept, and then continue to develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short. 
Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we begin to explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To address alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then explore the idea of personalizing feedback for a particular user. We present two software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.
BibTeX

Haptic Intelligence Ph.D. Thesis Modulating Physical Interactions in Human-Assistive Technologies Hu, S. University of Pennsylvania, Philadelphia, PA, August 2020, Department of Mechanical Engineering and Applied Mechanics (Published)
Many mechanical devices and robots operate in home environments, and they offer rich experiences and valuable functionalities for human users. When these devices interact physically with humans, additional care has to be taken in both hardware and software design to ensure that the robots provide safe and meaningful interactions. It is advantageous for the robots to be customizable so that users can tailor them to their specific needs. There are many robot platforms that strive toward these goals, but the most successful robots in our world are either separated from humans (such as in factories and warehouses) or occupy the same space as humans but do not offer physical interactions (such as cleaning robots). In this thesis, we envision a suite of assistive robotic devices that assist people in their daily, physical tasks. Specifically, we begin with a hybrid force display that combines a cable, a brake, and a motor, which offers safe and powerful force output with a large workspace. Virtual haptic elements, including free space, constant force, springs, and dampers, can be simulated by this device. We then adapt the hybrid mechanism and develop the Gait Propulsion Trainer (GPT) for stroke rehabilitation, where we aim to reduce propulsion asymmetry by applying resistance at the user's pelvis during the unilateral-stance phase of gait. Sensors underneath the user's shoes and a wireless communication module are added to precisely control the timing of the resistance force. To address the effort of parameter tuning in determining the optimal training scheme, we then develop a learning-from-demonstration (LfD) framework in which robot behavior can be obtained from data, thus bypassing some of the tuning effort while enabling customization and generalization for different task situations. This LfD framework is evaluated in simulation and in a user study, and results show improved objective performance and human perception of the robot. 
Finally, we apply the LfD framework in an upper-limb therapy setting, where the robot directly learns the force output from a therapist when supporting stroke survivors in various physical exercises. Six stroke survivors and an occupational therapist provided demonstrations and tested the autonomous robot behaviors in a user study, and we obtain preliminary insights toward making the robot more intuitive and more effective for both therapists and clients of different impairment levels. This thesis thus considers both hardware and software design for robotic platforms, and we explore both direct and indirect force modulation for human-assistive technologies.
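The GPT's stance-phase-triggered resistance described above could be sketched as a simple state check on binary foot-contact signals; which leg triggers the resistance and the force magnitude are assumptions here.

```python
# Hypothetical sketch of the resistance-timing logic: command a pelvic
# resistance force only during one leg's single-support (unilateral stance)
# phase, inferred from binary foot-contact switches under the shoes. The
# choice of triggering leg and the force magnitude are assumptions.

def resistance_command(trigger_contact, other_contact, force_n=20.0):
    """Return the pelvic resistance force (N) for one sensor sample."""
    unilateral_stance = trigger_contact and not other_contact
    return force_n if unilateral_stance else 0.0

# One simulated gait cycle of (trigger leg, other leg) contact samples:
cycle = [(True, True), (True, False), (True, True), (False, True)]
commands = [resistance_command(a, b) for a, b in cycle]
```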
BibTeX

Haptic Intelligence Ph.D. Thesis Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties Burka, A. L. University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (Published)
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. 
Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. Then we trained an algorithm to perform the same task as well as infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved a correlation between expected and actual regression output between approximately ρ = 0.3 and ρ = 0.5 on previously unseen surfaces.
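The vision-to-haptics regression task can be illustrated in miniature; the thesis combines a deep neural network with a support vector machine, whereas this sketch substitutes a simple ridge regressor on synthetic stand-in features and scores it with the Pearson correlation used in the evaluation.

```python
# Miniature stand-in for the vision-to-haptics regression: fit a ridge
# regressor on synthetic "visual features" and score it with the Pearson
# correlation. All data here are random placeholders, not thesis data.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))                 # stand-in features
true_w = rng.normal(size=16)
hardness = features @ true_w + 0.5 * rng.normal(size=200)  # noisy target

train_x, test_x = features[:150], features[150:]
train_y, test_y = hardness[:150], hardness[150:]

# Ridge regression: w = (X^T X + lam*I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(train_x.T @ train_x + lam * np.eye(16),
                    train_x.T @ train_y)
rho = np.corrcoef(test_x @ w, test_y)[0, 1]           # Pearson correlation
```

On this nearly linear synthetic task ρ is high; the thesis's real task of inferring touch from vision is far harder, which is why its reported correlations fall between 0.3 and 0.5.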
BibTeX

Haptic Intelligence Ph.D. Thesis Design and Evaluation of Interactive Hand-Clapping Robots Fitter, N. T. University of Pennsylvania, August 2017, Department of Mechanical Engineering and Applied Mechanics (Published)
Human friends commonly connect through handshakes and high fives, and children around the world rejoice at hand-clapping games. As robots enter everyday human spaces, they will have the opportunity to join in such physical interactions, but few current robots are intended to touch humans. How should robots move and react in playful hand-to-hand interactions with people? We conducted research in four main areas to address this design challenge. First, we implemented and tested an initial hand-clapping robotic system. This effort began by recording sensor data from people performing a variety of hand-clapping activities; the resulting accelerometer and position data taught us how to design appropriate hand-clapping robot motion and logic. Implementation on a Rethink Robotics Baxter Research Robot demonstrated that a robot could move like our human participants and reliably detect hand impacts through its wrist-mounted accelerometers. N = 20 study participants clapped hands with differently configured versions of this robot in random order: the robot's facial animation, physical reactivity, arm stiffness, and clapping tempo all significantly affected how users perceived the robot. We next sought to create and evaluate more sophisticated robot hand-clapping behaviors. Data from people performing interactive clapping tasks at increasing and decreasing tempos helped us propose prospective timing models and implement adaptive-tempo Baxter play. In a subsequent experiment that involved N = 20 users, a mischievous Baxter was equipped with the top-performing tempo adaptation model and chose to play cooperatively or asynchronously with its human partner. Although a few participants reacted positively to Baxter's mischief, users overwhelmingly preferred a synchronous, cooperative robot. Third, we set up and conducted a human-robot interaction experiment more similar to everyday human-human hand-clapping interactions. 
A machine learning pipeline trained on inertial data from human motions demonstrated that linear support vector machines (SVMs) can classify a new person's hand-clapping actions with an accuracy of about 95%. This technique succeeded for both hand- and wrist-mounted inertial sensors, enabling people to teach the Baxter robot new hand-clapping games. Evaluation of various two-handed clapping play activities by N = 24 users showed that learning games from Baxter was significantly easier than teaching Baxter games, but that the teaching role caused people to consider more teamwork aspects of the gameplay. Finally, to broaden the scope of these interactions, we began exploring applications of Baxter in socially assistive robotics. Using many of the same sensing and actuation strategies, we developed a set of six playful hand-to-hand contact-based exercise interactions to be jointly executed between a person and Baxter, along with two similar non-contact games. A proof-of-concept experiment using these exercise games enrolled N = 20 young adults and N = 14 healthy adults over age 53. The results demonstrated that people are willing and motivated to interact with the robot in this way and that different games promote unique physical and cognitive exercise effects. Overall, this research aims to help shape design processes for socially relevant physical human-robot interaction and reveal new opportunities for socially assistive robotics.
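A tempo-adaptation rule in the spirit of the timing models described above might predict the partner's next clap from recent inter-clap intervals; this exponentially weighted scheme is an illustrative assumption, not the thesis's top-performing model.

```python
# Illustrative tempo-adaptation rule: predict the partner's next clap time
# from an exponentially weighted average of recent inter-clap intervals.
# The weighting scheme and alpha value are assumptions for illustration.

def predict_next_clap(clap_times, alpha=0.6):
    """Estimate when the next clap will occur, favoring recent tempo."""
    intervals = [b - a for a, b in zip(clap_times, clap_times[1:])]
    est = intervals[0]
    for iv in intervals[1:]:
        est = alpha * iv + (1 - alpha) * est   # weight recent intervals more
    return clap_times[-1] + est

# A partner gradually speeding up from 0.50 s to 0.40 s between claps:
claps = [0.0, 0.50, 0.98, 1.42, 1.82]
next_clap = predict_next_clap(claps)
```

Because recent intervals dominate, the predicted next interval tracks the speeding-up partner rather than the stale initial tempo.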
BibTeX