Publications

Perceiving Systems Ph.D. Thesis Towards Fine-grained 3D Human Motion Generation from Textual Instructions Athanasiou, N. University of Tübingen, April 2026 (Published)
Controlling 3D human motion through natural language is key to unlocking interactive experiences in animation, gaming, virtual reality, and robotics. Yet current generative models still struggle with the compositional nature of real human behavior — actions unfold in sequence, overlap in time, and need fine-grained editing, all of which demand more than single-prompt, single-motion generation. This thesis argues that truly controllable motion generation requires compositional thinking: the ability to chain actions over time, layer them across body parts, and refine them through iterative editing. I present a suite of methods and datasets that tackle each of these axes. I first introduce TEACH, a hierarchical Transformer-based model that generates temporally coherent motion sequences from a series of textual descriptions, handling transitions between consecutive actions. I then address spatial composition with SINC, which synthesizes simultaneous actions — such as waving while walking — by leveraging structural knowledge about body-part involvement extracted from large language models. Moving from generation to editing, I present MotionFix, a dataset of source–target–edit-text triplets, together with TMED, a conditional diffusion model that modifies existing motions according to fine-grained textual instructions. Underpinning these contributions is BABEL, a large-scale dataset of semantically rich, frame-level annotations for motion-capture data that serves as a shared foundation for training and benchmarking across all three tasks. Together, these contributions advance language-driven motion generation from isolated actions toward the compositional, editable control that real-world applications demand.
PhD Thesis BibTeX

Social Foundations of Computation Conference Paper ROC-n-reroll: How Verifier Imperfection affects Test-Time Scaling Dorner, F. E., Chen, Y. C., Cruz, A. F., Yang, F. Y. The Fourteenth International Conference on Learning Representations (ICLR), April 2026 (Accepted)
Test-time scaling aims to improve language model performance by leveraging additional compute during inference. Many works have empirically studied techniques such as Best-of-N (BoN) and Rejection Sampling (RS) that make use of a verifier to enable test-time scaling. However, to date there is little theoretical understanding of how verifier imperfection affects performance, a gap we address in this work. Specifically, we prove that the instance-level accuracy of these methods is precisely characterized by the geometry of the verifier's ROC curve. Our theory has two important takeaways, confirmed by experiments with Qwen and Llama models on GSM8K and MATH500. First, RS outperforms BoN for fixed compute, while both methods converge to the same accuracy in the infinite-compute limit. Second, it is generally impossible to predict the high-compute performance of either method based on observations in the low-compute regime.
arXiv BibTeX
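
To make the two test-time scaling schemes concrete, here is a minimal sketch of verifier-based Best-of-N and Rejection Sampling; `sample_answer`, `verifier`, and the acceptance threshold are illustrative placeholders, not the paper's implementation.
```python
import random

def best_of_n(sample_answer, verifier, n):
    """Best-of-N: draw n candidate answers, return the highest-scoring one."""
    candidates = [sample_answer() for _ in range(n)]
    return max(candidates, key=verifier)

def rejection_sampling(sample_answer, verifier, threshold, max_draws=64):
    """Rejection Sampling: redraw until the verifier score clears a threshold."""
    answer = sample_answer()
    draws = 1
    while verifier(answer) < threshold and draws < max_draws:
        answer = sample_answer()
        draws += 1
    return answer

# Toy usage: answers are numbers, the verifier is a noisy judge of quality.
sample = lambda: random.gauss(0.0, 1.0)
noisy_verifier = lambda a: a + random.gauss(0.0, 0.5)
print(best_of_n(sample, noisy_verifier, n=8))
print(rejection_sampling(sample, noisy_verifier, threshold=1.0))
```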

Social Foundations of Computation Miscellaneous Text as the Richest Preference Signal Cruz, A. F., Kleinberg, J., Abebe, R. The Fourteenth International Conference on Learning Representations (ICLR), AIMS Workshop, April 2026 (Accepted)
Preference elicitation algorithms have long relied on structured representations of user preferences: rankings of items, ratings, or simple binary interactions (e.g., views). Over the years, we've slowly become aware of the limitations and biases these representations entail. Users form preferences over items' features rather than items themselves. In this paper, we explore natural language as a first-class preference representation, beyond a mere cold-start aid. We study three parallel representations of user preferences: (i) a user-item interaction matrix, (ii) free-form text profiles describing users' preferences, and (iii) interpretable tabular features derived by an LLM from these text profiles. Our findings unfold in three parts. First, text-based predictors substantially outperform collaborative filtering in the cold-start regime and remain competitive as interaction histories grow. Second, most of the predictive signal in text can be retained in a compact, interpretable tabular representation. Third, the three representations are complementary: simple ensembles that combine them consistently achieve the strongest performance.
BibTeX
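
The closing point about complementary representations can be pictured with a simple weighted ensemble; the equal weights and score inputs below are assumptions for illustration, not the paper's combination rule.
```python
import numpy as np

def ensemble_scores(score_cf, score_text, score_tab, weights=(1/3, 1/3, 1/3)):
    """Combine per-item preference scores from the three representations:
    collaborative filtering, text profiles, and LLM-derived tabular features."""
    w = np.asarray(weights, dtype=float)
    scores = np.stack([np.asarray(score_cf), np.asarray(score_text),
                       np.asarray(score_tab)], axis=0)
    return np.tensordot(w, scores, axes=1)  # weighted average per item
```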

Perceiving Systems Conference Paper Predicting 4D Hand Trajectory from Monocular Videos Ye, Y., Feng, Y., Taheri, O., Feng, H., Black, M. J., Tulsiani, S. In Int. Conf. on 3D Vision (3DV), March 2026 (Accepted)
We present HAPTIC, an approach that infers coherent 4D hand trajectories from monocular videos. Current video-based hand pose reconstruction methods primarily focus on improving frame-wise 3D pose using adjacent frames rather than studying consistent 4D hand trajectories in space. Despite the additional temporal cues, they generally underperform compared to image-based methods due to the scarcity of annotated video data. To address these issues, we repurpose a state-of-the-art image-based transformer to take in multiple frames and directly predict a coherent trajectory. We introduce two types of lightweight attention layers: cross-view self-attention to fuse temporal information, and global cross-attention to bring in larger spatial context. Our method infers 4D hand trajectories similar to the ground truth while maintaining strong 2D reprojection alignment. We apply the method to both egocentric and allocentric videos. It significantly outperforms existing methods in global trajectory accuracy while being comparable to the state-of-the-art in single-image pose estimation.
project arXiv code BibTeX
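
The two lightweight attention layers described above might be sketched as follows in PyTorch; the dimensions, head counts, and class name are hypothetical, and this is only a schematic of the idea, not the authors' architecture.
```python
import torch.nn as nn

class TrajectoryFusionBlock(nn.Module):
    """Schematic of the paper's two layer types: self-attention across frames
    to fuse temporal information, then cross-attention to bring in a larger
    spatial context sequence."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.context_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, frame_tokens, context_tokens):
        # frame_tokens: (batch, frames, dim); context_tokens: (batch, ctx, dim)
        x, _ = self.temporal_attn(frame_tokens, frame_tokens, frame_tokens)
        x, _ = self.context_attn(x, context_tokens, context_tokens)
        return x
```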

Perceiving Systems Conference Paper Supervising 3D Talking Head Avatars with Analysis-by-Audio-Synthesis Danecek, R., Schmitt, C., Polikovsky, S., Black, M. J. In Int. Conf. on 3D Vision (3DV), March 2026 (Accepted)
In order to be widely applicable, speech-driven 3D head avatars must articulate their lips in accordance with speech, while also conveying the appropriate emotions with dynamically changing facial expressions. The key problem is that deterministic models produce high-quality lip-sync but without rich expressions, whereas stochastic models generate diverse expressions but with lower lip-sync quality. To get the best of both, we seek a stochastic model with accurate lip-sync. To that end, we develop a new approach based on the following observation: if a method generates realistic 3D lip motions, it should be possible to infer the spoken audio from the lip motion. The inferred speech should match the original input audio, and erroneous predictions create a novel supervision signal for training 3D talking head avatars with accurate lip-sync. To demonstrate this effect, we propose THUNDER (Talking Heads Under Neural Differentiable Elocution Reconstruction), a 3D talking head avatar framework that introduces a novel supervision mechanism via differentiable sound production. First, we train a novel mesh-to-speech model that regresses audio from facial animation. Then, we incorporate this model into a diffusion-based talking avatar framework. During training, the mesh-to-speech model takes the generated animation and produces a sound that is compared to the input speech, creating a differentiable analysis-by-audio-synthesis supervision loop. Our extensive qualitative and quantitative experiments demonstrate that THUNDER significantly improves the quality of the lip-sync of talking head avatars while still allowing for generation of diverse, high-quality, expressive facial animations.
project arXiv BibTeX
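
The analysis-by-audio-synthesis loop reduces to a differentiable reconstruction loss; the mean-squared error below is an assumed distance (the abstract only says the regressed sound "is compared to the input speech"), and `mesh_to_speech` stands in for the trained mesh-to-speech model.
```python
import torch.nn.functional as F

def audio_synthesis_loss(mesh_to_speech, generated_animation, input_audio_feats):
    """Regress audio features back from the generated facial animation and
    penalize mismatch with the original speech; gradients flow through the
    mesh-to-speech model into the animation generator."""
    predicted_audio = mesh_to_speech(generated_animation)
    return F.mse_loss(predicted_audio, input_audio_feats)
```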

Perceiving Systems Conference Paper NeuralFur: Animal Fur Reconstruction from Multi-view Images Sklyarova, V., Kabadayi, B., Yiannakidis, A., Becherini, G., Black, M. J., Thies, J. In Int. Conf. on 3D Vision (3DV), March 2026 (Accepted)
Reconstructing realistic animal fur geometry from images is a challenging task due to the fine-scale details, self-occlusion, and view-dependent appearance of fur. In contrast to human hairstyle reconstruction, there are also no datasets that could be leveraged to learn a fur prior for different animals. In this work, we present a first multi-view-based method for high-fidelity 3D fur modeling of animals using a strand-based representation, leveraging the general knowledge of a vision language model. Given calibrated multi-view RGB images, we first reconstruct a coarse surface geometry using traditional multi-view stereo techniques. We then use a visual question answering (VQA) system to retrieve information about the realistic length structure of the fur for each part of the body. We use this knowledge to construct the animal’s furless geometry and grow strands atop it. The fur reconstruction is supervised with both geometric and photometric losses computed from multi-view images. To mitigate orientation ambiguities stemming from the Gabor filters that are applied to the input images, we additionally utilize the VQA to guide the strands' growth direction and their relation to the gravity vector that we incorporate as a loss. With this new schema of using a VQA model to guide 3D reconstruction from multi-view inputs, we show generalization across a variety of animals with different fur types.
project arXiv code BibTeX
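
The VQA-guided gravity term could take a form like the sketch below, penalizing strand growth directions that deviate from a suggested angle to gravity; the squared-cosine penalty and names are assumptions, as the abstract does not give the loss formula.
```python
import torch
import torch.nn.functional as F

def gravity_alignment_loss(strand_dirs, target_cos, gravity=(0.0, -1.0, 0.0)):
    """strand_dirs: (..., 3) growth directions; target_cos: cosine of the
    VQA-suggested angle between fur strands and the gravity vector."""
    g = F.normalize(torch.tensor(gravity), dim=0)
    d = F.normalize(strand_dirs, dim=-1)
    cos = (d * g).sum(dim=-1)            # cosine to gravity per strand
    return ((cos - target_cos) ** 2).mean()
```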

Perceiving Systems Article Textile suit for anywhere full-body motion capture Sun, H., Feng, Y., Kao, P., Black, M. J., Kramer-Bottiglio, R. Science Advances, 12(10):1-15, March 2026 (Published)
Wearable technology has shown notable promise for tracking human motion, offering valuable insights for fields ranging from biomechanics to healthcare. Traditional motion capture systems, however, are often bulky and disruptive, making them impractical for daily use. Advances in textile-based sensing offer a promising alternative, enabling seamless integration of air- and sweat-permeable sensors into everyday clothing. Here, a sensorized textile suit designed for unobtrusive full-body motion capture is presented. The suit is capable of accurately tracking complex movements without interfering with routine activities. This wearable, using an individual-customized network of fabric-based sensors, autonomously identifies and monitors movement angles and patterns, providing insights into physical range, activity frequency, and exertion levels. Language models are shown to interpret motion data into descriptive language, enhancing its potential for real-world applications. This sensorized textile suit and corresponding algorithms represent a step forward in accessible, continuous movement monitoring in the form of everyday clothing, opening avenues for studying human behavior and health in natural environments. YSuit is a fully textile suit for accurate, customizable, washable, and comfortable anywhere full-body motion capture.
pdf publisher site data DOI BibTeX
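
Individual customization of fabric sensors usually implies a per-user calibration step; a minimal least-squares version is sketched below (a linear sensor-to-angle map is an assumption here, not the suit's published algorithm).
```python
import numpy as np

def calibrate_sensor_to_angle(readings, angles_deg):
    """Fit angle = a * reading + b for one fabric strain sensor and joint."""
    A = np.stack([readings, np.ones_like(readings)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, angles_deg, rcond=None)
    return a, b

# Usage: map a new sensor reading to an estimated joint angle.
a, b = calibrate_sensor_to_angle(np.array([1.0, 1.2, 1.5]),
                                 np.array([0.0, 30.0, 75.0]))
estimated_angle = a * 1.35 + b
```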

Haptic Intelligence Article Comparing Placement and Polarity Configurations of a Two-Magnet Fingertip Vibrotactile Device Gertler, I., Ballardini, G., Tangolar, D., Serhat, G., Kuchenbecker, K. J. Scientific Reports, March 2026 (Published)
Vibrotactile feedback enriches the use of wearable technologies for entertainment, navigation, and healthcare. The actuators of these portable systems, particularly fingertip devices, need to be compact, comfortable, and easy to integrate. Multiple vibrating elements could enhance perceptual realism, but how should they be arranged and oriented on the fingerpad? Here, we evaluate a simple approach that uses an audio input signal to drive an air coil that vibrates two magnets embedded in a soft fingertip sheath; the magnets are arranged in the radial-ulnar or proximal-distal direction with either the same or opposite polarity. We explore the effects of these new device configurations on both dynamic response and haptic perception. Experimental results indicate that the vibrations were perceived well across frequencies, with stronger sensations between 180 and 360 Hz, which aligns with the high vibration magnitudes our computational simulation predicts in this frequency range. Interestingly, perceptual responses showed that participants mainly classified vibrations based on the excitation frequency rather than the polarity of the magnets. Participants also rated vibrotactile feedback derived from recorded sounds and replayed for different interactions. Their evaluations offer promising evidence that this actuation approach could be used in extended-reality applications to improve transient user interactions with virtual objects.
DOI BibTeX

Haptic Intelligence Conference Paper Designing a Psychotherapy Support Robot for Young Children Diagnosed with Obsessive-Compulsive Disorder Mohan, M., L’Orsa, R., Grüninger, F., Stollhof, B., Klein, C. S., Dinauer, R., Burns, R. B., Renner, T. J., Hollmann, K., Kuchenbecker, K. J. In Companion Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 1-6, Late-Breaking Report (LBR) (6 pages) presented at the IEEE/ACM International Conference on Human-Robot Interaction (HRI), Edinburgh, UK, March 2026, Mayumi Mohan and Rachael L'Orsa contributed equally to this publication (Published)
The gold-standard treatment for children diagnosed with obsessive-compulsive disorder (OCD) is therapist-guided cognitive behavioral therapy (CBT), which includes exposure and response prevention (ERP) sessions that teach children to overcome compulsive responses when exposed to their anxiety-inducing triggers. CBT requires children to report frequent self-assessments of tension during both therapist-supported and therapist-free self-management ERP sessions. Videoconferencing-delivered CBT (vCBT) enables a psychotherapist to treat a child remotely in their home, where OCD symptoms often arise, but these remote therapeutic interactions lack physical presence and can be challenging to run. We propose using a robot as an input/output device during vCBT for young children diagnosed with OCD, and we introduce a stationary table-top koala robot for this application. We further describe the first of three planned participatory design phases: a co-design study comprising two sessions where child and adolescent psychotherapists role-played vCBT ERP exercises with this robot to help define its role.
DOI BibTeX

Haptic Intelligence Ph.D. Thesis Haptify: A Measurement-Based System for Quantifying the Quality of Haptic Interfaces Fazlollahi, F. University of Tübingen, Tübingen, Germany, March 2026, Department of Computer Science (Published)
Grounded force-feedback (GFF) devices, exoskeletons, and other haptic robots modulate human movement through carefully engineered mechanical, electrical, and computational designs. Given their significant societal potential and often high cost, it is essential to fairly and efficiently assess the quality of these intimate cyber-physical interfaces. However, existing device specifications and low-level performance metrics often fail to capture the nuanced qualities that expert users perceive during hands-on experimentation. To address this gap, this thesis introduces Haptify, a comprehensive benchmarking system that can thoroughly, fairly, and noninvasively evaluate GFF haptic devices. Haptify integrates multiple sensing modalities (a seven-camera optical motion-capture system, a custom-built 60-cm-square force plate, and an instrumented end-effector that can be adapted to different devices) to record the interaction between the human hand, the device, and the ground during both passive and active experiments. With this setup, users hold the device end-effector and move it through a series of carefully designed tasks while Haptify measures kinematic and kinetic responses. From this process, we establish six key ways to assess GFF device performance: workspace shape, global free-space forces, global free-space vibrations, local dynamic forces and torques, frictionless surface rendering, and stiffness rendering. These benchmarks enable systematic evaluation and comparison across devices. We first apply Haptify to benchmark two GFF devices produced by 3D Systems: the widely used Touch and the more expensive Touch X. Results reveal that the Touch X offers a slightly smaller workspace than the Touch, but it produces smaller and more predictable free-space forces, reduced vibrations, more consistent dynamic forces and torques, and higher-quality rendering of both frictionless surfaces and stiff virtual objects. To further validate and extend our approach, we conducted a user study with sixteen expert hapticians who used Haptify to evaluate four commercial GFF devices: Novint Falcon, Force Dimension Omega.3, Touch, and Touch X. Experts tested the devices in unpowered mode and across five representative virtual benchmark environments, providing extensive quantitative ratings and qualitative feedback. We distilled recurring themes from their input and analyzed correlations between expert opinions and sensor-based measurements. Our findings show that expert judgments of fundamental haptic quality indicators align closely with the metrics derived from Haptify. Moreover, a device's performance both unpowered and in active benchmarks can be used to predict its suitability for more complex applications, such as teleoperated surgery. By linking expert assessments with external measurement data, this thesis establishes a combined qualitative-quantitative framework for benchmarking haptic robots. This approach not only enables fair comparison across diverse devices but also draws a direct connection between objective measurements and the subjective expertise of experienced hapticians. In doing so, it lays the foundation for more rigorous, transparent, and application-relevant evaluation of haptic technologies.
BibTeX
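
As a flavor of the benchmark style, one of the six measures, global free-space forces, amounts to summarizing the unwanted force a transparent device outputs while rendering empty space; the summary statistics below are illustrative, with Haptify's exact definitions given in the thesis.
```python
import numpy as np

def free_space_force_summary(forces):
    """forces: (T, 3) array of measured end-effector forces in newtons,
    recorded while the device renders free space (ideal output: zero)."""
    mags = np.linalg.norm(forces, axis=1)
    return {"mean_N": float(mags.mean()),
            "rms_N": float(np.sqrt((mags ** 2).mean())),
            "p95_N": float(np.percentile(mags, 95))}
```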

Haptic Intelligence Miscellaneous Rendering Forces with a Modular Cable System, Motors, and Brakes Bartels, J. U., Achberger, A., Kuchenbecker, K. J., Sedlmair, M. Extended abstract (3 pages) presented at the German Robotics Conference (GRC), Cologne, Germany, March 2026 (Published)
We describe the hardware design, force-rendering approach, and evaluation of a new reconfigurable haptic interface consisting of a network of hybrid motor-brake actuation modules that apply forces via cables. Each module contains both a motor and a brake, enabling it to smoothly render active forces up to 6 N using its motor and collision forces up to 186 N using its passive one-way brake. The modular design, meanwhile, allows the system to deliver rich haptic feedback in a flexible number of DoF and widely ranging configurations.
BibTeX
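
A hybrid module like this needs logic that routes each commanded cable tension to the motor or the brake; the sketch below uses the force limits quoted in the abstract, but the switching rule itself is a guess at one plausible scheme, not the authors' controller.
```python
MOTOR_LIMIT_N = 6.0    # active force via the motor (from the abstract)
BRAKE_LIMIT_N = 186.0  # passive one-way braking force (from the abstract)

def allocate_tension(commanded_n, cable_paying_out):
    """Return (motor_force, brake_force) for one module. The brake is
    passive and one-way: it can only resist the cable being pulled out."""
    if cable_paying_out and commanded_n > MOTOR_LIMIT_N:
        return 0.0, min(commanded_n, BRAKE_LIMIT_N)   # collision-like load
    return min(commanded_n, MOTOR_LIMIT_N), 0.0       # active rendering
```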

Haptic Intelligence Dynamic Locomotion Ph.D. Thesis The Human Leg Catapult: Biological Mechanisms for Walking Gait Replicated in the EcoWalker Robot Kiss, B. University of Stuttgart, Stuttgart, Germany, March 2026, Faculty of Civil and Environmental Engineering (Published)
Humanoid robots and assistive devices have yet to match the efficiency and adaptability of able-bodied human walking in challenging environments. To bridge this performance gap, my projects explored the underlying mechanisms of human locomotion, focusing on the ankle push-off. Ankle push-off has a prominent role in walking due to its high-power output at the end of the stance phase, and due to the impact of its timing on the adaptability to diverse environments. The human leg catapult analogy provides a framework for the projects to understand and replicate the complex biological mechanisms that govern human walking gait. As a platform for the replication, the human-like bipedal EcoWalker robot was developed from version 1 to 3 in three consecutive projects, with iterative design and control updates tailored to each project's goals. Our findings provide insights into the separate roles of mono- and biarticular muscle-tendon units in the human leg catapult, while we also show functional details of the human leg catapult release mechanism through five distinct release processes on the EcoWalker robot. Utilizing the robot in the projects ensures that our findings are relevant to practical applications, allowing humanoid robot and assistive device developers to build on our insights, potentially reducing the performance gap in efficiency and adaptability between able-bodied human walking and artificial walking.
BibTeX

Haptic Intelligence Robotic Materials Medical Systems Article Functional Gradients Facilitate Tactile Sensing in Elephant Whiskers Schulz, A. K., Kaufmann, L. V., Smith, L. T., Philip, D. S., David, H., Lazovic, J., Brecht, M., Richter, G., Kuchenbecker, K. J. Science, 391(6786):712-718, February 2026, Lena V. Kaufmann and Lawrence T. Smith contributed equally to this work (Published)
Keratin composites enable animals to hike with hooves, fly with feathers, and sense with skin. Mammalian whiskers are elongated keratin rods attached to tactile skin structures that extend the animal's sensory volume. We investigated the whiskers that cover Asian elephant (Elephas maximus) trunks and found that they are geometrically and mechanically tailored to facilitate tactile perception by encoding contact location in the amplitude and frequency of the vibrotactile signal felt at the whisker base. Elephant whiskers emerge from armored trunk skin and shift from a thick, circular, porous, stiff base to a thin, ovular, dense, soft tip. These functional gradients of geometry, porosity, and stiffness independently tune the neuromechanics of elephant trunk touch to facilitate highly dexterous manipulation while ensuring whisker durability.
MPI-IS News Article YouTube Video Highlight Whisker Simulation Toolkit Edmond Data Repository Download Paper for Free Press Coverage DOI BibTeX

Physical Intelligence Article Optoacoustically augmented magnetic guidewire for radiation-free minimally invasive therapies Wang, F., Bao, X., Yildiz, E., Yu, Y., Deán-Ben, X. L., Kang, W., Zhang, S., Sheehan, D., Soon, R. H., Zinnanti, J., Sitti, M. Science Advances, 12:eaea0201, February 2026 (Published)
Endovascular interventions are essential for treating cerebrovascular diseases, yet their monitoring methods commonly rely on ionizing radiation and contrast agents, posing unnecessary risks to patients and clinicians. We present a multifunctional optoacoustically augmented magnetic guidewire (OptoMaG) that integrates optoacoustic imaging with magnetic navigation to enable radiation-free, image-guided interventions. The ~250-micrometer flexible guidewire incorporates a 460-nanometer luminescent core with an enhanced optoacoustic signature and a FePt magnetic tip for precise, steerable control. Proof-of-concept studies show that OptoMaG can be actively navigated with external magnetic fields to traverse a 3D human-scale cerebrovascular phantom and accurately reach target brain sites. Beyond navigation, the FePt tip enables localized thermal ablation under remote radiofrequency stimulation, highlighting its theranostic potential for tumor treatment. In addition, OptoMaG functions as a light source for photodynamic therapy, selectively activating photosensitizers to destroy tumor cells while preserving healthy tissue. Collectively, OptoMaG provides a safe, radiation-free platform merging real-time navigation with targeted therapeutic capabilities.
DOI URL BibTeX

Haptic Intelligence Ph.D. Thesis Modeling, Fabricating, and Evaluating Synergistic Soft-Rigid Actuators Gertler, I. University of Stuttgart, Stuttgart, Germany, February 2026, Faculty of Engineering Design, Production Engineering and Automotive Engineering (Published)
Soft actuators offer lightweight, compliant, and safe alternatives to traditional mechanisms, but they often incur complicated actuation schemes, bulky support systems, and limited functionality when made solely from soft materials. Soft-rigid designs that integrate rigid elements into primarily soft bodies are common, yet the potential of those rigid parts to shape actuation behavior without compromising the overall softness remains underexplored, and fabrication practices often lack reproducibility. This thesis presents two case studies of synergistic hybrid actuation systems that utilize the complementary roles of soft and rigid components to dictate temporal and spectral behavior in response to simple input commands. Between the soft and hard components, one is typically active, while the other is passive. The first case study implements a soft-active/rigid-passive approach for the medical robotics application of endoluminal locomotion. A thin hyperelastic balloon encased in an inextensible sleeve is coupled with a thicker, non-encased balloon on a single fluid supply to serve as front and rear anchors, respectively. Geometry and material selection reshape the pressure-stretch response so the rear anchor inflates and deflates before the front anchor, enabling asymmetric sequencing useful for peristaltic locomotion inside a lumen. Numerical simulation and experiments validate the characteristic curves of dip-molded balloons and alternating anchoring in rigid tubes. The approach can be extended to generate actuation patterns for sequential haptic feedback and other robotic applications. The second case study applies a soft-passive/rigid-active strategy in the domain of fingertip haptic actuation. A dip-molded silicone sheath with embedded miniature magnets, excited by a single air-core coil, produces localized, rich vibrotactile feedback. Simulations, mechanical measurements, and user experiments with a single-magnet design show consistent frequency-dependent behavior and strong perceptual salience. In follow-on work, various dual-magnet arrangements were also simulated, fabricated, and thoroughly evaluated. Classification tests indicate that frequency content is more important for perception than magnet orientation, while a realism-rating experiment supports the feasibility of simple audio-driven commands for realistic haptic feedback. The device is demonstrated on the fingertip in virtual reality and could be adapted for other body locations for navigation, rehabilitation, or related applications. Together, these studies provide design rules, a simulation-fabrication-validation workflow, and reproducible fabrication practices for soft-rigid hybrid actuators that realize desired mechanical outputs from minimal actuation commands. The methods and findings generalize to other soft actuators and have potential applications in domains such as medical devices, wearable technologies, and soft sensing.
BibTeX
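
The sequencing trick relies on the classic non-monotonic pressure-stretch curve of a thin hyperelastic balloon; for a spherical neo-Hookean membrane this curve has a closed form, sketched below with hypothetical material parameters (the thesis balloons are dip-molded and partly sleeve-encased, which reshapes this curve).
```python
import numpy as np

def balloon_pressure(stretch, mu=100e3, t0=0.5e-3, r0=5e-3):
    """Inflation pressure (Pa) of a thin spherical neo-Hookean balloon:
    P = 2*mu*(t0/r0) * (stretch**-1 - stretch**-7). mu: shear modulus (Pa),
    t0: initial wall thickness (m), r0: initial radius (m)."""
    lam = np.asarray(stretch, dtype=float)
    return 2 * mu * (t0 / r0) * (lam ** -1 - lam ** -7)

# The curve peaks near stretch = 7**(1/6), about 1.38, which is why two
# balloons with different geometry can inflate in sequence on one supply.
```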

Perceiving Systems Ph.D. Thesis From Perception to Actions: Autonomous Exploration, Synthetic Data, and Dynamic Worlds Bonetto, E. January 2026 (Published)
This thesis explores innovative methods and frameworks to enhance intelligent systems' visual perception capabilities. Vision is the primary means by which many animals perceive, understand, learn, reason about, and interact with the world to achieve their goals. Unlike animals, intelligent systems must acquire these capabilities by processing raw visual data captured by cameras using computer vision and deep learning. First, we consider a crucial aspect of visual perception in intelligent systems: understanding the structure and layout of the environment. To enable applications such as object interaction or extended reality in previously unseen spaces, these systems are often required to estimate their own motion. When operating in novel environments, they must also construct a map of the space. Together, these requirements form the essence of the Simultaneous Localization and Mapping (SLAM) problem. However, pre-mapping environments can be impractical, costly, and unscalable in scenarios like disaster response or home automation. This makes it essential to develop robots capable of autonomously exploring and mapping unknown areas, a process known as Active SLAM. Active SLAM typically involves a multi-step process in which the robot acts on the available information to decide the next best actions. The goal is to autonomously and efficiently explore environments without using prior information. Despite an extensive history, Active SLAM methods have focused only on short- or long-term objectives, without considering the totality of the process or adapting to ever-changing states. Addressing these gaps, we introduce iRotate to capitalize on continuous information-gain prediction. Distinct from prevailing approaches, iRotate constantly (pre)optimizes camera viewpoints, acting on i) long-term, ii) short-term, and iii) real-time objectives. By doing this, iRotate significantly reduces energy consumption and localization errors, thus diminishing the exploration effort, a substantial leap in efficiency and effectiveness. iRotate, like many other SLAM approaches, leverages the assumption of operating in a static environment. Dynamic components in the scene significantly impact SLAM performance in the localization, place recognition, and optimization steps, hindering the widespread adoption of autonomous robots. This stems from the difficulties of collecting diverse ground-truth information in the real world and the long-standing limitations of simulation tools. Testing directly in the real world is costly and risky without prior simulation validation. Datasets, instead, are inherently static and non-interactive, making them unsuitable for developing autonomous approaches. Furthermore, existing simulation tools often lack the visual realism and flexibility to create and control fully customized experiments to bridge the gap between simulation and the real world. This thesis addresses the challenges of obtaining ground-truth data and simulating dynamic environments by introducing the GRADE framework. Through a photorealistic rendering engine, we enable online and offline testing of robotic systems and the generation of richly annotated synthetic ground-truth data. By ensuring flexibility and repeatability, we allow the extension of previous experiments through variations, for example, in scene content or sensor settings. Synthetic data can first be used to address several challenges in the context of Deep Learning (DL) approaches, e.g., mismatched data distribution between applications, the costs and limits of data collection procedures, and errors caused by incorrect or inconsistent labeling in training datasets. However, the gap between the real and simulated worlds often limits the direct use of synthetic data, making style transfer, adaptation techniques, or real-world information necessary. Here, we leverage the photorealism obtainable with GRADE to generate synthetic data and overcome these issues. First, since humans are significant sources of dynamic behavior in environments and the target of many applications, we focus on their detection and segmentation. We train models on real, synthetic, and mixed datasets, and show that using only synthetic data can lead to state-of-the-art performance in indoor scenarios. Then, we leverage GRADE to benchmark several Dynamic Visual SLAM methods. These often rely on semantic segmentation and optical flow techniques to identify moving objects and exclude their visual features from the pose estimation and optimization processes. Our evaluations show how they tend to reject too many features, leading to failures in accurately and fully tracking camera trajectories. Surprisingly, we observed low tracking rates not only on simulated sequences but also in real-world datasets. Moreover, we also show that the performance of the segmentation and detection models used is not always positively correlated with that of the Dynamic Visual SLAM methods. These failures are mainly due to incorrect estimations, crowded scenes, and not considering the different motion states that objects can have. Addressing this, we introduce DynaPix. This Dynamic Visual SLAM method estimates per-pixel motion probabilities and incorporates them into new, enhanced pose estimation and optimization processes within the SLAM backend, resulting in longer tracking times and lower trajectory errors. Finally, we use GRADE to address the challenge of limited and inaccurate annotations of wild zebras, particularly for their detection and pose estimation when observed by unmanned aerial vehicles. Leveraging the flexibility of GRADE, we introduce ZebraPose, the first full top-down synthetic-to-real detection and 2D pose estimation method. Unlike previous approaches, ZebraPose demonstrates that both tasks can be performed using only synthetic data, eliminating the need for costly data collection campaigns, time-consuming annotation procedures, or synthetic-to-real transfer techniques. Ultimately, this thesis demonstrates how combining perception with action can overcome critical limitations in robotics and environmental perception, thereby advancing the deployment of intelligent and autonomous systems for real-world applications. Through innovations like iRotate, GRADE, and ZebraPose, it paves the way for more robust, flexible, and efficient intelligent systems capable of navigating dynamic environments.
Thesis: From Perception to Actions BibTeX
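
DynaPix's key move, soft per-pixel motion probabilities instead of hard feature rejection, can be illustrated as below; the linear down-weighting is an assumed instantiation, not the method's actual formulation.
```python
import numpy as np

def weight_features_by_motion(features_uv, motion_prob_map):
    """features_uv: (N, 2) integer pixel coordinates of visual features;
    motion_prob_map: (H, W) per-pixel probability of belonging to a moving
    object. Returns soft weights for the pose optimization instead of a
    hard keep/reject decision."""
    u = features_uv[:, 0].astype(int)
    v = features_uv[:, 1].astype(int)
    p_move = motion_prob_map[v, u]
    return 1.0 - p_move
```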

Perceiving Systems Ph.D. Thesis Physics-Informed Modeling of Dynamic Humans and Their Interactions Tripathi, S. January 2026 (Published)
Building convincing digital humans is central to the vision of shared virtual worlds for AR, VR, and telepresence. Yet, despite rapid progress in 3D vision, today’s virtual humans often fall into a physical "uncanny valley": bodies float above or penetrate objects, motions ignore balance and biomechanics, and human-object interactions miss the rich contact patterns that make behavior look real. Enforcing physics through simulation is possible, but remains too slow, restrictive, and brittle for real-world, in-the-wild settings. This thesis argues that physical realism does not require full simulation. Instead, it can emerge from the same principles humans rely on every day: intuitive physics and contact. Inspired by insights from biomechanics and cognitive science, I present a unified framework that embeds these ideas directly into learning-based 3D human modeling. Concretely, I present a suite of methods that bridge the gap between 3D human reconstruction and physical plausibility. I first introduce IPMAN, which incorporates differentiable biomechanical cues, such as center of mass and center of pressure, to produce stable, balanced, and grounded static poses. I then extend this framework to dynamic motion with HUMOS, a shape-conditioned motion generation model that accounts for how individual physiology influences movement, without requiring paired training data. Moving beyond locomotion, I address complex human-object interactions with DECO, a 3D contact detector that estimates dense, vertex-level contact across the full body surface. Finally, I present PICO, which establishes contact correspondences between the human body and arbitrary objects to recover full 3D interactions from single images. Together, these contributions bring physics-aware human modeling closer to practical deployment. The result is a step toward digital humans that not only look right, but move and interact with the world in ways that feel intuitively real.
Thesis BibTeX
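
The balance cues behind IPMAN can be pictured with the textbook static-stability test: the ground projection of the center of mass must fall inside the support polygon. The check below is a non-differentiable illustration (IPMAN itself uses differentiable center-of-mass and center-of-pressure terms), with a y-up axis convention assumed.
```python
import numpy as np
from scipy.spatial import ConvexHull
from matplotlib.path import Path

def is_statically_stable(com_xyz, contact_points_xyz):
    """True if the center of mass, projected to the ground plane, lies inside
    the convex hull of the contact points (needs >= 3 non-collinear contacts)."""
    contacts_2d = np.asarray(contact_points_xyz)[:, [0, 2]]  # drop the up axis
    hull = ConvexHull(contacts_2d)
    support_polygon = Path(contacts_2d[hull.vertices])
    com_2d = np.asarray(com_xyz)[[0, 2]]
    return bool(support_polygon.contains_point(com_2d))
```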

Physical Intelligence Article Perturbing dynamics of active emulsions and their collectives Khan, M. T. A., Gardi, G., Soon, R. H., Zhang, M., Sitti, M. Matter, 9:00, January 2026 (Published)
Controlling fluidic flows in active droplets is crucial in developing intelligent models to understand and mimic single-celled microorganisms. Typically, these fluidic flows are affected by the interfacial dynamics of chemical agents. We found that these flows can be reconfigured by the mere presence of an anisotropic solid boundary embedded within active droplets. Spontaneous fluidic flows dynamically orient an embedded magnetic cluster, and the magnetic cluster, when realigned, causes these flows to reorient, thus providing control over the propulsion dynamics of chemotactic emulsions. When continuously perturbed, achiral emulsions exhibit emergent chiral motion with rotating fluidic flows. Such solid-fluid interactions occur in a number of self-propelling oil droplet systems, thereby enabling control over the emergent collective behaviors of chemically distinct active droplets.
DOI URL BibTeX

Haptic Intelligence Robotics Article Open-Source Hardware and Software Platform for Vibrotactile Motion Guidance Rokhmanova, N., Martus, J., Faulkner, R., Fiene, J., Kuchenbecker, K. J. Device, 4(1):100966, January 2026 (Published)
Vibrotactile feedback can enhance motor learning, sports training, and rehabilitation, but a lack of standardized tools limits its adoption. We developed a modular open-source hardware and software platform for delivering vibrotactile feedback that is spatially and temporally precise. The prototype device uses medical adhesive, linear resonant actuators (LRAs), and rigid 3D-printed components to standardize skin contact, avoiding the variability introduced by straps. The platform was validated by using the device's built-in accelerometers to fit a dynamic model of mechanical actuator vibration and examine how the anatomical site and body composition affect perceived vibration strength in 20 participants. Then, the platform was integrated with an optical motion-capture system to teach six participants a toe-in gait, showing potential for real-time, tailored clinical studies. By openly sharing the platform's hardware and software, we provide tools for delivering standardized vibrations and benchmarking feedback strategies in diverse applications.
DOI BibTeX
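
An LRA driven below or above its resonance behaves like a second-order mass-spring-damper, which is the kind of dynamic model the platform fits from its built-in accelerometers; the resonance frequency and damping ratio below are typical-sounding placeholders, not fitted values from the paper.
```python
import numpy as np

def lra_steady_state(t, f_drive, f_res=175.0, zeta=0.05, gain=1.0):
    """Steady-state displacement of a linear resonant actuator modeled as a
    driven second-order system; amplitude peaks when f_drive is near f_res."""
    r = f_drive / f_res                                    # frequency ratio
    amplitude = gain / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
    return amplitude * np.sin(2 * np.pi * f_drive * np.asarray(t))
```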

Social Foundations of Computation Miscellaneous Scaling Open-Ended Reasoning To Predict the Future Chandak, N., Goel, S., Prabhu, A., Hardt, M., Geiping, J. January 2026 (Submitted)
High-stakes decision making involves reasoning under uncertainty about the future. In this work, we train language models to make predictions on open-ended forecasting questions. To scale up training data, we synthesize novel forecasting questions from global events reported in daily news, using a fully automated, careful curation recipe. We train the Qwen3 thinking models on our dataset, OpenForesight. To prevent leakage of future information during training and evaluation, we use an offline news corpus, both for data generation and retrieval in our forecasting system. Guided by a small validation set, we show the benefits of retrieval and of an improved reward function for reinforcement learning (RL). Once we obtain our final forecasting system, we perform held-out testing from May to August 2025. Our specialized model, OpenForecaster 8B, matches much larger proprietary models, with our training improving the accuracy, calibration, and consistency of predictions. We find calibration improvements from forecasting training generalize across popular benchmarks. We open-source all our models, code, and data to make research on language model forecasting broadly accessible.
arXiv BibTeX
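
Calibration, one of the reported gains, is commonly scored with the Brier score on resolved binary questions; this standard metric is shown below as one plausible way to measure it, without implying it is the paper's exact choice.
```python
import numpy as np

def brier_score(predicted_probs, outcomes):
    """Mean squared error between forecast probabilities in [0, 1] and
    resolved binary outcomes in {0, 1}; lower is better, and 0.25 matches
    always predicting 0.5."""
    p = np.asarray(predicted_probs, dtype=float)
    y = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - y) ** 2))
```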

Social Foundations of Computation Conference Paper Train-before-Test Harmonizes Language Model Rankings Zhang, G., Dominguez-Olmedo, R., Hardt, M. The Fourteenth International Conference on Learning Representations (ICLR), oral, top 1.18%, January 2026 (Accepted)
Existing language model benchmarks provide contradictory model rankings, even for benchmarks that aim to capture similar skills. This dilemma of conflicting rankings hampers model selection, clouds model comparisons, and adds confusion to a growing ecosystem of competing models. Recent work attributed ranking disagreement to the phenomenon of training on the test task: As released, different models exhibit a different level of preparation for any given test task. A candidate solution to the problem is train-before-test: Give each model the same benchmark-specific finetuning before evaluation. Our primary contribution is a broad empirical evaluation of train-before-test across 24 benchmarks and 61 models. We show that train-before-test significantly improves ranking agreement consistently across all benchmarks. Whereas rankings have little external validity to start with, they enjoy a significant degree of external validity when applying train-before-test: Model rankings transfer gracefully from one benchmark to another. Even within the same model family, train-before-test reduces strong ranking disagreement to near-perfect agreement. In addition, train-before-test reduces the model-score matrix to essentially rank one, revealing new insights into the latent factors of benchmark performance. Our work supports the recommendation to make train-before-test a default component of LLM benchmarking.
arXiv BibTeX
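
The "essentially rank one" observation about the model-score matrix can be checked with a singular value decomposition; the energy-share statistic below is one simple way to quantify it and is not taken from the paper.
```python
import numpy as np

def top_singular_energy_share(score_matrix):
    """Fraction of squared spectral energy in the top singular value of a
    models x benchmarks score matrix; values near 1.0 mean the matrix is
    essentially rank one."""
    s = np.linalg.svd(np.asarray(score_matrix, dtype=float), compute_uv=False)
    return float(s[0] ** 2 / np.sum(s ** 2))
```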