
2019


Decoding subcategories of human bodies from both body- and face-responsive cortical regions

Foster, C., Zhao, M., Romero, J., Black, M. J., Mohler, B. J., Bartels, A., Bülthoff, I.

NeuroImage, 202(15):116085, November 2019 (article)

Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.
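The size-invariant decoding reported above can be illustrated by training a classifier on response patterns evoked at one image size and testing it on patterns from the other size. Below is a minimal sketch on synthetic voxel data with a nearest-centroid classifier; the numbers, variable names, and data are hypothetical and do not reproduce the paper's actual fMRI analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel patterns: 20 trials x 50 voxels per condition.
# A size-invariant "sex" signal lives on a fixed direction in voxel space.
n_trials, n_vox = 20, 50
signal = rng.normal(size=n_vox)

def make_patterns(label_sign, size_offset):
    noise = rng.normal(scale=2.0, size=(n_trials, n_vox))
    return noise + label_sign * signal + size_offset

# Train on small images, test on large images (cross-size decoding).
train_m = make_patterns(+1, 0.0)   # male bodies, small
train_f = make_patterns(-1, 0.0)   # female bodies, small
test_m  = make_patterns(+1, 0.5)   # male bodies, large
test_f  = make_patterns(-1, 0.5)   # female bodies, large

# Nearest-centroid classifier fit at one size...
cm, cf = train_m.mean(0), train_f.mean(0)

def predict(x):
    return +1 if np.linalg.norm(x - cm) < np.linalg.norm(x - cf) else -1

# ...evaluated at the other size; above-chance accuracy here indicates
# a code that generalizes across image size.
test_x = np.vstack([test_m, test_f])
test_y = np.array([+1] * n_trials + [-1] * n_trials)
acc = np.mean([predict(x) == y for x, y in zip(test_x, test_y)])
print(f"cross-size decoding accuracy: {acc:.2f}")
```

Real analyses would use cross-validation and a proper linear classifier, but the train-at-one-size, test-at-the-other logic is the same.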

ps

paper pdf DOI [BibTex]


Learning Multi-Human Optical Flow

Ranjan, A., Hoffmann, D. T., Tzionas, D., Tang, S., Romero, J., Black, M. J.

arXiv preprint arXiv:1910.1166, November 2019 (article)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.
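Accuracy on held-out optical flow data is conventionally measured with the average endpoint error (EPE). A small sketch of that standard metric on toy data; the function name and example values are ours, not from the paper:

```python
import numpy as np

def endpoint_error(flow_pred, flow_gt):
    """Mean Euclidean distance between predicted and true flow vectors.

    flow_pred, flow_gt: arrays of shape (H, W, 2) holding (u, v) per pixel.
    """
    return np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1))

# Toy example: ground truth is uniform rightward motion of 2 px.
H, W = 4, 4
gt = np.zeros((H, W, 2))
gt[..., 0] = 2.0

pred = gt + 0.5  # prediction off by (0.5, 0.5) at every pixel
print(endpoint_error(pred, gt))  # sqrt(0.5**2 + 0.5**2) ~ 0.707
```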

ps

Paper poster link (url) [BibTex]


Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article)

Abstract
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.

ps

pdf DOI Project Page [BibTex]


3D Morphable Face Models - Past, Present and Future

Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., Bolkart, T., Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.

arXiv preprint arXiv:1909.01815, September 2019 (article)

Abstract
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation,and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.

ps

paper project page [BibTex]


Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences

Hesse, N., Pujades, S., Black, M., Arens, M., Hofmann, U., Schroeder, S.

Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)

Abstract
Statistical models of the human body surface are generally learned from thousands of high-quality 3D scans in predefined poses to cover the wide variety of human body shapes and articulations. Acquisition of such data requires expensive equipment, calibration procedures, and is limited to cooperative subjects who can understand and follow instructions, such as adults. We present a method for learning a statistical 3D Skinned Multi-Infant Linear body model (SMIL) from incomplete, low-quality RGB-D sequences of freely moving infants. Quantitative experiments show that SMIL faithfully represents the RGB-D data and properly factorizes the shape and pose of the infants. To demonstrate the applicability of SMIL, we fit the model to RGB-D sequences of freely moving infants and show, with a case study, that our method captures enough motion detail for General Movements Assessment (GMA), a method used in clinical practice for early detection of neurodevelopmental disorders in infants. SMIL provides a new tool for analyzing infant shape and movement and is a step towards an automated system for GMA.

ps

pdf Journal DOI [BibTex]


Perceptual Effects of Inconsistency in Human Animations

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person’s movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. From these data, we estimated both the kinematics of the actions as well as the performer’s individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. Using these stimuli we conducted three experiments in an immersive virtual reality environment. First, a group of participants detected which of two stimuli was inconsistent. Performance was very low, and results were only marginally significant. Next, a second group of participants rated perceived attractiveness, eeriness, and humanness of consistent and inconsistent stimuli, but these judgements of animation characteristics were not affected by consistency of the stimuli. Finally, a third group of participants rated properties of the objects rather than of the performers. Here, we found strong influences of shape-motion inconsistency on perceived weight and thrown distance of objects. This suggests that the visual system relies on its knowledge of shape and motion and that these components are assimilated into an altered perception of the action outcome. We propose that the visual system attempts to resist inconsistent interpretations of human animations. 
Actions involving object manipulations present an opportunity for the visual system to reinterpret the introduced inconsistencies as a change in the dynamics of an object rather than as an unexpected combination of body shape and body motion.

ps

publisher pdf DOI [BibTex]


X-ray Optics Fabrication Using Unorthodox Approaches

Sanli, U., Baluktsian, M., Ceylan, H., Sitti, M., Weigand, M., Schütz, G., Keskinbora, K.

Bulletin of the American Physical Society, APS, 2019 (article)

mms pi

[BibTex]


Microrobotics and Microorganisms: Biohybrid Autonomous Cellular Robots

Alapan, Y., Yasa, O., Yigit, B., Yasa, I. C., Erkoc, P., Sitti, M.

Annual Review of Control, Robotics, and Autonomous Systems, 2019 (article)

pi

[BibTex]


Tailored Magnetic Springs for Shape-Memory Alloy Actuated Mechanisms in Miniature Robots

Woodward, M. A., Sitti, M.

IEEE Transactions on Robotics, 35, 2019 (article)

Abstract
Animals can incorporate large numbers of actuators because of the characteristics of muscles, whereas robots cannot, as typical motors tend to be large, heavy, and inefficient. However, shape-memory alloys (SMA), materials that contract during heating because of a change in their crystal structure, provide another option. SMA, though, is unidirectional and therefore requires an additional force to reset (extend) the actuator, which is typically provided by springs or antagonistic actuation. These strategies, however, tend to limit the actuator's work output and functionality, as their force-displacement relationships typically produce increasing resistive force with limited variability. In contrast, magnetic springs (composed of permanent magnets, where the interaction force between magnets mimics a spring force) have much more variable force-displacement relationships and scale well with SMA. However, as of yet, no method for designing magnetic springs for SMA actuators has been demonstrated. Therefore, in this paper, we present a new methodology to tailor magnetic springs to the characteristics of these actuators, with experimental results both for the device and robot-integrated SMA actuators. We found magnetic building blocks, based on sets of permanent magnets, which are well suited to SMAs and have the potential to incorporate features such as holding force, state transitioning, friction minimization, auto-alignment, and self-mounting. We show magnetic springs that vary by more than 3 N over 750 µm and two SMA-actuated devices that allow the MultiMo-Bat to reach heights of up to 4.5 m without, and 3.6 m with, integrated gliding airfoils. Our results demonstrate the potential of this methodology to add previously impossible functionality to smart material actuators.
We anticipate this methodology will inspire broader consideration of the use of magnetic springs in miniature robots and further study of the potential of tailored magnetic springs throughout mechanical systems.

pi

DOI [BibTex]


Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Biopsy

Son, D., Gilbert, H., Sitti, M.

Soft Robotics, Mary Ann Liebert, Inc., 2019 (article)

pi

[BibTex]


Thrust and Hydrodynamic Efficiency of the Bundled Flagella

Danis, U., Rasooli, R., Chen, C., Dur, O., Sitti, M., Pekkan, K.

Micromachines, 10, 2019 (article)

pi

[BibTex]


The near and far of a pair of magnetic capillary disks

Koens, L., Wang, W., Sitti, M., Lauga, E.

Soft Matter, 2019 (article)

pi

[BibTex]


Multifarious Transit Gates for Programmable Delivery of Bio‐functionalized Matters

Hu, X., Torati, S. R., Kim, H., Yoon, J., Lim, B., Kim, K., Sitti, M., Kim, C.

Small, Wiley Online Library, 2019 (article)

pi

[BibTex]


Multi-functional soft-bodied jellyfish-like swimming

Ren, Z., Hu, W., Dong, X., Sitti, M.

Nature communications, 10, 2019 (article)

pi

[BibTex]


Welcome to Progress in Biomedical Engineering

Sitti, M.

Progress in Biomedical Engineering, 1, IOP Publishing, 2019 (article)

pi

[BibTex]


The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements

Pujades, S., Mohler, B., Thaler, A., Tesch, J., Mahmood, N., Hesse, N., Bülthoff, H. H., Black, M. J.

IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article)

Abstract
Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating “The Virtual Caliper”, which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.
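The paper's finding that the chosen distance measurements are linearly related to SMPL body shape suggests a simple least-squares fit from measurements to shape coefficients. Below is a sketch on synthetic data; the dimensions, noise level, and data are hypothetical and stand in for real measurement/shape pairs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 6 controller-based measurements per subject,
# 10 shape coefficients (SMPL betas), 200 training subjects.
n_subjects, n_meas, n_betas = 200, 6, 10
true_W = rng.normal(size=(n_meas + 1, n_betas))   # linear map incl. bias row

measurements = rng.normal(size=(n_subjects, n_meas))
X = np.hstack([measurements, np.ones((n_subjects, 1))])  # append bias column
betas = X @ true_W + 0.01 * rng.normal(size=(n_subjects, n_betas))

# Recover the measurement-to-shape map with ordinary least squares.
W_hat, *_ = np.linalg.lstsq(X, betas, rcond=None)

# Predict shape coefficients for a new user's measurements.
new_meas = rng.normal(size=n_meas)
pred_betas = np.append(new_meas, 1.0) @ W_hat
print(np.max(np.abs(W_hat - true_W)))  # small: the linear map is recovered
```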

ps

Project Page IEEE Open Access IEEE Open Access PDF DOI [BibTex]


Mechanics of a pressure-controlled adhesive membrane for soft robotic gripping on curved surfaces

Song, S., Drotlef, D., Paik, J., Majidi, C., Sitti, M.

Extreme Mechanics Letters, Elsevier, 2019 (article)

pi

[BibTex]


Graphene oxide synergistically enhances antibiotic efficacy in Vancomycin-resistant Staphylococcus aureus

Singh, V., Kumar, V., Kashyap, S., Singh, A. V., Kishore, V., Sitti, M., Saxena, P. S., Srivastava, A.

ACS Applied Bio Materials, ACS Publications, 2019 (article)

pi

[BibTex]


Review of emerging concepts in nanotoxicology: opportunities and challenges for safer nanomaterial design

Singh, A. V., Laux, P., Luch, A., Sudrik, C., Wiehr, S., Wild, A., Santamauro, G., Bill, J., Sitti, M.

Toxicology Mechanisms and Methods, 2019 (article)

pi

[BibTex]


Multifunctional and biodegradable self-propelled protein motors

Pena-Francesch, A., Giltinan, J., Sitti, M.

Nature communications, 10, Nature Publishing Group, 2019 (article)

pi

[BibTex]


Cohesive self-organization of mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

arXiv preprint arXiv:1907.05856, 2019 (article)

pi

[BibTex]


Automated Generation of Reactive Programs from Human Demonstration for Orchestration of Robot Behaviors

Berenz, V., Bjelic, A., Mainprice, J.

ArXiv, 2019 (article)

Abstract
Social robots or collaborative robots that have to interact with people in a reactive way are difficult to program. This difficulty stems from the different skills required by the programmer: to provide an engaging user experience the behavior must include a sense of aesthetics while robustly operating in a continuously changing environment. The Playful framework allows composing such dynamic behaviors using a basic set of action and perception primitives. Within this framework, a behavior is encoded as a list of declarative statements corresponding to high-level sensory-motor couplings. To facilitate non-expert users to program such behaviors, we propose a Learning from Demonstration (LfD) technique that maps motion capture of humans directly to a Playful script. The approach proceeds by identifying the sensory-motor couplings that are active at each step using the Viterbi path in a Hidden Markov Model (HMM). Given these activation patterns, binary classifiers called evaluations are trained to associate activations to sensory data. Modularity is increased by clustering the sensory-motor couplings, leading to a hierarchical tree structure. The novelty of the proposed approach is that the learned behavior is encoded not in terms of trajectories in a task space, but as couplings between sensory information and high-level motor actions. This provides advantages in terms of behavioral generalization and reactivity displayed by the robot.
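The active sensory-motor coupling at each step is identified via the Viterbi path in an HMM. Below is a compact log-space Viterbi decoder on a toy two-state model; the transition and emission parameters are illustrative only, not learned from the paper's demonstrations:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden-state sequence for a discrete observation sequence.

    log_pi: (S,) log initial probs; log_A: (S, S) log transition probs;
    log_B: (S, O) log emission probs; obs: list of observation ids.
    """
    S, T = len(log_pi), len(obs)
    delta = np.zeros((T, S))          # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]                  # backtrack
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two "couplings"; each observation symbol strongly indicates one of them.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
obs = [0, 0, 0, 1, 1, 1]
print(viterbi(log_pi, log_A, log_B, obs))  # [0, 0, 0, 1, 1, 1]
```

The decoded path switches state exactly once, where the observations change, mirroring how the approach segments a demonstration into activation patterns.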

am

Support Video link (url) [BibTex]


Mobile microrobots for active therapeutic delivery

Erkoc, P., Yasa, I. C., Ceylan, H., Yasa, O., Alapan, Y., Sitti, M.

Advanced Therapeutics, Wiley Online Library, 2019 (article)

pi

[BibTex]


Shape-encoded dynamic assembly of mobile micromachines

Alapan, Y., Yigit, B., Beker, O., Demirörs, A. F., Sitti, M.

Nature Materials, 18, 2019 (article)

pi

[BibTex]


Microfluidics Integrated Lithography‐Free Nanophotonic Biosensor for the Detection of Small Molecules

Sreekanth, K. V., Sreejith, S., Alapan, Y., Sitti, M., Lim, C. T., Singh, R.

Advanced Optical Materials, 2019 (article)

pi

[BibTex]


Bio-inspired robotic collectives

Sitti, M.

Nature, 567, pages: 314-315, Macmillan Publishers Ltd., London, England, 2019 (article)

pi

[BibTex]


Peptide-Induced Biomineralization of Tin Oxide (SnO2) Nanoparticles for Antibacterial Applications

Singh, A. V., Jahnke, T., Xiao, Y., Wang, S., Yu, Y., David, H., Richter, G., Laux, P., Luch, A., Srivastava, A., Saxena, P. S., Bill, J., Sitti, M.

Journal of nanoscience and nanotechnology, 19, American Scientific Publishers, 2019 (article)

pi

[BibTex]


Electromechanical actuation of dielectric liquid crystal elastomers for soft robotics

Davidson, Z., Shahsavan, H., Guo, Y., Hines, L., Xia, Y., Yang, S., Sitti, M.

Bulletin of the American Physical Society, APS, 2019 (article)

pi

[BibTex]


Learning to Navigate Endoscopic Capsule Robots

Turan, M., Almalioglu, Y., Gilbert, H. B., Mahmood, F., Durr, N. J., Araujo, H., Sarı, A. E., Ajay, A., Sitti, M.

IEEE Robotics and Automation Letters, 4, 2019 (article)

pi

[BibTex]

2008


Learning to control in operational space

Peters, J., Schaal, S.

International Journal of Robotics Research, 27, pages: 197-212, 2008, clmc (article)

Abstract
One of the most general frameworks for phrasing control problems for complex, redundant robots is operational space control. However, while this framework is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In this paper, we suggest a learning approach for operational space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees-of-freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component for our work is based on the insight that many operational space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three-degrees-of-freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high-degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
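The expectation-maximization policy search over an immediate reward can be illustrated with a reward-weighted update of a one-dimensional Gaussian policy. This toy sketch is our own construction (not the paper's robot experiments); the reward function and all constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0   # parameters of a Gaussian policy over actions
target = 2.0           # action maximizing the toy immediate reward

for _ in range(20):
    # E-step: sample actions, score them with the immediate reward.
    actions = mu + sigma * rng.normal(size=200)
    rewards = np.exp(-0.5 * (actions - target) ** 2)
    w = rewards / rewards.sum()
    # M-step: reward-weighted mean and std of the sampled actions.
    mu = np.sum(w * actions)
    sigma = np.sqrt(np.sum(w * (actions - mu) ** 2)) + 1e-3  # floor on sigma

print(mu)  # the policy mean migrates toward the optimal action near 2.0
```

Each iteration maximizes the expected reward-weighted log-likelihood of the sampled actions, which is the EM view of immediate-reward policy search.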

am ei

link (url) DOI [BibTex]



Enhanced Adhesion of PDMS Surfaces Functionalized by Poly(n-butyl acrylate) Brushes Inspired by Gecko Foot Hairs

Nese, A., Lee, H., Dong, H., Aksak, B., Cusick, B., Kowalewski, T., Matyjaszewski, K., Sitti, M.

Polymer Preprints, 49(2):107, 2008 (article)

pi

[BibTex]


Design and development of the lifting and propulsion mechanism for a biologically inspired water runner robot

Floyd, S., Sitti, M.

IEEE Transactions on Robotics, 24(3):698-709, IEEE, 2008 (article)

pi

[BibTex]


Control of Cell Behavior by Aligned Micro/Nanofibrous Biomaterial Scaffolds Fabricated by Spinneret-Based Tunable Engineered Parameters (STEP) Technique

Nain, A. S., Phillippi, J. A., Sitti, M., MacKrell, J., Campbell, P. G., Amon, C.

Small, 4(8):1153-1159, Wiley Online Library, 2008 (article)

pi

[BibTex]


Adaptation to a sub-optimal desired trajectory

M. Mistry, E. A. G. L. T. Y. S. S. M. K.

Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, clmc (article)

am

PDF [BibTex]


Rolling and spinning friction characterization of fine particles using lateral force microscopy based contact pushing

Sümer, B., Sitti, M.

Journal of Adhesion Science and Technology, 22(5-6):481-506, Taylor & Francis Group, 2008 (article)

pi

[BibTex]


Modeling the soft backing layer thickness effect on adhesion of elastic microfiber arrays

Long, R., Hui, C., Kim, S., Sitti, M.

Journal of Applied Physics, 104(4):044301, AIP, 2008 (article)

pi

Project Page [BibTex]


Cross-talk compensation in atomic force microscopy

Onal, C. D., Sümer, B., Sitti, M.

Review of scientific instruments, 79(10):103706, AIP, 2008 (article)

pi

[BibTex]


Operational space control: A theoretical and empirical comparison

Nakanishi, J., Cory, R., Mistry, M., Peters, J., Schaal, S.

International Journal of Robotics Research, 27(6):737-757, 2008, clmc (article)

Abstract
Dexterous manipulation with a highly redundant movement system is one of the hallmarks of human motor skills. From numerous behavioral studies, there is strong evidence that humans employ compliant task space control, i.e., they focus control only on task variables while keeping redundant degrees-of-freedom as compliant as possible. This strategy is robust towards unknown disturbances and simultaneously safe for the operator and the environment. The theory of operational space control in robotics aims to achieve similar performance properties. However, despite various compelling theoretical lines of research, advanced operational space control is hardly found in actual robotics implementations, in particular in new kinds of robots like humanoids and service robots, which would strongly profit from compliant dexterous manipulation. To analyze the pros and cons of different approaches to operational space control, this paper focuses on a theoretical and empirical evaluation of different methods that have been suggested in the literature, but also some new variants of operational space controllers. We address formulations at the velocity, acceleration and force levels. First, we formulate all controllers in a common notational framework, including quaternion-based orientation control, and discuss some of their theoretical properties. Second, we present experimental comparisons of these approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks. As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which ensures physical consistency, as this issue was crucial for our successful robot implementations. Our extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in the face of inevitable modeling errors.
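Acceleration-level formulations like those compared here resolve a desired task-space acceleration into joint accelerations. Below is a generic sketch for a two-link planar arm using a damped pseudoinverse; this is the textbook form of such a resolution, not any specific controller evaluated in the paper:

```python
import numpy as np

def two_link_jacobian(q, l1=1.0, l2=1.0):
    """Standard end-effector Jacobian of a 2-link planar arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def osc_accelerations(q, dq, xdd_des, Jdot_dq, damping=1e-4):
    """Joint accelerations realizing a desired task acceleration:
    qdd = J^+ (xdd_des - Jdot @ dq), with a damped pseudoinverse
    so the resolution stays well-behaved near singularities."""
    J = two_link_jacobian(q)
    JJt = J @ J.T + damping * np.eye(2)
    J_pinv = J.T @ np.linalg.solve(JJt, np.eye(2))
    return J_pinv @ (xdd_des - Jdot_dq)

q = np.array([0.3, 0.8])          # joint angles (away from singularity)
dq = np.zeros(2)                   # at rest, so Jdot @ dq = 0 here
xdd_des = np.array([0.5, -0.2])    # desired task-space acceleration
qdd = osc_accelerations(q, dq, xdd_des, Jdot_dq=np.zeros(2))

# Check: the commanded joint accelerations reproduce the task acceleration.
print(two_link_jacobian(q) @ qdd)  # approximately [0.5, -0.2]
```

A full controller would map these accelerations to torques via the rigid-body dynamics and add a null-space term for redundancy, which this minimal example omits.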

am

link (url) [BibTex]


Adhesion of biologically inspired oil-coated polymer micropillars

Cheung, E., Sitti, M.

Journal of Adhesion Science and Technology, 22(5-6):569-589, Taylor & Francis Group, 2008 (article)

pi

[BibTex]
