

2020


Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
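
The additive formulation can be illustrated with a minimal sketch: a decoder maps a latent code, pose, and clothing type to per-vertex displacements that are added to the body mesh. The architecture, the 4-way clothing label, and all names below are illustrative assumptions rather than the released CAPE implementation; only the 6,890-vertex mesh and 72 pose parameters follow SMPL conventions.

import torch
import torch.nn as nn

class ClothingDecoder(nn.Module):
    # Decodes a latent code plus pose and clothing-type conditions into
    # per-vertex displacements on the body mesh.
    def __init__(self, n_verts=6890, z_dim=64, pose_dim=72, n_types=4):
        super().__init__()
        self.n_verts = n_verts
        self.net = nn.Sequential(
            nn.Linear(z_dim + pose_dim + n_types, 256), nn.ReLU(),
            nn.Linear(256, n_verts * 3),
        )

    def forward(self, z, pose, clothing_type):
        cond = torch.cat([z, pose, clothing_type], dim=-1)
        return self.net(cond).view(-1, self.n_verts, 3)

decoder = ClothingDecoder()
body_verts = torch.zeros(1, 6890, 3)              # stand-in for SMPL vertices
z = torch.randn(1, 64)                            # sample from the latent prior
pose = torch.zeros(1, 72)                         # SMPL pose parameters
ctype = torch.tensor([[1.0, 0.0, 0.0, 0.0]])      # one-hot clothing label
clothed_verts = body_verts + decoder(z, pose, ctype)  # "dress" the body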

ps

arxiv [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully-automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment, e.g. people sitting on the sofa or cooking near the stove, and (2) the generated human-scene interaction be physically feasible, such that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human poses conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games, and in VR/AR.
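
The refinement stage described above can be sketched with toy scene constraints: a penetration penalty and a contact term derived from a signed distance function (SDF) of the scene. The SDF interface, loss forms, and weights are assumptions for illustration, not the paper's implementation.

import torch

def scene_losses(body_verts, scene_sdf):
    # body_verts: (V, 3); scene_sdf maps points to signed distance,
    # negative inside scene geometry.
    d = scene_sdf(body_verts)
    penetration = torch.relu(-d).sum()   # penalize vertices inside the scene
    contact = torch.abs(d).min()         # encourage the closest vertex to touch
    return penetration, contact

floor_sdf = lambda p: p[:, 2]            # toy "scene": a floor plane at z = 0
verts = torch.randn(100, 3, requires_grad=True)
penetration, contact = scene_losses(verts, floor_sdf)
loss = penetration + 0.1 * contact
loss.backward()                          # gradients drive the body refinement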

ps

PDF link (url) [BibTex]


Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.
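
A minimal sketch of the kind of image formation model described above, assuming a Lambertian diffuse term plus a predicted residual for non-diffuse effects; shapes and names are illustrative, not the paper's network.

import numpy as np

def relight(albedo, normals, light_dir, residual):
    # albedo, normals, residual: (H, W, 3); light_dir: unit (3,) vector.
    shading = np.clip(normals @ light_dir, 0.0, None)[..., None]  # Lambertian n·l
    diffuse = albedo * shading
    return np.clip(diffuse + residual, 0.0, 1.0)  # residual adds shadows/speculars

H, W = 4, 4
image = relight(np.full((H, W, 3), 0.5),
                np.tile([0.0, 0.0, 1.0], (H, W, 1)),
                np.array([0.0, 0.0, 1.0]),
                np.zeros((H, W, 3)))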

ps

[BibTex]


VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE
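
The adversarial idea can be sketched as a sequence discriminator that scores pose sequences as real mocap or regressor output. The GRU architecture and least-squares objective below are assumptions for illustration; see the paper and repository for the actual design.

import torch
import torch.nn as nn

class MotionDiscriminator(nn.Module):
    # Scores whole pose sequences as real (mocap) or fake (regressed).
    def __init__(self, pose_dim=72, hidden=128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, pose_seq):          # (B, T, pose_dim)
        _, h = self.gru(pose_seq)          # final hidden state summarizes motion
        return self.head(h[-1])            # one realism score per sequence

disc = MotionDiscriminator()
real = torch.randn(8, 16, 72)              # stand-in for AMASS sequences
fake = torch.randn(8, 16, 72)              # stand-in for regressor output
d_loss = ((disc(real) - 1) ** 2).mean() + (disc(fake) ** 2).mean()  # LSGAN-style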

ps

arXiv code [BibTex]


From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
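
The ex-post density estimation step can be sketched in a few lines: after training, fit a density model (e.g., a Gaussian mixture) to the encoder's latent codes and sample from it for generation. The dimensions and component count below are placeholders.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_latent_density(latents, n_components=10):
    # Fit a density model over latent codes collected from a trained
    # deterministic autoencoder's encoder.
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(latents)
    return gmm

latents = np.random.randn(1000, 16)  # stand-in for encoded training data
gmm = fit_latent_density(latents)
z_new, _ = gmm.sample(5)             # draw new codes, then pass them to the decoder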

ei ps

arXiv [BibTex]


Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.

ps

pdf [BibTex]


Learning Multi-Human Optical Flow

Ranjan, A., Hoffmann, D. T., Tzionas, D., Tang, S., Romero, J., Black, M. J.

International Journal of Computer Vision (IJCV), January 2020 (article)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single- and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.

ps

Paper Publisher Version poster link (url) DOI [BibTex]


Thermal Effects on the Crystallization Kinetics and Interfacial Adhesion of Single-Crystal Phase-Change Gallium

Yunusa, M., Lahlou, A., Sitti, M.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Although substrates play an important role in the crystallization of supercooled liquids, the influence of surface temperature and thermal properties has remained elusive. Here, the crystallization of supercooled phase‐change gallium (Ga) on substrates with different thermal conductivity is studied. The effect of interfacial temperature on the crystallization kinetics, which dictates thermo‐mechanical stresses between the substrate and the crystallized Ga, is investigated. At an elevated surface temperature, close to the melting point of Ga, an extended single‐crystal growth of Ga on dielectric substrates due to layering effect and annealing is realized without the application of external fields. Adhesive strength at the interfaces depends on the thermal conductivity and initial surface temperature of the substrates. This insight can be applied to other liquid metals for industrial applications, and sheds more light on phase‐change memory crystallization.

pi

[BibTex]


Nanoerythrosome-functionalized biohybrid microswimmers

Nicole, Oncay, Yunus, Birgul, Metin Sitti

2020 (article) Accepted

pi

[BibTex]


Statistical reprogramming of macroscopic self-assembly with dynamic boundaries

Utku, Massimo, Zoey, Sitti

2020 (article) Accepted

pi

[BibTex]


Injectable Nanoelectrodes Enable Wireless Deep Brain Stimulation of Native Tissue in Freely Moving Mice

Kozielski, K. L., Jahanshahi, A., Gilbert, H. B., Yu, Y., Erin, O., Francisco, D., Alosaimi, F., Temel, Y., Sitti, M.

bioRxiv, Cold Spring Harbor Laboratory, 2020 (article)

pi

[BibTex]


Controlling two-dimensional collective formation and cooperative behavior of magnetic microrobot swarms

Dong, X., Sitti, M.

The International Journal of Robotics Research, 2020 (article)

Abstract
Magnetically actuated mobile microrobots can access distant, enclosed, and small spaces, such as inside microfluidic channels and the human body, making them appealing for minimally invasive tasks. Despite their simplicity when scaling down, creating collective microrobots that can work closely and cooperatively, as well as reconfigure their formations for different tasks, would significantly enhance their capabilities such as manipulation of objects. However, a challenge of realizing such cooperative magnetic microrobots is to program and reconfigure their formations and collective motions with under-actuated control signals. This article presents a method of controlling 2D static and time-varying formations among collective self-repelling ferromagnetic microrobots (100 μm to 350 μm in diameter, up to 260 in number) by spatially and temporally programming an external magnetic potential energy distribution at the air–water interface or on solid surfaces. A general design method is introduced to program external magnetic potential energy using ferromagnets. A predictive model of the collective system is also presented to predict the formation and guide the design procedure. With the proposed method, versatile complex static formations are experimentally demonstrated and the programmability and scaling effects of formations are analyzed. We also demonstrate the collective mobility of these magnetic microrobots by controlling them to exhibit bio-inspired collective behaviors such as aggregation, directional motion with arbitrary swarm headings, and rotational swarming motion. Finally, the functions of the produced microrobotic swarm are demonstrated by controlling them to navigate through cluttered environments and complete reconfigurable cooperative manipulation tasks.

pi

DOI [BibTex]


Characterization and Thermal Management of a DC Motor-Driven Resonant Actuator for Miniature Mobile Robots with Oscillating Limbs

Colmenares, D., Kania, R., Liu, M., Sitti, M.

arXiv preprint arXiv:2002.00798, 2020 (article)

Abstract
In this paper, we characterize the performance of and develop thermal management solutions for a DC motor-driven resonant actuator developed for flapping wing micro air vehicles. The actuator, a DC micro-gearmotor connected in parallel with a torsional spring, drives reciprocal wing motion. Compared to the gearmotor alone, this design increased torque and power density by 161.1% and 666.8%, respectively, while decreasing the drawn current by 25.8%. Characterization of the actuator, isolated from nonlinear aerodynamic loading, results in standard metrics directly comparable to other actuators. The micro-motor, selected for low weight considerations, operates at high power for limited duration due to thermal effects. To predict system performance, a lumped parameter thermal circuit model was developed. Critical model parameters for this micro-motor, two orders of magnitude smaller than those previously characterized, were identified experimentally. This included the effects of variable winding resistance, bushing friction, speed-dependent forced convection, and the addition of a heatsink. The model was then used to determine a safe operation envelope for the vehicle and to design a weight-optimal heatsink. This actuator design and thermal modeling approach could be applied more generally to improve the performance of any miniature mobile robot or device with motor-driven oscillating limbs or loads.
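
A minimal sketch of a lumped-parameter thermal model of the kind described, assuming a single thermal capacitance heated by Joule losses in a temperature-dependent copper winding and cooled through a thermal resistance; all parameter values are placeholders, not the identified motor constants.

def simulate_winding_temp(current, R0=10.0, alpha=0.0039, R_th=50.0,
                          C_th=0.5, T_amb=25.0, dt=0.01, t_end=60.0):
    # Forward-Euler integration of C_th*dT/dt = I^2*R(T) - (T - T_amb)/R_th,
    # with copper winding resistance R(T) = R0*(1 + alpha*(T - T_amb)).
    T, t, history = T_amb, 0.0, []
    while t < t_end:
        R = R0 * (1 + alpha * (T - T_amb))   # resistance rises with temperature
        P_in = current ** 2 * R              # Joule heating
        P_out = (T - T_amb) / R_th           # heat flow to the environment
        T += dt * (P_in - P_out) / C_th
        t += dt
        history.append(T)
    return history

temps = simulate_winding_temp(current=0.3)
print(f"temperature after 60 s: {temps[-1]:.1f} degC")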

pi

[BibTex]


Magnetic Resonance Imaging System–Driven Medical Robotics

Erin, O., Boyvat, M., Tiryaki, M. E., Phelan, M., Sitti, M.

Advanced Intelligent Systems, 2, Wiley Online Library, 2020 (article)

Abstract
Magnetic resonance imaging (MRI) system–driven medical robotics is an emerging field that aims to use clinical MRI systems not only for medical imaging but also for actuation, localization, and control of medical robots. Submillimeter scale resolution of MR images for soft tissues combined with the electromagnetic gradient coil–based magnetic actuation available inside MR scanners can enable theranostic applications of medical robots for precise image‐guided minimally invasive interventions. MRI‐driven robotics typically does not introduce new MRI instrumentation for actuation but instead focuses on converting already available instrumentation for robotic purposes. To use the advantages of this technology, various medical devices such as untethered mobile magnetic robots and tethered active catheters have been designed to be powered magnetically inside MRI systems. Herein, the state‐of‐the‐art progress, challenges, and future directions of MRI‐driven medical robotic systems are reviewed.

pi

[BibTex]


Pros and Cons: Magnetic versus Optical Microrobots

Sitti, M., Wiersma, D. S.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Mobile microrobotics has emerged as a new robotics field within the last decade to create untethered tiny robots that can access and operate in unprecedented, dangerous, or hard‐to‐reach small spaces noninvasively toward disruptive medical, biotechnology, desktop manufacturing, environmental remediation, and other potential applications. Magnetic and optical actuation methods are currently the most widely used actuation methods in mobile microrobotics, in addition to acoustic and biological (cell‐driven) actuation approaches. The pros and cons of these actuation methods are reported here, depending on the given context. Both can enable long‐range, fast, and precise actuation of single or large numbers of microrobots in diverse environments. Magnetic actuation has unique potential for medical applications of microrobots inside nontransparent tissues at high penetration depths, while optical actuation is suitable for biotechnology, lab‐/organ‐on‐a‐chip, and desktop manufacturing types of applications with much smaller penetration-depth requirements or with transparent environments. Combining both methods in new robot designs has strong potential to unite the advantages of each. There is still much progress needed in both actuation methods to realize the potential disruptive applications of mobile microrobots in real‐world conditions.

pi

[BibTex]


Selectively Controlled Magnetic Microrobots with Opposing Helices

Joshua, Wendong, Panayiota, Eric, Sitti

2020 (article) Accepted

pi

[BibTex]


General Movement Assessment from videos of computed 3D infant body models is equally effective compared to conventional RGB Video rating

Schroeder, S., Hesse, N., Weinberger, R., Tacke, U., Gerstl, L., Hilgendorff, A., Heinen, F., Arens, M., Bodensteiner, C., Dijkstra, L. J., Pujades, S., Black, M., Hadders-Algra, M.

Early Human Development, 2020 (article)

Abstract
Background: General Movement Assessment (GMA) is a powerful tool to predict Cerebral Palsy (CP). Yet, GMA requires substantial training hampering its implementation in clinical routine. This inspired a world-wide quest for automated GMA. Aim: To test whether a low-cost, marker-less system for three-dimensional motion capture from RGB depth sequences using a whole body infant model may serve as the basis for automated GMA. Study design: Clinical case study at an academic neurodevelopmental outpatient clinic. Subjects: Twenty-nine high-risk infants were recruited and assessed at their clinical follow-up at 2-4 month corrected age (CA). Their neurodevelopmental outcome was assessed regularly up to 12-31 months CA. Outcome measures: GMA according to Hadders-Algra by a masked GMA-expert of conventional and computed 3D body model (“SMIL motion”) videos of the same GMs. Agreement between both GMAs was assessed, and sensitivity and specificity of both methods to predict CP at ≥12 months CA. Results: The agreement of the two GMA ratings was substantial, with κ=0.66 for the classification of definitely abnormal (DA) GMs and an ICC of 0.887 (95% CI 0.762;0.947) for a more detailed GM-scoring. Five children were diagnosed with CP (four bilateral, one unilateral CP). The GMs of the child with unilateral CP were twice rated as mildly abnormal. DA-ratings of both videos predicted bilateral CP well: sensitivity 75% and 100%, specificity 88% and 92% for conventional and SMIL motion videos, respectively. Conclusions: Our computed infant 3D full body model is an attractive starting point for automated GMA in infants at risk of CP.
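
The agreement statistic reported above (Cohen's kappa for the binary "definitely abnormal" classification) can be computed, for example, with scikit-learn; the ratings below are illustrative stand-ins, not study data.

from sklearn.metrics import cohen_kappa_score

# Binary DA / not-DA ratings per infant by the same expert on the two video types.
conventional_video = [1, 0, 1, 1, 0, 0, 1, 0]
smil_motion_video  = [1, 0, 1, 0, 0, 0, 1, 0]
print(cohen_kappa_score(conventional_video, smil_motion_video))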

ps

[BibTex]


Acoustically powered surface-slipping mobile microrobots

Aghakhani, A., Yasa, O., Wrede, P., Sitti, M.

Proceedings of the National Academy of Sciences, 117, National Acad Sciences, 2020 (article)

Abstract
Untethered synthetic microrobots have significant potential to revolutionize minimally invasive medical interventions in the future. However, their relatively slow speed and low controllability near surfaces typically are some of the barriers standing in the way of their medical applications. Here, we introduce acoustically powered microrobots with a fast, unidirectional surface-slipping locomotion on both flat and curved surfaces. The proposed three-dimensionally printed, bullet-shaped microrobot contains a spherical air bubble trapped inside its internal body cavity, where the bubble is resonated using acoustic waves. The net fluidic flow due to the bubble oscillation orients the microrobot's axisymmetric axis perpendicular to the wall and then propels it laterally at very high speeds (up to 90 body lengths per second with a body length of 25 µm) while inducing an attractive force toward the wall. To achieve unidirectional locomotion, a small fin is added to the microrobot’s cylindrical body surface, which biases the propulsion direction. For motion direction control, the microrobots are coated anisotropically with a soft magnetic nanofilm layer, allowing steering under a uniform magnetic field. Finally, surface locomotion capability of the microrobots is demonstrated inside a three-dimensional circular cross-sectional microchannel under acoustic actuation. Overall, the combination of acoustic powering and magnetic steering can be effectively utilized to actuate and navigate these microrobots in confined and hard-to-reach body location areas in a minimally invasive fashion.

pi

[BibTex]


Bio-inspired Flexible Twisting Wings Increase Lift and Efficiency of a Flapping Wing Micro Air Vehicle

Colmenares, D., Kania, R., Zhang, W., Sitti, M.

arXiv preprint arXiv:2001.11586, 2020 (article)

Abstract
We investigate the effect of wing twist flexibility on lift and efficiency of a flapping-wing micro air vehicle capable of liftoff. Wings used previously were chosen to be fully rigid due to modeling and fabrication constraints. However, biological wings are highly flexible and other micro air vehicles have successfully utilized flexible wing structures for specialized tasks. The goal of our study is to determine if dynamic twisting of flexible wings can increase overall aerodynamic lift and efficiency. A flexible twisting wing design was found to increase aerodynamic efficiency by 41.3%, translational lift production by 35.3%, and the effective lift coefficient by 63.7% compared to the rigid-wing design. These results exceed the predictions of quasi-steady blade element models, indicating the need for unsteady computational fluid dynamics simulations of twisted flapping wings.

pi

[BibTex]


Cohesive self-organization of mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

arXiv preprint arXiv:1907.05856, 2020 (article)

pi

[BibTex]


Bioinspired underwater locomotion of light-driven liquid crystal gels

Shahsavan, H., Aghakhani, A., Zeng, H., Guo, Y., Davidson, Z. S., Priimagi, A., Sitti, M.

Proceedings of the National Academy of Sciences, National Acad Sciences, 2020 (article)

Abstract
Untethered dynamic shape programming and control of soft materials have significant applications in technologies such as soft robots, medical devices, organ-on-a-chip, and optical devices. Here, we present a solution to remotely actuate and move soft materials underwater in a fast, efficient, and controlled manner using photoresponsive liquid crystal gels (LCGs). LCG constructs with engineered molecular alignment show a low and sharp phase-transition temperature and experience considerable density reduction by light exposure, thereby allowing rapid and reversible shape changes. We demonstrate different modes of underwater locomotion, such as crawling, walking, jumping, and swimming, by localized and time-varying illumination of LCGs. The diverse locomotion modes of smart LCGs can provide a new toolbox for designing efficient light-fueled soft robots in fluid-immersed media.

pi

[BibTex]


Additive manufacturing of cellulose-based materials with continuous, multidirectional stiffness gradients

Giachini, P., Gupta, S., Wang, W., Wood, D., Yunusa, M., Baharlou, E., Sitti, M., Menges, A.

Science Advances, 6, American Association for the Advancement of Science, 2020 (article)

Abstract
Functionally graded materials (FGMs) enable applications in fields such as biomedicine and architecture, but their fabrication suffers from shortcomings in gradient continuity, interfacial bonding, and directional freedom. In addition, most commercial design software fail to incorporate property gradient data, hindering explorations of the design space of FGMs. Here, we leveraged a combined approach of materials engineering and digital processing to enable extrusion-based multimaterial additive manufacturing of cellulose-based tunable viscoelastic materials with continuous, high-contrast, and multidirectional stiffness gradients. A method to engineer sets of cellulose-based materials with similar compositions, yet distinct mechanical and rheological properties, was established. In parallel, a digital workflow was developed to embed gradient information into design models with integrated fabrication path planning. The payoff of integrating these physical and digital tools is the ability to achieve the same stiffness gradient in multiple ways, opening design possibilities previously limited by the rigid coupling of material and geometry.

pi

[BibTex]

2018


Swimming Back and Forth Using Planar Flagellar Propulsion at Low Reynolds Numbers

Khalil, I. S. M., Tabak, A. F., Hamed, Y., Mitwally, M. E., Tawakol, M., Klingner, A., Sitti, M.

Advanced Science, 5(2):1700461, 2018 (article)

Abstract
Peritrichously flagellated Escherichia coli swim back and forth by wrapping their flagella together in a helical bundle. However, other monotrichous bacteria cannot swim back and forth with a single flagellum and planar wave propagation. Quantifying this observation, a magnetically driven soft two‐tailed microrobot capable of reversing its swimming direction without making a U‐turn trajectory or actively modifying the direction of wave propagation is designed and developed. The microrobot contains magnetic microparticles within the polymer matrix of its head and consists of two collinear, unequal, and opposite ultrathin tails. It is driven and steered using a uniform magnetic field along the direction of motion with a sinusoidally varying orthogonal component. Distinct reversal frequencies that enable selective and independent excitation of the first or the second tail of the microrobot based on their tail length ratio are found. While the first tail provides a propulsive force below one of the reversal frequencies, the second is almost passive, and the net propulsive force achieves flagellated motion along one direction. On the other hand, the second tail achieves flagellated propulsion along the opposite direction above the reversal frequency.

pi

link (url) DOI [BibTex]


Customized Multi-Person Tracker

Ma, L., Tang, S., Black, M. J., Van Gool, L.

In Computer Vision – ACCV 2018, Springer International Publishing, Asian Conference on Computer Vision, December 2018 (inproceedings)

ps

PDF Project Page [BibTex]


Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time

Huang, Y., Kaufmann, M., Aksan, E., Black, M. J., Hilliges, O., Pons-Moll, G.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 37, pages: 185:1-185:15, ACM, November 2018, Two first authors contributed equally (article)

Abstract
We demonstrate a novel deep neural network capable of reconstructing human full body pose in real-time from 6 Inertial Measurement Units (IMUs) worn on the user's body. In doing so, we address several difficult challenges. First, the problem is severely under-constrained as multiple pose parameters produce the same IMU orientations. Second, capturing IMU data in conjunction with ground-truth poses is expensive and difficult to do in many target application scenarios (e.g., outdoors). Third, modeling temporal dependencies through non-linear optimization has proven effective in prior work but makes real-time prediction infeasible. To address this important limitation, we learn the temporal pose priors using deep learning. To learn from sufficient data, we synthesize IMU data from motion capture datasets. A bi-directional RNN architecture leverages past and future information that is available at training time. At test time, we deploy the network in a sliding window fashion, retaining real time capabilities. To evaluate our method, we recorded DIP-IMU, a dataset consisting of 10 subjects wearing 17 IMUs for validation in 64 sequences with 330,000 time instants; this constitutes the largest IMU dataset publicly available. We quantitatively evaluate our approach on multiple datasets and show results from a real-time implementation. DIP-IMU and the code are available for research purposes.
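
A hedged sketch of the core mapping: a bidirectional RNN turns a sliding window of flattened IMU rotation readings into per-frame pose parameters, so each frame sees both past and future context. Dimensions and names are illustrative assumptions, not the released DIP architecture.

import torch
import torch.nn as nn

class IMUPoser(nn.Module):
    # Maps a window of flattened IMU rotation matrices (6 sensors x 9 values)
    # to per-frame SMPL-style pose parameters.
    def __init__(self, imu_dim=6 * 9, pose_dim=72, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(imu_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, pose_dim)

    def forward(self, imu_window):        # (B, T, imu_dim)
        feats, _ = self.rnn(imu_window)   # past+future context per frame
        return self.out(feats)            # (B, T, pose_dim)

model = IMUPoser()
window = torch.randn(1, 30, 54)           # one 30-frame sliding window
poses = model(window)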

ps

data code pdf preprint errata video DOI Project Page [BibTex]


Universal Custom Complex Magnetic Spring Design Methodology

Woodward, M. A., Sitti, M.

IEEE Transactions on Magnetics, 54(1):1-13, October 2018 (article)

Abstract
A design methodology is presented for creating custom complex magnetic springs through the design of force-displacement curves. This methodology results in a magnet configuration, which will produce a desired force-displacement relationship. Initially, the problem is formulated and solved as a system of linear equations. Then, given the limited likelihood of a single solution being feasibly manufactured, key parameters of the solution are extracted and varied to create a family of solutions. Finally, these solutions are refined using numerical optimization. Given the properties of magnets, this methodology can create any well-defined function of force versus displacement and is model-independent. To demonstrate this flexibility, a number of example magnetic springs are designed, one of which, designed for use in a jumping-gliding robot's shape memory alloy actuated clutch, is manufactured and experimentally characterized. Due to the scaling of magnetic forces, the displacement region in which these magnetic springs are most applicable is that of millimeters and below. However, this region is well situated for miniature robots and smart material actuators, where a tailored magnetic spring, designed to complement a component, can enhance its performance while adding new functionality. The methodology is also extendable to variable interactions and multi-dimensional magnetic field design.
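
The initial linear-system step can be sketched as a least-squares fit: superpose unit-strength force-displacement profiles of candidate magnet placements so their weighted sum matches a target curve. The inverse-square interaction used below is a placeholder, not the paper's magnet model.

import numpy as np

x = np.linspace(1e-3, 5e-3, 50)          # displacements (m)
f_target = 0.2 - 40.0 * x                 # desired force curve (N)

# Columns: unit-strength force profiles of candidate magnet pairs at
# different offsets (toy inverse-square interaction).
offsets = np.array([0.5e-3, 1e-3, 2e-3, 4e-3])
A = np.stack([1.0 / (x + d) ** 2 for d in offsets], axis=1)

strengths, *_ = np.linalg.lstsq(A, f_target, rcond=None)
residual = np.linalg.norm(A @ strengths - f_target)  # fit quality of this family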

pi

DOI [BibTex]


On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.
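
For reference, the end-point error (EPE) discussed in observation 2 is commonly computed as the mean Euclidean distance between predicted and ground-truth flow vectors:

import numpy as np

def epe(flow_pred, flow_gt):
    # Mean Euclidean distance between predicted and ground-truth (u, v) vectors.
    return np.sqrt(((flow_pred - flow_gt) ** 2).sum(axis=-1)).mean()

print(epe(np.ones((4, 4, 2)), np.zeros((4, 4, 2))))  # -> sqrt(2), about 1.414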

avg ps

arXiv DOI [BibTex]


Deep Neural Network-based Cooperative Visual Tracking through Multiple Micro Aerial Vehicles

Price, E., Lawless, G., Ludwig, R., Martinovic, I., Buelthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 3(4):3193-3200, IEEE, October 2018, Also accepted and presented in the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (article)

Abstract
Multi-camera tracking of humans and animals in outdoor environments is a relevant and challenging problem. Our approach to it involves a team of cooperating micro aerial vehicles (MAVs) with on-board cameras only. DNNs often fail at objects with small scale or far away from the camera, which are typical characteristics of a scenario with aerial robots. Thus, the core problem addressed in this paper is how to achieve on-board, online, continuous and accurate vision-based detections using DNNs for visual person tracking through MAVs. Our solution leverages cooperation among multiple MAVs and active selection of most informative regions of image. We demonstrate the efficiency of our approach through simulations with up to 16 robots and real robot experiments involving two aerial robots tracking a person, while maintaining an active perception-driven formation. ROS-based source code is provided for the benefit of the community.

ps

Published Version link (url) DOI [BibTex]


Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation

Wulff, J., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 567-582, Springer, Cham, October 2018 (inproceedings)

Abstract
The difficulty of annotating training data is a major obstacle to using CNNs for low-level tasks in video. Synthetic data often does not generalize to real videos, while unsupervised methods require heuristic losses. Proxy tasks can overcome these issues, and start by training a network for a task for which annotation is easier or which can be trained unsupervised. The trained network is then fine-tuned for the original task using small amounts of ground truth data. Here, we investigate frame interpolation as a proxy task for optical flow. Using real movies, we train a CNN unsupervised for temporal interpolation. Such a network implicitly estimates motion, but cannot handle untextured regions. By fine-tuning on small amounts of ground truth flow, the network can learn to fill in homogeneous regions and compute full optical flow fields. Using this unsupervised pre-training, our network outperforms similar architectures that were trained supervised using synthetic optical flow.

ps

pdf arXiv DOI Project Page [BibTex]


First Impressions of Personality Traits From Body Shapes

Hu, Y., Parde, C. J., Hill, M. Q., Mahmood, N., O’Toole, A. J.

Psychological Science, 29(12):1969–1983, October 2018 (article)

Abstract
People infer the personalities of others from their facial appearance. Whether they do so from body shapes is less studied. We explored personality inferences made from body shapes. Participants rated personality traits for male and female bodies generated with a three-dimensional body model. Multivariate spaces created from these ratings indicated that people evaluate bodies on valence and agency in ways that directly contrast positive and negative traits from the Big Five domains. Body-trait stereotypes based on the trait ratings revealed a myriad of diverse body shapes that typify individual traits. Personality-trait profiles were predicted reliably from a subset of the body-shape features used to specify the three-dimensional bodies. Body features related to extraversion and conscientiousness were predicted with the highest consensus, followed by openness traits. This study provides the first comprehensive look at the range, diversity, and reliability of personality inferences that people make from body shapes.

ps

publisher site pdf DOI [BibTex]


Human Motion Parsing by Hierarchical Dynamic Clustering

Zhang, Y., Tang, S., Sun, H., Neumann, H.

In Proceedings of the British Machine Vision Conference (BMVC), pages: 269, BMVA Press, 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised bottom-up manner. We systematically investigate the modules in this framework and particularly propose diverse temporal pooling schemes, in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than the other related state-of-the-art methods on several datasets.

ps

pdf Project Page [BibTex]


Generating 3D Faces using Convolutional Mesh Autoencoders

Ranjan, A., Bolkart, T., Sanyal, S., Black, M. J.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11207, pages: 725-741, Springer, Cham, September 2018 (inproceedings)

Abstract
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.

ps

Code (tensorflow) Code (pytorch) Project Page paper supplementary DOI Project Page Project Page [BibTex]


Part-Aligned Bilinear Representations for Person Re-identification

Suh, Y., Wang, J., Tang, S., Mei, T., Lee, K. M.

In European Conference on Computer Vision (ECCV), 11218, pages: 418-437, Springer, Cham, September 2018 (inproceedings)

Abstract
Comparing the appearance of corresponding body parts is essential for person re-identification. However, body parts are frequently misaligned between detected boxes, due to detection errors and pose/viewpoint changes. In this paper, we propose a network that learns a part-aligned representation for person re-identification. Our model consists of a two-stream network, which generates appearance and body part feature maps respectively, and a bilinear-pooling layer that fuses two feature maps to an image descriptor. We show that it results in a compact descriptor, where the inner product between two image descriptors is equivalent to an aggregation of the local appearance similarities of the corresponding body parts, and thereby significantly reduces the part misalignment problem. Our approach is advantageous over other pose-guided representations by learning part descriptors optimal for person re-identification. Training the network does not require any part annotation on the person re-identification dataset. Instead, we simply initialize the part sub-stream using a pre-trained sub-network of an existing pose estimation network and train the whole network to minimize the re-identification loss. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets including Market-1501, CUHK03, CUHK01 and DukeMTMC, and standard video dataset MARS.
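
The bilinear pooling step can be sketched directly: summing outer products of the appearance and part feature maps over spatial locations yields a descriptor whose inner product aggregates part-aligned appearance similarities. Shapes below are illustrative, not the paper's network dimensions.

import torch

def bilinear_descriptor(appearance, parts):
    # appearance: (C_a, H, W); parts: (C_p, H, W) -> descriptor of size C_a * C_p.
    d = torch.einsum("ahw,phw->ap", appearance, parts)  # sum of outer products
    d = d.flatten()
    return d / d.norm()

desc1 = bilinear_descriptor(torch.randn(8, 16, 8), torch.randn(4, 16, 8))
desc2 = bilinear_descriptor(torch.randn(8, 16, 8), torch.randn(4, 16, 8))
similarity = desc1 @ desc2   # aggregates part-aligned local similarities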

ps

pdf supplementary DOI Project Page [BibTex]


Learning Human Optical Flow

Ranjan, A., Romero, J., Black, M. J.

In 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.

ps

video code pdf link (url) Project Page Project Page [BibTex]


Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation

(Best Student Paper Award)

Omran, M., Lassner, C., Pons-Moll, G., Gehler, P. V., Schiele, B.

In 3DV, September 2018 (inproceedings)

Abstract
Direct prediction of 3D body pose and shape remains a challenge even for highly parameterized deep learning models. Mapping from the 2D image space to the prediction space is difficult: perspective ambiguities make the loss function noisy and training data is scarce. In this paper, we propose a novel approach (Neural Body Fitting (NBF)). It integrates a statistical body model within a CNN, leveraging reliable bottom-up semantic body part segmentation and robust top-down body model constraints. NBF is fully differentiable and can be trained using 2D and 3D annotations. In detailed experiments, we analyze how the components of our model affect performance, especially the use of part segmentations as an explicit intermediate representation, and present a robust, efficiently trainable framework for 3D human pose estimation from 2D images with competitive results on standard benchmarks. Code is available at https://github.com/mohomran/neural_body_fitting

ps

arXiv code Project Page [BibTex]


Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11220, pages: 713-731, Springer, Cham, September 2018 (inproceedings)

avg ps

pdf suppmat Video Project Page DOI Project Page [BibTex]


Learning an Infant Body Model from RGB-D Data for Accurate Full Body Motion Analysis

Hesse, N., Pujades, S., Romero, J., Black, M. J., Bodensteiner, C., Arens, M., Hofmann, U. G., Tacke, U., Hadders-Algra, M., Weinberger, R., Muller-Felber, W., Schroeder, A. S.

In Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2018 (inproceedings)

Abstract
Infant motion analysis enables early detection of neurodevelopmental disorders like cerebral palsy (CP). Diagnosis, however, is challenging, requiring expert human judgement. An automated solution would be beneficial but requires the accurate capture of 3D full-body movements. To that end, we develop a non-intrusive, low-cost, lightweight acquisition system that captures the shape and motion of infants. Going beyond work on modeling adult body shape, we learn a 3D Skinned Multi-Infant Linear body model (SMIL) from noisy, low-quality, and incomplete RGB-D data. We demonstrate the capture of shape and motion with 37 infants in a clinical environment. Quantitative experiments show that SMIL faithfully represents the data and properly factorizes the shape and pose of the infants. With a case study based on general movement assessment (GMA), we demonstrate that SMIL captures enough information to allow medical assessment. SMIL provides a new tool and a step towards a fully automatic system for GMA.

ps

pdf Project page video extended arXiv version DOI Project Page [BibTex]


Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

Prokudin, S., Gehler, P., Nowozin, S.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Modern deep learning systems successfully solve many perception tasks such as object pose estimation when the input image is of high quality. However, in challenging imaging conditions such as on low resolution images or when the image is corrupted by imaging artifacts, current systems degrade considerably in accuracy. While a loss in performance is unavoidable, we would like our models to quantify their uncertainty in order to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper, we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over object pose angle. Whereas a single von Mises distribution makes strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model using a finite and infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state-of-the-art.
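
The angular likelihood at the heart of the model is the von Mises distribution; a minimal sketch of its negative log-likelihood (the single-component case, before extending to a mixture) follows, using torch.i0 for the modified Bessel function I0.

import math
import torch

def von_mises_nll(theta, mu, kappa):
    # -log p(theta | mu, kappa), with p = exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa));
    # torch.i0 is the modified Bessel function of the first kind, order zero.
    return -kappa * torch.cos(theta - mu) + torch.log(2 * math.pi * torch.i0(kappa))

theta = torch.tensor([0.1])   # observed pose angle (radians)
nll = von_mises_nll(theta, mu=torch.tensor([0.0]), kappa=torch.tensor([4.0]))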

ps

code pdf [BibTex]


Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera

Marcard, T. V., Henschel, R., Black, M. J., Rosenhahn, B., Pons-Moll, G.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11214, pages: 614-631, Springer, Cham, September 2018 (inproceedings)

Abstract
In this work, we propose a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached at the body limbs to estimate accurate 3D poses in the wild. This poses many new challenges: the moving camera, heading drift, cluttered background, occlusions and many people visible in the video. We associate 2D pose detections in each image to the corresponding IMU-equipped persons by solving a novel graph based optimization problem that forces 3D to 2D coherency within a frame and across long range frames. Given associations, we jointly optimize the pose of a statistical body model, the camera pose and heading drift using a continuous optimization framework. We validated our method on the TotalCapture dataset, which provides video and IMU synchronized with ground truth. We obtain an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild. Using our method, we recorded 3D Poses in the Wild (3DPW), a new dataset consisting of more than 51,000 frames with accurate 3D pose in challenging sequences, including walking in the city, going up-stairs, having coffee or taking the bus. We make the reconstructed 3D poses, video, IMU and 3D models available for research purposes at http://virtualhumans.mpi-inf.mpg.de/3DPW.

ps

pdf SupMat data project DOI Project Page [BibTex]


Visual Perception and Evaluation of Photo-Realistic Self-Avatars From 3D Body Scans in Males and Females

Thaler, A., Piryankova, I., Stefanucci, J. K., Pujades, S., de la Rosa, S., Streuber, S., Romero, J., Black, M. J., Mohler, B. J.

Frontiers in ICT, 5, pages: 1-14, September 2018 (article)

Abstract
The creation or streaming of photo-realistic self-avatars is important for virtual reality applications that aim for perception and action to replicate real world experience. The appearance and recognition of a digital self-avatar may be especially important for applications related to telepresence, embodied virtual reality, or immersive games. We investigated gender differences in the use of visual cues (shape, texture) of a self-avatar for estimating body weight and evaluating avatar appearance. A full-body scanner was used to capture each participant's body geometry and color information and a set of 3D virtual avatars with realistic weight variations was created based on a statistical body model. Additionally, a second set of avatars was created with an average underlying body shape matched to each participant’s height and weight. In four sets of psychophysical experiments, the influence of visual cues on the accuracy of body weight estimation and the sensitivity to weight changes was assessed by manipulating body shape (own, average) and texture (own photo-realistic, checkerboard). The avatars were presented on a large-screen display, and participants responded to whether the avatar's weight corresponded to their own weight. Participants also adjusted the avatar's weight to their desired weight and evaluated the avatar's appearance with regard to similarity to their own body, uncanniness, and their willingness to accept it as a digital representation of the self. The results of the psychophysical experiments revealed no gender difference in the accuracy of estimating body weight in avatars. However, males accepted a larger weight range of the avatars as corresponding to their own. In terms of the ideal body weight, females but not males desired a thinner body. With regard to the evaluation of avatar appearance, the questionnaire responses suggest that own photo-realistic texture was more important to males for higher similarity ratings, while own body shape seemed to be more important to females. These results argue for gender-specific considerations when creating self-avatars.

ps

pdf DOI [BibTex]


Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios

Tallamraju, R., Rajappa, S., Black, M. J., Karlapalem, K., Ahmad, A.

2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages: 1-8, IEEE, August 2018 (conference)

Abstract
In this work, we consider the problem of decentralized multi-robot target tracking and obstacle avoidance in dynamic environments. Each robot executes a local motion planning algorithm which is based on model predictive control (MPC). The planner is designed as a quadratic program, subject to constraints on robot dynamics and obstacle avoidance. Repulsive potential field functions are employed to avoid obstacles. The novelty of our approach lies in embedding these non-linear potential field functions as constraints within a convex optimization framework. Our method convexifies non-convex constraints and dependencies by replacing them with pre-computed external input forces in robot dynamics. The proposed algorithm additionally incorporates different methods to avoid the local minima problems associated with using potential field functions in planning. The motion planner does not enforce predefined trajectories or any formation geometry on the robots and is a comprehensive solution for cooperative obstacle avoidance in the context of multi-robot target tracking. We perform simulation studies for different scenarios to showcase the convergence and efficacy of the proposed algorithm.
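
A repulsive potential-field force of the kind the planner pre-computes and feeds into the QP as an external input can be sketched with the classic form below; the exact potential and gains used in the paper are not reproduced here.

import numpy as np

def repulsive_force(p_robot, p_obs, influence=2.0, gain=1.0):
    # Classic repulsive potential gradient: active only within `influence` radius.
    diff = p_robot - p_obs
    d = np.linalg.norm(diff)
    if d >= influence or d == 0.0:
        return np.zeros_like(diff)
    return gain * (1.0 / d - 1.0 / influence) / d ** 2 * (diff / d)

f = repulsive_force(np.array([1.0, 0.0]), np.array([0.0, 0.0]))  # pushes robot away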

ps

Published Version link (url) DOI [BibTex]


Programmable collective behavior in dynamically self-assembled mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

Advanced Science, July 2018 (article)

Abstract
Collective control of mobile microrobotic swarms is indispensable for their potential high-impact applications in targeted drug delivery, medical diagnostics, parallel micromanipulation, and environmental sensing and remediation. The lack of on-board computational and sensing capabilities in current microrobotic systems necessitates the use of physical interactions among individual microrobots for local physical communication and cooperation. Here, we show that mobile microrobotic swarms with well-defined collective behavior can be designed by engineering magnetic interactions among individual units. Microrobots, each consisting of a linear chain of self-assembled magnetic microparticles, locomote on surfaces in response to a precessing magnetic field. Control over the direction of the precessing magnetic field allows attractive and repulsive interactions among microrobots to be engineered and, thus, collective order with well-defined spatial organization and parallel operation over macroscale distances (~1 cm). These microrobotic swarms can be guided through confined spaces while preserving microrobot morphology and function, and can further achieve directional transport of large cargoes on surfaces and small cargoes in bulk fluids. The described design approach, which exploits physical interactions among individual robots, enables facile and rapid formation of self-organized and reconfigurable microrobotic swarms with programmable collective order.
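A hedged toy model of the qualitative mechanism, not the magnetic dipole physics of the paper: a single pairwise interaction whose attractive part is switched by a global parameter, standing in for the precession-direction control knob, moves a 2D swarm between staying dispersed and aggregating with finite spacing.

```python
# Generic agent-based sketch: one pairwise force (repulsive inside r_eq,
# attractive outside it when enabled) switches the swarm between dispersal
# and well-spaced aggregation. Illustrative stand-in, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)

def step(pos, attract, dt=0.02, r_eq=1.0, r_cut=10.0):
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                     # vectors to all other agents
        r = np.linalg.norm(d, axis=1)
        mask = (r > 1e-9) & (r < r_cut)
        # unit-strength force: -1 = repel (inside r_eq), +1 = attract
        gain = np.where(r < r_eq, -1.0, 1.0 if attract else 0.0)
        vel[i] = (gain[mask, None] * d[mask] / r[mask, None]).sum(axis=0)
    return pos + dt * vel

for attract in (True, False):
    pos = rng.uniform(-5, 5, size=(30, 2))
    for _ in range(1500):
        pos = step(pos, attract)
    radius = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
    print(f"attraction {'on ' if attract else 'off'}: swarm radius {radius:.1f}")
```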

pi

link (url) [BibTex]


3D-Printed Biodegradable Microswimmer for Drug Delivery and Targeted Cell Labeling

Ceylan, H., Yasa, I. C., Yasa, O., Tabak, A. F., Giltinan, J., Sitti, M.

bioRxiv, pages: 379024, July 2018 (article)

Abstract
Miniaturization of interventional medical devices can leverage minimally invasive technologies by enabling operational resolution at cellular length scales with high precision and repeatability. Untethered micron-scale mobile robots can realize this by navigating and performing in hard-to-reach, confined and delicate inner body sites. However, such a complex task requires an integrated design and engineering strategy in which powering, control, environmental sensing, medical functionality and biodegradability are considered together. The present study reports a hydrogel-based, biodegradable microrobotic swimmer that responds to changes in its microenvironment to perform theranostic cargo delivery and release tasks. We design a double-helical magnetic microswimmer, 20 micrometers in length, which is 3D-printed with complex geometrical and compositional features. At normal physiological concentrations, the matrix metalloproteinase-2 (MMP-2) enzyme can entirely degrade the microswimmer body within 118 h into solubilized, non-toxic products. The microswimmer responds to pathological concentrations of MMP-2 by swelling, thereby accelerating the release kinetics of the drug payload. Anti-ErbB2 antibody-tagged magnetic nanoparticles released from the degraded microswimmers enable targeted labeling of SKBR3 breast cancer cells, demonstrating the potential for medical imaging of local tissue sites following therapeutic intervention. These results represent a leap forward toward clinical medical microrobots that are capable of sensing and responding to local pathological information and performing specific therapeutic and diagnostic tasks as orderly executed operations, using their smart composite material architectures.
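A back-of-the-envelope sketch of enzyme-responsive release under a simple first-order model; only the 118 h full-degradation figure comes from the abstract, while the rate constants and the assumed four-fold acceleration at pathological MMP-2 levels are invented for illustration.

```python
# Toy model: drug release as first-order kinetics with a rate constant that
# scales with MMP-2 level (swelling accelerates release at pathological
# concentrations). Rate values are illustrative assumptions.
import numpy as np

def fraction_released(t_hours, k_per_hour):
    """First-order release: F(t) = 1 - exp(-k t)."""
    return 1.0 - np.exp(-k_per_hour * t_hours)

k_normal = 3.0 / 118.0        # chosen so release is ~95% complete at 118 h
k_path   = 4.0 * k_normal     # assumed swelling-induced acceleration

for t in (24, 48, 118):
    print(f"t = {t:3d} h: normal {fraction_released(t, k_normal):.0%}, "
          f"pathological {fraction_released(t, k_path):.0%}")
```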

pi

DOI Project Page [BibTex]


Robust Physics-based Motion Retargeting with Realistic Body Shapes

Borno, M. A., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.
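For reference, the sketch below implements the finite-horizon discrete-time LQR backward (Riccati) pass that LQR-tree style controllers are assembled from; the toy point-mass dynamics and cost weights are illustrative placeholders, not the paper's character models.

```python
# Minimal finite-horizon discrete-time LQR backward pass (Riccati recursion),
# the basic building block of LQR-tree style controllers.
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, N):
    """Return time-varying feedback gains K[0..N-1] for x_{t+1} = A x + B u."""
    P, gains = Qf, []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]            # gains ordered forward in time

# Toy 1D point mass: state = [position, velocity], control = acceleration
dt = 0.01
A = np.array([[1., dt], [0., 1.]])
B = np.array([[0.5*dt**2], [dt]])
Q, R, Qf = np.eye(2), np.array([[1e-3]]), 10*np.eye(2)

K = lqr_backward_pass(A, B, Q, R, Qf, N=200)
x = np.array([1.0, 0.0])          # perturbation from the reference
for t in range(200):
    u = -K[t] @ x                 # stabilizing feedback
    x = A @ x + B @ u
print("final state after feedback:", x)
```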

mg ps

pdf video Project Page Project Page [BibTex]


Innate turning preference of leaf-cutting ants in the absence of external orientation cues

Endlein, T., Sitti, M.

Journal of Experimental Biology, The Company of Biologists Ltd, June 2018 (article)

Abstract
Many ants use a combination of cues for orientation, but how do ants find their way when all external cues are suppressed? Do they walk in a random way, or are their movements spatially oriented? Here we show for the first time that leaf-cutting ants (Acromyrmex lundii) have an innate preference for turning counter-clockwise (left) when external cues are precluded. We demonstrated this by allowing individual ants to run freely on the water surface of a newly developed treadmill. The surface tension supported medium-sized workers but effectively prevented the ants from reaching the wall of the vessel, which was important to avoid wall-following behaviour (thigmotaxis). Most ants ran on the spot for minutes while slowly turning counter-clockwise in the absence of visual cues. Reconstructing the effectively walked path revealed a looping pattern, which could be interpreted as a search strategy. A similar turning bias was shown for groups of ants in a symmetrical Y-maze, where twice as many ants chose the left branch in the absence of optical cues. Wall-following behaviour was tested by inserting a coiled tube before the Y-fork: when ants traversed a left-coiled tube, more ants chose the left box, and vice versa. Adding visual cues in the form of vertical black stripes, either outside the treadmill or on one branch of the Y-maze, led to oriented walks towards the stripes. It is suggested that both the turning bias and wall-following are employed as search strategies for an unknown environment, and that both can be overridden by visual cues.
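As a quick illustration of how a "twice as many ants chose left" Y-maze outcome can be checked against the no-bias null hypothesis (p = 0.5), the snippet below runs a one-sided binomial test on an invented sample size; the abstract does not report the actual counts.

```python
# Hypothetical worked example: test a 2:1 left/right Y-maze split against
# the null of no turning bias. The count of 60 ants is made up.
from scipy.stats import binomtest

n_ants, n_left = 60, 40
result = binomtest(n_left, n_ants, p=0.5, alternative='greater')
print(f"left choices: {n_left}/{n_ants}, one-sided p = {result.pvalue:.4f}")
```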

pi

link (url) DOI [BibTex]


Motility and chemotaxis of bacteria-driven microswimmers fabricated using antigen 43-mediated biotin display

Schauer, O., Mostaghaci, B., Colin, R., Hürtgen, D., Kraus, D., Sitti, M., Sourjik, V.

Scientific Reports, 8(1):9801, Nature Publishing Group, June 2018 (article)

Abstract
Bacteria-driven biohybrid microswimmers (bacteriabots) combine synthetic cargo with motile living bacteria that enable propulsion and steering. Although the fabrication and potential use of such bacteriabots have attracted much attention, existing fabrication methods require extensive sample preparation that can drastically decrease the viability and motility of the bacteria. Moreover, the chemotactic behavior of bacteriabots in a liquid medium with chemical gradients has remained largely unclear. To overcome these shortcomings, we engineered Escherichia coli to autonomously display biotin on its cell surface via the engineered autotransporter antigen 43 and thus to bind streptavidin-coated cargo. We show that cargo attachment to these bacteria is greatly enhanced by motility and occurs predominantly at the cell poles, which is highly beneficial for the fabrication of motile bacteriabots. We further performed a systematic study to understand and optimize the ability of these bacteriabots to follow chemical gradients. We demonstrate that the chemotaxis of bacteriabots is primarily limited by the cargo-dependent reduction of swimming speed and show that fabricating bacteriabots from elongated E. coli cells can overcome this limitation.
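A hedged run-and-tumble toy model of the paper's central observation, that cargo-induced speed reduction limits gradient climbing even with intact tumbling logic; all parameters are illustrative and the model is generic, not the authors' analysis.

```python
# Toy run-and-tumble model: slowing the swimmer (cargo drag) weakens gradient
# climbing even when the tumbling logic is unchanged. Parameters are
# illustrative, not fitted to the paper's data.
import numpy as np

rng = np.random.default_rng(0)

def drift_up_gradient(speed, steps=5000, dt=0.1, grad=1.0):
    """Final x-displacement of a 2D walker in a linear attractant field
    c(x, y) = grad * x, with tumbling suppressed while c is increasing."""
    pos, ang, c_prev = np.zeros(2), rng.uniform(0, 2*np.pi), 0.0
    for _ in range(steps):
        pos += speed * dt * np.array([np.cos(ang), np.sin(ang)])
        c = grad * pos[0]
        p_tumble = 0.02 if c > c_prev else 0.2   # crude one-step memory
        if rng.random() < p_tumble:
            ang = rng.uniform(0, 2*np.pi)        # random new heading
        c_prev = c
    return pos[0]

for v in (20.0, 10.0, 5.0):  # um/s: free bacterium vs increasingly loaded bot
    drift = np.mean([drift_up_gradient(v) for _ in range(10)])
    print(f"speed {v:5.1f} um/s -> mean drift {drift:9.1f} um")
```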

pi

link (url) DOI [BibTex]