Learning to Dress 3D People in Generative Clothing

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
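The "additional term on SMPL" idea reduces, at the mesh level, to adding a pose- and clothing-conditioned displacement to every body vertex. A minimal numpy sketch of that composition (shapes and names are illustrative assumptions, not CAPE's actual API):

```python
import numpy as np

N_VERTS = 6890  # SMPL mesh resolution (vertices)

def dress_body(body_verts, clothing_disp):
    # Clothing modeled as an additive per-vertex offset on the SMPL body;
    # in CAPE these offsets would come from the Mesh-VAE-GAN decoder.
    assert body_verts.shape == clothing_disp.shape == (N_VERTS, 3)
    return body_verts + clothing_disp

body = np.zeros((N_VERTS, 3))  # stand-in for SMPL(pose, shape) vertices
disp = 0.01 * np.random.default_rng(0).standard_normal((N_VERTS, 3))
clothed = dress_body(body, disp)
```

Because the clothing term is additive, the same sampled displacement field can dress different body shapes, which is what lets the model draw clothing samples independently of the underlying body.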

ps

arxiv project page [BibTex]


Generating 3D People in Scenes without People

Zhang, Y., Hassan, M., Neumann, H., Black, M. J., Tang, S.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
We present a fully-automatic system that takes a 3D scene and generates plausible 3D human bodies that are posed naturally in that 3D scene. Given a 3D scene without people, humans can easily imagine how people could interact with the scene and the objects in it. However, this is a challenging task for a computer, as solving it requires that (1) the generated human bodies be semantically plausible within the 3D environment, e.g. people sitting on the sofa or cooking near the stove; and (2) the generated human-scene interaction be physically feasible, in the sense that the human body and scene do not interpenetrate while, at the same time, body-scene contact supports physical interactions. To that end, we make use of the surface-based 3D human model SMPL-X. We first train a conditional variational autoencoder to predict semantically plausible 3D human pose conditioned on latent scene representations, then we further refine the generated 3D bodies using scene constraints to enforce feasible physical interaction. We show that our approach is able to synthesize realistic and expressive 3D human bodies that naturally interact with the 3D environment. We perform extensive experiments demonstrating that our generative framework compares favorably with existing methods, both qualitatively and quantitatively. We believe that our scene-conditioned 3D human generation pipeline will be useful for numerous applications, e.g. to generate training data for human pose estimation, in video games, and in VR/AR.
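The two-stage recipe (sample a body from a scene-conditioned cVAE, then refine it against scene geometry) can be illustrated with a toy penalty on a signed distance field; the `sdf` stand-in and the weights below are hypothetical, not the paper's actual loss:

```python
import numpy as np

def scene_refinement_loss(body_verts, sdf, w_pen=1.0, w_con=0.1):
    # sdf maps (n, 3) points to signed distances: negative inside geometry.
    d = sdf(body_verts)
    penetration = np.sum(np.minimum(d, 0.0) ** 2)  # body must not enter the scene
    contact = np.min(np.abs(d))                    # but should touch it somewhere
    return w_pen * penetration + w_con * contact

# Toy scene: a floor plane at z = 0, so distance is just the z coordinate.
floor_sdf = lambda p: p[:, 2]
verts = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 0.0]])  # one vertex on the floor
loss = scene_refinement_loss(verts, floor_sdf)
```

Minimizing the penetration term while keeping the contact term small is the intuition behind "feasible physical interaction": no interpenetration, but genuine body-scene contact.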

ps

PDF link (url) [BibTex]



Learning Physics-guided Face Relighting under Directional Light

Nestmeyer, T., Lalonde, J., Matthews, I., Lehrmann, A. M.

In Conference on Computer Vision and Pattern Recognition, IEEE/CVF, June 2020 (inproceedings) Accepted

Abstract
Relighting is an essential step in realistically transferring objects from a captured image into another environment. For example, authentic telepresence in Augmented Reality requires faces to be displayed and relit consistent with the observer's scene lighting. We investigate end-to-end deep learning architectures that both de-light and relight an image of a human face. Our model decomposes the input image into intrinsic components according to a diffuse physics-based image formation model. We enable non-diffuse effects including cast shadows and specular highlights by predicting a residual correction to the diffuse render. To train and evaluate our model, we collected a portrait database of 21 subjects with various expressions and poses. Each sample is captured in a controlled light stage setup with 32 individual light sources. Our method creates precise and believable relighting results and generalizes to complex illumination conditions and challenging poses, including when the subject is not looking straight at the camera.

ps

Paper [BibTex]



VIBE: Video Inference for Human Body Pose and Shape Estimation

Kocabas, M., Athanasiou, N., Black, M. J.

In Computer Vision and Pattern Recognition (CVPR), June 2020 (inproceedings)

Abstract
Human motion is fundamental to understanding behavior. Despite progress on single-image 3D pose and shape estimation, existing video-based state-of-the-art methods fail to produce accurate and natural motion sequences due to a lack of ground-truth 3D motion data for training. To address this problem, we propose “Video Inference for Body Pose and Shape Estimation” (VIBE), which makes use of an existing large-scale motion capture dataset (AMASS) together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty is an adversarial learning framework that leverages AMASS to discriminate between real human motions and those produced by our temporal pose and shape regression networks. We define a temporal network architecture and show that adversarial training, at the sequence level, produces kinematically plausible motion sequences without in-the-wild ground-truth 3D labels. We perform extensive experimentation to analyze the importance of motion and demonstrate the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving state-of-the-art performance. Code and pretrained models are available at https://github.com/mkocabas/VIBE
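The sequence-level adversarial objective amounts to standard GAN losses in which the discriminator scores whole motion sequences: real ones from AMASS versus those produced by the regressor. A minimal numpy sketch (the discriminator itself is a stand-in, not VIBE's network):

```python
import numpy as np

def bce(scores, target):
    # Binary cross-entropy on discriminator scores in (0, 1).
    eps = 1e-7
    s = np.clip(scores, eps, 1 - eps)
    return -np.mean(target * np.log(s) + (1 - target) * np.log(1 - s))

def motion_gan_losses(d_real, d_fake):
    # d_real: scores for AMASS mocap sequences; d_fake: scores for motion
    # sequences produced by the temporal pose/shape regressor.
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)  # the regressor is rewarded for fooling D
    return d_loss, g_loss

d_loss, g_loss = motion_gan_losses(np.array([0.9, 0.8]), np.array([0.2, 0.1]))
```

Because the discriminator only ever sees sequences, the regressor is pushed toward kinematically plausible motion even though no in-the-wild 3D labels exist.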

ps

arXiv code [BibTex]



From Variational to Deterministic Autoencoders

Ghosh*, P., Sajjadi*, M. S. M., Vergari, A., Black, M. J., Schölkopf, B.

8th International Conference on Learning Representations (ICLR), April 2020, *equal contribution (conference) Accepted

Abstract
Variational Autoencoders (VAEs) provide a theoretically-backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise to the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders as well as to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
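The ex-post density estimation step is simple in practice: after training a deterministic autoencoder, fit a density to the latent codes of the training set and sample from it to generate. A sketch using a single full-covariance Gaussian (the paper also considers richer estimators such as mixtures; the names below are illustrative):

```python
import numpy as np

def fit_latent_density(z):
    # Ex-post density estimate over latent codes z of shape (n, d):
    # here, mean and full covariance of a single Gaussian.
    return z.mean(axis=0), np.cov(z, rowvar=False)

def sample_latents(mu, cov, n, seed=None):
    # Draw new codes to feed through the (deterministic) decoder.
    return np.random.default_rng(seed).multivariate_normal(mu, cov, size=n)

rng = np.random.default_rng(0)
z_train = rng.normal(size=(1000, 8))  # stand-in for encoder outputs
mu, cov = fit_latent_density(z_train)
z_new = sample_latents(mu, cov, 16, seed=1)
```

The same trick can be bolted onto an already-trained VAE: re-estimating the latent density ex post often yields better samples than drawing from the fixed prior.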

ei ps

arXiv [BibTex]



Gripping apparatus and method of producing a gripping apparatus

Song, S., Sitti, M., Drotlef, D., Majidi, C.

Google Patents, February 2020, US Patent App. 16/610,209 (patent)

Abstract
The present invention relates to a gripping apparatus comprising a membrane; a flexible housing; with said membrane being fixedly connected to a periphery of the housing. The invention further relates to a method of producing a gripping apparatus.

pi

[BibTex]



Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations

Rueegg, N., Lassner, C., Black, M. J., Schindler, K.

In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), February 2020 (inproceedings)

Abstract
The goal of many computer vision systems is to transform image pixels into 3D representations. Recent popular models use neural networks to regress directly from pixels to 3D object parameters. Such an approach works well when supervision is available, but in problems like human pose and shape estimation, it is difficult to obtain natural images with 3D ground truth. To go one step further, we propose a new architecture that facilitates unsupervised, or lightly supervised, learning. The idea is to break the problem into a series of transformations between increasingly abstract representations. Each step involves a cycle designed to be learnable without annotated training data, and the chain of cycles delivers the final solution. Specifically, we use 2D body part segments as an intermediate representation that contains enough information to be lifted to 3D, and at the same time is simple enough to be learned in an unsupervised way. We demonstrate the method by learning 3D human pose and shape from un-paired and un-annotated images. We also explore varying amounts of paired data and show that cycling greatly alleviates the need for paired data. While we present results for modeling humans, our formulation is general and can be applied to other vision problems.
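Each stage in the chain is trained with a cycle objective: map to the more abstract representation and back, and penalize the reconstruction error, which needs no annotations. A toy numpy version of one such cycle (`f` and `g` are stand-ins for the learned mappings, not the paper's networks):

```python
import numpy as np

def cycle_loss(x, f, g):
    # Forward-backward cycle: f lifts a representation (e.g. pixels ->
    # 2D part segments), g maps back down; train so that g(f(x)) == x.
    return np.mean((g(f(x)) - x) ** 2)

# Toy pair of exactly inverse maps: the cycle loss is then zero.
f = lambda x: 2.0 * x + 1.0
g = lambda y: (y - 1.0) / 2.0
x = np.linspace(0.0, 1.0, 5)
loss = cycle_loss(x, f, g)
```

Chaining several such cycles, each between adjacent representations, is what lets the final pixels-to-3D mapping be learned from un-paired, un-annotated images.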

ps

pdf [BibTex]



Learning Multi-Human Optical Flow

Ranjan, A., Hoffmann, D. T., Tzionas, D., Tang, S., Romero, J., Black, M. J.

International Journal of Computer Vision (IJCV), January 2020 (article)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Recent optical flow methods focus on training deep networks to approach the problem. However, the training data used by them does not cover the domain of human motion. Therefore, we develop a dataset of multi-human optical flow and train optical flow networks on this dataset. We use a 3D model of the human body and motion capture data to synthesize realistic flow fields in both single-and multi-person images. We then train optical flow networks to estimate human flow fields from pairs of images. We demonstrate that our trained networks are more accurate than a wide range of top methods on held-out test data and that they can generalize well to real image sequences. The code, trained models and the dataset are available for research.

ps

Paper Publisher Version poster link (url) DOI [BibTex]


Method of actuating a shape changeable member, shape changeable member and actuating system

Hu, W., Lum, G. Z., Mastrangeli, M., Sitti, M.

Google Patents, January 2020, US Patent App. 16/477,593 (patent)

Abstract
The present invention relates to a method of actuating a shape changeable member of actuatable material. The invention further relates to a shape changeable member and to a system comprising such a shape changeable member and a magnetic field apparatus.

pi

[BibTex]


Thermal Effects on the Crystallization Kinetics, and Interfacial Adhesion of Single-Crystal Phase-Change Gallium

Yunusa, M., Lahlou, A., Sitti, M.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Although substrates play an important role in the crystallization of supercooled liquids, the influence of surface temperature and thermal properties has remained elusive. Here, the crystallization of supercooled phase-change gallium (Ga) on substrates with different thermal conductivities is studied. The effect of interfacial temperature on the crystallization kinetics, which dictates thermo-mechanical stresses between the substrate and the crystallized Ga, is investigated. At an elevated surface temperature, close to the melting point of Ga, extended single-crystal growth of Ga on dielectric substrates due to layering effects and annealing is realized without the application of external fields. Adhesive strength at the interfaces depends on the thermal conductivity and initial surface temperature of the substrates. This insight can be applied to other liquid metals for industrial applications and sheds more light on phase-change memory crystallization.

pi

[BibTex]


Nanoerythrosome-functionalized biohybrid microswimmers

Nicole, Oncay, Yunus, Birgul, Metin Sitti

2020 (article) Accepted

pi

[BibTex]



Injectable Nanoelectrodes Enable Wireless Deep Brain Stimulation of Native Tissue in Freely Moving Mice

Kozielski, K. L., Jahanshahi, A., Gilbert, H. B., Yu, Y., Erin, O., Francisco, D., Alosaimi, F., Temel, Y., Sitti, M.

bioRxiv, Cold Spring Harbor Laboratory, 2020 (article)

pi

[BibTex]



Statistical reprogramming of macroscopic self-assembly with dynamic boundaries

Utku, Massimo, Zoey, Sitti

2020 (article) Accepted

pi

[BibTex]



Controlling two-dimensional collective formation and cooperative behavior of magnetic microrobot swarms

Dong, X., Sitti, M.

The International Journal of Robotics Research, 2020 (article)

Abstract
Magnetically actuated mobile microrobots can access distant, enclosed, and small spaces, such as inside microfluidic channels and the human body, making them appealing for minimally invasive tasks. Despite their simplicity when scaling down, creating collective microrobots that can work closely and cooperatively, as well as reconfigure their formations for different tasks, would significantly enhance their capabilities such as manipulation of objects. However, a challenge of realizing such cooperative magnetic microrobots is to program and reconfigure their formations and collective motions with under-actuated control signals. This article presents a method of controlling 2D static and time-varying formations among collective self-repelling ferromagnetic microrobots (100 μm to 350 μm in diameter, up to 260 in number) by spatially and temporally programming an external magnetic potential energy distribution at the air–water interface or on solid surfaces. A general design method is introduced to program external magnetic potential energy using ferromagnets. A predictive model of the collective system is also presented to predict the formation and guide the design procedure. With the proposed method, versatile complex static formations are experimentally demonstrated and the programmability and scaling effects of formations are analyzed. We also demonstrate the collective mobility of these magnetic microrobots by controlling them to exhibit bio-inspired collective behaviors such as aggregation, directional motion with arbitrary swarm headings, and rotational swarming motion. Finally, the functions of the produced microrobotic swarm are demonstrated by controlling them to navigate through cluttered environments and complete reconfigurable cooperative manipulation tasks.

pi

DOI [BibTex]


Analytical classical density functionals from an equation learning network

Lin, S., Martius, G., Oettel, M.

The Journal of Chemical Physics, 152(2):021102, 2020, arXiv preprint https://arxiv.org/abs/1910.12752 (article)

al

Preprint_PDF DOI [BibTex]



Characterization and Thermal Management of a DC Motor-Driven Resonant Actuator for Miniature Mobile Robots with Oscillating Limbs

Colmenares, D., Kania, R., Liu, M., Sitti, M.

arXiv preprint arXiv:2002.00798, 2020 (article)

Abstract
In this paper, we characterize the performance of and develop thermal management solutions for a DC motor-driven resonant actuator developed for flapping wing micro air vehicles. The actuator, a DC micro-gearmotor connected in parallel with a torsional spring, drives reciprocal wing motion. Compared to the gearmotor alone, this design increased torque and power density by 161.1% and 666.8%, respectively, while decreasing the drawn current by 25.8%. Characterization of the actuator, isolated from nonlinear aerodynamic loading, results in standard metrics directly comparable to other actuators. The micro-motor, selected for low weight considerations, operates at high power for limited duration due to thermal effects. To predict system performance, a lumped parameter thermal circuit model was developed. Critical model parameters for this micro-motor, two orders of magnitude smaller than those previously characterized, were identified experimentally. This included the effects of variable winding resistance, bushing friction, speed-dependent forced convection, and the addition of a heatsink. The model was then used to determine a safe operation envelope for the vehicle and to design a weight-optimal heatsink. This actuator design and thermal modeling approach could be applied more generally to improve the performance of any miniature mobile robot or device with motor-driven oscillating limbs or loads.
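The lumped-parameter thermal circuit mentioned above is a first-order RC model: C_th dT/dt = P − (T − T_amb)/R_th. A forward-Euler sketch with placeholder parameter values (not the paper's measured ones, which also include variable winding resistance and speed-dependent convection):

```python
import numpy as np

def simulate_winding_temp(power_w, r_th, c_th, t_amb=25.0, dt=0.1, steps=10000):
    # One thermal node: heat flows in from dissipated electrical power and
    # out through a conduction/convection resistance r_th to ambient.
    temps = np.empty(steps)
    temp = t_amb
    for i in range(steps):
        temp += dt * (power_w - (temp - t_amb) / r_th) / c_th
        temps[i] = temp
    return temps

# Placeholder values: 2 W dissipated, 30 K/W to ambient, 5 J/K capacitance.
temps = simulate_winding_temp(power_w=2.0, r_th=30.0, c_th=5.0)
# Steady state approaches t_amb + power_w * r_th = 85 degrees C.
```

A safe-operation envelope then follows directly: given a maximum winding temperature, the model bounds how long the motor may run at a given power, and adding a heatsink lowers r_th.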

pi

[BibTex]


Magnetic Resonance Imaging System–Driven Medical Robotics

Erin, O., Boyvat, M., Tiryaki, M. E., Phelan, M., Sitti, M.

Advanced Intelligent Systems, 2, Wiley Online Library, 2020 (article)

Abstract
Magnetic resonance imaging (MRI) system–driven medical robotics is an emerging field that aims to use clinical MRI systems not only for medical imaging but also for actuation, localization, and control of medical robots. Submillimeter scale resolution of MR images for soft tissues combined with the electromagnetic gradient coil–based magnetic actuation available inside MR scanners can enable theranostic applications of medical robots for precise image‐guided minimally invasive interventions. MRI‐driven robotics typically does not introduce new MRI instrumentation for actuation but instead focuses on converting already available instrumentation for robotic purposes. To use the advantages of this technology, various medical devices such as untethered mobile magnetic robots and tethered active catheters have been designed to be powered magnetically inside MRI systems. Herein, the state‐of‐the‐art progress, challenges, and future directions of MRI‐driven medical robotic systems are reviewed.

pi

[BibTex]



Pros and Cons: Magnetic versus Optical Microrobots

Sitti, M., Wiersma, D. S.

Advanced Materials, Wiley Online Library, 2020 (article)

Abstract
Mobile microrobotics has emerged as a new robotics field within the last decade to create untethered tiny robots that can access and operate in unprecedented, dangerous, or hard‐to‐reach small spaces noninvasively toward disruptive medical, biotechnology, desktop manufacturing, environmental remediation, and other potential applications. Magnetic and optical actuation methods are the most widely used actuation methods in mobile microrobotics currently, in addition to acoustic and biological (cell‐driven) actuation approaches. The pros and cons of these actuation methods are reported here, depending on the given context. They can both enable long‐range, fast, and precise actuation of single or a large number of microrobots in diverse environments. Magnetic actuation has unique potential for medical applications of microrobots inside nontransparent tissues at high penetration depths, while optical actuation is suitable for more biotechnology, lab‐/organ‐on‐a‐chip, and desktop manufacturing types of applications with much less surface penetration depth requirements or with transparent environments. Combining both methods in new robot designs can have a strong potential of combining the pros of both methods. There is still much progress needed in both actuation methods to realize the potential disruptive applications of mobile microrobots in real‐world conditions.

pi

[BibTex]



Selectively Controlled Magnetic Microrobots with Opposing Helices

Giltinan, J., Katsamba, P., Wang, W., Lauga, E., Sitti, M.

Applied Physics Letters, 116, AIP Publishing LLC, 2020 (article)

pi

[BibTex]



General Movement Assessment from videos of computed 3D infant body models is equally effective compared to conventional RGB video rating

Schroeder, S., Hesse, N., Weinberger, R., Tacke, U., Gerstl, L., Hilgendorff, A., Heinen, F., Arens, M., Bodensteiner, C., Dijkstra, L. J., Pujades, S., Black, M., Hadders-Algra, M.

Early Human Development, 2020 (article)

Abstract
Background: General Movement Assessment (GMA) is a powerful tool to predict Cerebral Palsy (CP). Yet, GMA requires substantial training hampering its implementation in clinical routine. This inspired a world-wide quest for automated GMA. Aim: To test whether a low-cost, marker-less system for three-dimensional motion capture from RGB depth sequences using a whole body infant model may serve as the basis for automated GMA. Study design: Clinical case study at an academic neurodevelopmental outpatient clinic. Subjects: Twenty-nine high-risk infants were recruited and assessed at their clinical follow-up at 2-4 month corrected age (CA). Their neurodevelopmental outcome was assessed regularly up to 12-31 months CA. Outcome measures: GMA according to Hadders-Algra by a masked GMA-expert of conventional and computed 3D body model (“SMIL motion”) videos of the same GMs. Agreement between both GMAs was assessed, and sensitivity and specificity of both methods to predict CP at ≥12 months CA. Results: The agreement of the two GMA ratings was substantial, with κ=0.66 for the classification of definitely abnormal (DA) GMs and an ICC of 0.887 (95% CI 0.762;0.947) for a more detailed GM-scoring. Five children were diagnosed with CP (four bilateral, one unilateral CP). The GMs of the child with unilateral CP were twice rated as mildly abnormal. DA-ratings of both videos predicted bilateral CP well: sensitivity 75% and 100%, specificity 88% and 92% for conventional and SMIL motion videos, respectively. Conclusions: Our computed infant 3D full body model is an attractive starting point for automated GMA in infants at risk of CP.
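The agreement statistic reported here, Cohen's κ, measures how often two raters agree beyond what chance alone predicts. A compact numpy implementation for nominal ratings (illustrative, not the study's analysis code):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_obs = np.mean(a == b)
    p_chance = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Hypothetical example: two raters scoring 6 videos as
# normal (0) vs. definitely abnormal (1).
kappa = cohens_kappa([0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1])
```

Values around 0.6-0.8, like the κ=0.66 reported above, are conventionally read as "substantial" agreement.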

ps

[BibTex]



Acoustically powered surface-slipping mobile microrobots

Aghakhani, A., Yasa, O., Wrede, P., Sitti, M.

Proceedings of the National Academy of Sciences, 117, National Acad Sciences, 2020 (article)

Abstract
Untethered synthetic microrobots have significant potential to revolutionize minimally invasive medical interventions in the future. However, their relatively slow speed and low controllability near surfaces typically are some of the barriers standing in the way of their medical applications. Here, we introduce acoustically powered microrobots with a fast, unidirectional surface-slipping locomotion on both flat and curved surfaces. The proposed three-dimensionally printed, bullet-shaped microrobot contains a spherical air bubble trapped inside its internal body cavity, where the bubble is resonated using acoustic waves. The net fluidic flow due to the bubble oscillation orients the microrobot's axisymmetric axis perpendicular to the wall and then propels it laterally at very high speeds (up to 90 body lengths per second with a body length of 25 µm) while inducing an attractive force toward the wall. To achieve unidirectional locomotion, a small fin is added to the microrobot’s cylindrical body surface, which biases the propulsion direction. For motion direction control, the microrobots are coated anisotropically with a soft magnetic nanofilm layer, allowing steering under a uniform magnetic field. Finally, surface locomotion capability of the microrobots is demonstrated inside a three-dimensional circular cross-sectional microchannel under acoustic actuation. Overall, the combination of acoustic powering and magnetic steering can be effectively utilized to actuate and navigate these microrobots in confined and hard-to-reach body location areas in a minimally invasive fashion.

pi

[BibTex]



Bio-inspired Flexible Twisting Wings Increase Lift and Efficiency of a Flapping Wing Micro Air Vehicle

Colmenares, D., Kania, R., Zhang, W., Sitti, M.

arXiv preprint arXiv:2001.11586, 2020 (article)

Abstract
We investigate the effect of wing twist flexibility on lift and efficiency of a flapping-wing micro air vehicle capable of liftoff. Wings used previously were chosen to be fully rigid due to modeling and fabrication constraints. However, biological wings are highly flexible and other micro air vehicles have successfully utilized flexible wing structures for specialized tasks. The goal of our study is to determine if dynamic twisting of flexible wings can increase overall aerodynamic lift and efficiency. A flexible twisting wing design was found to increase aerodynamic efficiency by 41.3%, translational lift production by 35.3%, and the effective lift coefficient by 63.7% compared to the rigid-wing design. These results exceed the predictions of quasi-steady blade element models, indicating the need for unsteady computational fluid dynamics simulations of twisted flapping wings.

pi

[BibTex]



Cohesive self-organization of mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

arXiv preprint arXiv:1907.05856, 2020 (article)

pi

[BibTex]



Multifunctional Surface Microrollers for Targeted Cargo Delivery in Physiological Blood Flow

Yunus, Ugur, Alp, Metin

2020 (article) Accepted

pi

[BibTex]


Bioinspired underwater locomotion of light-driven liquid crystal gels

Shahsavan, H., Aghakhani, A., Zeng, H., Guo, Y., Davidson, Z. S., Priimagi, A., Sitti, M.

Proceedings of the National Academy of Sciences, National Acad Sciences, 2020 (article)

Abstract
Untethered dynamic shape programming and control of soft materials have significant applications in technologies such as soft robots, medical devices, organ-on-a-chip, and optical devices. Here, we present a solution to remotely actuate and move soft materials underwater in a fast, efficient, and controlled manner using photoresponsive liquid crystal gels (LCGs). LCG constructs with engineered molecular alignment show a low and sharp phase-transition temperature and experience considerable density reduction by light exposure, thereby allowing rapid and reversible shape changes. We demonstrate different modes of underwater locomotion, such as crawling, walking, jumping, and swimming, by localized and time-varying illumination of LCGs. The diverse locomotion modes of smart LCGs can provide a new toolbox for designing efficient light-fueled soft robots in fluid-immersed media.

pi

[BibTex]



Differentiation of blackbox combinatorial solvers

Vlastelica, M., Paulus, A., Musil, V., Martius, G., Rolı́nek, M.

In International Conference on Learning Representations, ICLR’20, 2020 (incollection)

al

link (url) [BibTex]



Additive manufacturing of cellulose-based materials with continuous, multidirectional stiffness gradients

Giachini, P., Gupta, S., Wang, W., Wood, D., Yunusa, M., Baharlou, E., Sitti, M., Menges, A.

Science Advances, 6, American Association for the Advancement of Science, 2020 (article)

Abstract
Functionally graded materials (FGMs) enable applications in fields such as biomedicine and architecture, but their fabrication suffers from shortcomings in gradient continuity, interfacial bonding, and directional freedom. In addition, most commercial design software fail to incorporate property gradient data, hindering explorations of the design space of FGMs. Here, we leveraged a combined approach of materials engineering and digital processing to enable extrusion-based multimaterial additive manufacturing of cellulose-based tunable viscoelastic materials with continuous, high-contrast, and multidirectional stiffness gradients. A method to engineer sets of cellulose-based materials with similar compositions, yet distinct mechanical and rheological properties, was established. In parallel, a digital workflow was developed to embed gradient information into design models with integrated fabrication path planning. The payoff of integrating these physical and digital tools is the ability to achieve the same stiffness gradient in multiple ways, opening design possibilities previously limited by the rigid coupling of material and geometry.

pi

[BibTex]


2014


Series of Multilinked Caterpillar Track-type Climbing Robots

Lee, G., Kim, H., Seo, K., Kim, J., Sitti, M., Seo, T.

Journal of Field Robotics, November 2014 (article)

Abstract
Climbing robots have been widely applied in many industries involving hard-to-access, dangerous, or hazardous environments to replace human workers. Climbing speed, payload capacity, the ability to overcome obstacles, and wall-to-wall transitioning are significant characteristics of climbing robots. Here, multilinked track wheel-type climbing robots are proposed to enhance these characteristics. The robots have been developed over five years in collaboration among three universities: Seoul National University, Carnegie Mellon University, and Yeungnam University. Four types of robots are presented for different applications, with different surface-attachment methods and mechanisms: MultiTank for indoor sites, the flexible caterpillar robot (FCR) and Combot for heavy industrial sites, and MultiTrack for high-rise buildings. The method of surface attachment differs for each robot and application, and the joints between links are designed as active or passive according to the requirements of a given robot. Conceptual design, practical design, and control issues of these climbing robot types are reported; a proper choice of attachment method and joint type is essential for a successful multilink track wheel-type climbing robot, given the surface material, robot size, and computational costs.

pi

DOI [BibTex]



Advanced Structured Prediction

Nowozin, S., Gehler, P. V., Jancsary, J., Lampert, C. H.

Advanced Structured Prediction, pages: 432, Neural Information Processing Series, MIT Press, November 2014 (book)

Abstract
The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components. These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.

ps

publisher link (url) [BibTex]



MoSh: Motion and Shape Capture from Sparse Markers

Loper, M. M., Mahmood, N., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 33(6):220:1-220:13, ACM, New York, NY, USA, November 2014 (article)

Abstract
Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach, called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together using sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.

ps

pdf video data pdf from publisher link (url) DOI Project Page Project Page Project Page [BibTex]



Hough-based Object Detection with Grouped Features

Srikantha, A., Gall, J.

IEEE International Conference on Image Processing (ICIP), pages: 1653-1657, Paris, France, October 2014 (conference)

Abstract
Hough-based voting approaches have been successfully applied to object detection. While these methods can be efficiently implemented by random forests, they estimate the probability for an object hypothesis for each feature independently. In this work, we address this problem by grouping features in a local neighborhood to obtain a better estimate of the probability. To this end, we propose oblique classification-regression forests that combine features of different trees. We further investigate the benefit of combining independent and grouped features and evaluate the approach on RGB and RGB-D datasets.

ps

pdf poster DOI Project Page [BibTex]



Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works, we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY, which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise, and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.

avg ps

pdf DOI [BibTex]



Geckogripper: A soft, inflatable robotic gripper using gecko-inspired elastomer micro-fiber adhesives

Song, S., Majidi, C., Sitti, M.

In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages: 4624-4629, September 2014 (inproceedings)

Abstract
This paper proposes GeckoGripper, a novel soft, inflatable gripper based on the controllable adhesion mechanism of gecko-inspired micro-fiber adhesives, to pick and place complex and fragile non-planar or planar parts serially or in parallel. Unlike previous fibrillar structures that use peel angle to control the manipulation of parts, we developed an elastomer micro-fiber adhesive fabricated on a soft, flexible membrane, increasing adaptability to non-planar three-dimensional (3D) geometries and controllability of adhesion. The adhesive switching ratio (the ratio between the maximum and minimum adhesive forces) of the developed gripper was measured to be around 204, which is superior to previous peel-angle-based release control methods. The adhesion control mechanism, based on stretching the membrane, together with superior adaptability to non-planar 3D geometries, enables the micro-fibers to pick and place various 3D parts, as shown in demonstrations.

pi

DOI [BibTex]



Can I recognize my body’s weight? The influence of shape and texture on the perception of self

Piryankova, I., Stefanucci, J., Romero, J., de la Rosa, S., Black, M., Mohler, B.

ACM Transactions on Applied Perception for the Symposium on Applied Perception, 11(3):13:1-13:18, September 2014 (article)

Abstract
The goal of this research was to investigate women’s sensitivity to changes in their perceived weight by altering the body mass index (BMI) of the participants’ personalized avatars displayed on a large-screen immersive display. We created the personalized avatars with a full-body 3D scanner that records both the participants’ body geometry and texture. We altered the weight of the personalized avatars to produce changes in BMI while keeping height, arm length and inseam fixed and exploited the correlation between body geometry and anthropometric measurements encapsulated in a statistical body shape model created from thousands of body scans. In a 2x2 psychophysical experiment, we investigated the relative importance of visual cues, namely shape (own shape vs. an average female body shape with equivalent height and BMI to the participant) and texture (own photo-realistic texture or checkerboard pattern texture) on the ability to accurately perceive own current body weight (by asking them ‘Is the avatar the same weight as you?’). Our results indicate that shape (where height and BMI are fixed) had little effect on the perception of body weight. Interestingly, the participants perceived their body weight veridically when they saw their own photo-realistic texture and significantly underestimated their body weight when the avatar had a checkerboard patterned texture. The range that the participants accepted as their own current weight was approximately a 0.83 to −6.05 BMI% change tolerance range around their perceived weight. Both the shape and the texture had an effect on the reported similarity of the body parts and the whole avatar to the participant’s body. This work has implications for new measures for patients with body image disorders, as well as researchers interested in creating personalized avatars for games, training applications or virtual reality.

ps

pdf DOI Project Page Project Page [BibTex]



Human Pose Estimation with Fields of Parts

Kiefel, M., Gehler, P.

In Computer Vision – ECCV 2014, LNCS 8693, pages: 331-346, Lecture Notes in Computer Science, (Editors: Fleet, David and Pajdla, Tomas and Schiele, Bernt and Tuytelaars, Tinne), Springer, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
This paper proposes a new formulation of the human pose estimation problem. We present the Fields of Parts model, a binary Conditional Random Field model designed to detect human body parts of articulated people in single images. The Fields of Parts model is inspired by the idea of Pictorial Structures: it models local appearance and joint spatial configuration of the human body. However, the underlying graph structure is entirely different. The idea is simple: we model the presence and absence of a body part at every possible position, orientation, and scale in an image with a binary random variable. This results in a vast number of random variables; however, we show that approximate inference in this model is efficient. Moreover, we can encode the very same appearance and spatial structure as in Pictorial Structures models. This approach allows us to combine ideas from segmentation and pose estimation into a single model. The Fields of Parts model can use evidence from the background, include local color information, and it is connected more densely than a kinematic chain structure. On the challenging Leeds Sports Poses dataset we improve over the Pictorial Structures counterpart by 5.5% in terms of Average Precision of Keypoints (APK).

ei ps

website pdf DOI Project Page [BibTex]



Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points

Tzionas, D., Srikantha, A., Aponte, P., Gall, J.

In German Conference on Pattern Recognition (GCPR), pages: 1-13, Lecture Notes in Computer Science, Springer, GCPR, September 2014 (inproceedings)

Abstract
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite similarities, hand tracking proves to be more challenging, characterized by a higher dimensionality, severe occlusions and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, like hands in isolation or expensive multi-camera systems, that limit the practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.

ps

pdf Supplementary pdf Supplementary Material Project Page DOI Project Page [BibTex]



OpenDR: An Approximate Differentiable Renderer

Loper, M. M., Black, M. J.

In Computer Vision – ECCV 2014, 8695, pages: 154-169, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new autodifferentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.

ps

pdf Code Chumpy Supplementary video of talk DOI Project Page [BibTex]



Discovering Object Classes from Activities

Srikantha, A., Gall, J.

In European Conference on Computer Vision, 8694, pages: 415-430, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
In order to avoid an expensive manual labeling process, or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in these videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.

ps

pdf anno poster DOI Project Page [BibTex]



Probabilistic Progress Bars

Kiefel, M., Schuler, C., Hennig, P.

In Conference on Pattern Recognition (GCPR), 8753, pages: 331-341, Lecture Notes in Computer Science, (Editors: Jiang, X., Hornegger, J., and Koch, R.), Springer, GCPR, September 2014 (inproceedings)

Abstract
Predicting the time at which the integral over a stochastic process reaches a target level is a value of interest in many applications. Often, such computations have to be made at low cost, in real time. As an intuitive example that captures many features of this problem class, we choose progress bars, a ubiquitous element of computer user interfaces. These predictors are usually based on simple point estimators, with no error modelling. This leads to fluctuating behaviour that confuses the user. It also does not provide distribution predictions (risk values), which are crucial for many other application areas. We construct and empirically evaluate a fast, constant cost algorithm using a Gauss-Markov process model which provides more information to the user.

ei ps pn

website+code pdf DOI [BibTex]



Optical Flow Estimation with Channel Constancy

Sevilla-Lara, L., Sun, D., Learned-Miller, E. G., Black, M. J.

In Computer Vision – ECCV 2014, 8689, pages: 423-438, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Large motions remain a challenge for current optical flow algorithms. Traditionally, large motions are addressed using multi-resolution representations like Gaussian pyramids. To deal with large displacements, many pyramid levels are needed and, if an object is small, it may be invisible at the highest levels. To address this we decompose images using a channel representation (CR) and replace the standard brightness constancy assumption with a descriptor constancy assumption. CRs can be seen as an over-segmentation of the scene into layers based on some image feature. If the appearance of a foreground object differs from the background then its descriptor will be different and they will be represented in different layers. We create a pyramid by smoothing these layers, without mixing foreground and background or losing small objects. Our method estimates more accurate flow than the baseline on the MPI-Sintel benchmark, especially for fast motions and near motion boundaries.

ps

pdf DOI [BibTex]



Modeling Blurred Video with Layers

Wulff, J., Black, M. J.

In Computer Vision – ECCV 2014, 8694, pages: 236-252, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because here the blurred images are a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer's appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.

ps

pdf Supplemental Video Data DOI Project Page Project Page [BibTex]



Intrinsic Video

Kong, N., Gehler, P. V., Black, M. J.

In Computer Vision – ECCV 2014, 8690, pages: 360-375, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Intrinsic images such as albedo and shading are valuable for later stages of visual processing. Previous methods for extracting albedo and shading use either single images or images together with depth data. Instead, we define intrinsic video estimation as the problem of extracting temporally coherent albedo and shading from video alone. Our approach exploits the assumption that albedo is constant over time while shading changes slowly. Optical flow aids in the accurate estimation of intrinsic video by providing temporal continuity as well as putative surface boundaries. Additionally, we find that the estimated albedo sequence can be used to improve optical flow accuracy in sequences with changing illumination. The approach makes only weak assumptions about the scene and we show that it substantially outperforms existing single-frame intrinsic image methods. We evaluate this quantitatively on synthetic sequences as well as on challenging natural sequences with complex geometry, motion, and illumination.

ps

pdf Supplementary Video DOI Project Page Project Page [BibTex]



Automated Detection of New or Evolving Melanocytic Lesions Using a 3D Body Model

Bogo, F., Romero, J., Peserico, E., Black, M. J.

In Medical Image Computing and Computer-Assisted Intervention (MICCAI), 8673, pages: 593-600, Lecture Notes in Computer Science, (Editors: Golland, Polina and Hata, Nobuhiko and Barillot, Christian and Hornegger, Joachim and Howe, Robert), Springer International Publishing, Medical Image Computing and Computer-Assisted Intervention (MICCAI), September 2014 (inproceedings)

Abstract
Detection of new or rapidly evolving melanocytic lesions is crucial for early diagnosis and treatment of melanoma. We propose a fully automated pre-screening system for detecting new lesions or changes in existing ones, on the order of 2-3 mm, over almost the entire body surface. Our solution is based on a multi-camera 3D stereo system. The system captures 3D textured scans of a subject at different times and then brings these scans into correspondence by aligning them with a learned, parametric, non-rigid 3D body model. This means that captured skin textures are in accurate alignment across scans, facilitating the detection of new or changing lesions. The integration of lesion segmentation with a deformable 3D body model is a key contribution that makes our approach robust to changes in illumination and subject pose.

ps

pdf Poster DOI Project Page [BibTex]



Tracking using Multilevel Quantizations

Hong, Z., Wang, C., Mei, X., Prokhorov, D., Tao, D.

In Computer Vision – ECCV 2014, 8694, pages: 155-171, Lecture Notes in Computer Science, (Editors: D. Fleet and T. Pajdla and B. Schiele and T. Tuytelaars ), Springer International Publishing, 13th European Conference on Computer Vision, September 2014 (inproceedings)

Abstract
Most object tracking methods only exploit a single quantization of an image space: pixels, superpixels, or bounding boxes, each of which has advantages and disadvantages. It is highly unlikely that a common optimal quantization level, suitable for tracking all objects in all environments, exists. We therefore propose a hierarchical appearance representation model for tracking, based on a graphical model that exploits shared information across multiple quantization levels. The tracker aims to find the most probable position of the target by jointly classifying the pixels and superpixels and obtaining the best configuration across all levels. The motion of the bounding box is taken into consideration, while Online Random Forests are used to provide pixel- and superpixel-level quantizations and are progressively updated on-the-fly. By appropriately considering the multilevel quantizations, our tracker exhibits not only excellent performance in handling non-rigid object deformation, but also robustness to occlusions. A quantitative evaluation is conducted on two benchmark datasets: a non-rigid object tracking dataset (11 sequences) and the CVPR2013 tracking benchmark (50 sequences). Experimental results show that our tracker overcomes various tracking challenges and is superior to a number of other popular tracking methods.

ps

pdf DOI [BibTex]



Segmented molecular design of self-healing proteinaceous materials.

Sariola, V., Pena-Francesch, A., Jung, H., Çetinkaya, M., Pacheco, C., Sitti, M., Demirel, M. C.

Scientific reports, 5, pages: 13482-13482, Nature Publishing Group, July 2014 (article)

Abstract
Hierarchical assembly of self-healing adhesive proteins creates strong and robust structural and interfacial materials, but our understanding of the molecular design and structure–property relationships of structural proteins remains limited. Elucidating this relationship would allow rational design of next-generation genetically engineered self-healing structural proteins. Here we report a general self-healing and self-assembly strategy based on a multiphase recombinant protein-based material. The segmented structure of the protein shows soft glycine- and tyrosine-rich segments with self-healing capability and hard beta-sheet segments. The soft segments are strongly plasticized by water, lowering the self-healing temperature close to body temperature. The hard segments self-assemble into nanoconfined domains to reinforce the material. The healing strength scales sublinearly with contact time, which is associated with the diffusion and wetting of autohesion. The finding suggests that recombinant structural proteins from heterologous expression have potential as strong and repairable engineering materials.

pi

DOI [BibTex]



Breathing Life into Shape: Capturing, Modeling and Animating 3D Human Breathing

Tsoli, A., Mahmood, N., Black, M. J.

ACM Transactions on Graphics, (Proc. SIGGRAPH), 33(4):52:1-52:11, ACM, New York, NY, July 2014 (article)

Abstract
Modeling how the human body deforms during breathing is important for the realistic animation of lifelike 3D avatars. We learn a model of body shape deformations due to breathing for different breathing types and provide simple animation controls to render lifelike breathing regardless of body shape. We capture and align high-resolution 3D scans of 58 human subjects. We compute deviations from each subject’s mean shape during breathing, and study the statistics of such shape changes for different genders, body shapes, and breathing types. We use the volume of the registered scans as a proxy for lung volume and learn a novel non-linear model relating volume and breathing type to 3D shape deformations and pose changes. We then augment a SCAPE body model so that body shape is determined by identity, pose, and the parameters of the breathing model. These parameters provide an intuitive interface with which animators can synthesize 3D human avatars with realistic breathing motions. We also develop a novel interface for animating breathing using a spirometer, which measures the changes in breathing volume of a “breath actor.”

ps

pdf video link (url) DOI Project Page Project Page Project Page [BibTex]


Bio-Hybrid Cell-Based Actuators for Microsystems

Carlsen, R. W., Sitti, M.

Small, 10(19):3831-3851, June 2014 (article)

Abstract
As we move towards the miniaturization of devices to perform tasks at the nano and microscale, it has become increasingly important to develop new methods for actuation, sensing, and control. Over the past decade, bio-hybrid methods have been investigated as a promising new approach to overcome the challenges of scaling down robotic and other functional devices. These methods integrate biological cells with artificial components and therefore, can take advantage of the intrinsic actuation and sensing functionalities of biological cells. Here, the recent advancements in bio-hybrid actuation are reviewed, and the challenges associated with the design, fabrication, and control of bio-hybrid microsystems are discussed. As a case study, focus is put on the development of bacteria-driven microswimmers, which has been investigated as a targeted drug delivery carrier. Finally, a future outlook for the development of these systems is provided. The continued integration of biological and artificial components is envisioned to enable the performance of tasks at a smaller and smaller scale in the future, leading to the parallel and distributed operation of functional systems at the microscale.

pi

DOI [BibTex]
