

2011


Computational flow studies in a subject-specific human upper airway using a one-equation turbulence model. Influence of the nasal cavity

Prihambodo Saksono, Perumal Nithiarasu, Igor Sazonov, Si Yong Yeo

International Journal for Numerical Methods in Biomedical Engineering, 87(1-5):96–114, 2011 (article)

Abstract
This paper focuses on the impact of including the nasal cavity on airflow through the human upper respiratory tract. A computational study is carried out on a realistic geometry reconstructed from CT scans of a subject. The geometry includes the nasal cavity, pharynx, larynx, trachea and two generations of airway bifurcations below the trachea. The unstructured mesh generation procedure is discussed at some length because of the complex structure of the nasal cavity and the poor scan resolution normally available from hospitals. Fluid dynamics studies are carried out on the geometry with and without the nasal cavity. The characteristic-based split scheme, together with the one-equation Spalart–Allmaras turbulence model, is used in its explicit form to obtain steady-state flow solutions. The results reveal that excluding the nasal cavity significantly influences the solution; in particular, the location of recirculating flow in the trachea is dramatically different when the truncated geometry is used. We also address the differences in the solution due to equally distributed and proportionally distributed flow rates imposed at the inlets (both nares). The results show that the differences in flow pattern between the two inlet conditions are not confined to the nasal cavity and nasopharyngeal region but propagate down to the trachea.

ps

[BibTex]


Discrete Minimum Distortion Correspondence Problems for Non-rigid Shape Matching

Wang, C., Bronstein, M. M., Bronstein, A. M., Paragios, N.

In International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), 2011 (inproceedings)

ps

pdf [BibTex]


Viewpoint Invariant 3D Landmark Model Inference from Monocular 2D Images Using Higher-Order Priors

Wang, C., Zeng, Y., Simon, L., Kakadiaris, I., Samaras, D., Paragios, N.

In IEEE International Conference on Computer Vision (ICCV), 2011 (inproceedings)

ps

pdf [BibTex]


Correspondence estimation from non-rigid motion information

Wulff, J., Lotz, T., Stehle, T., Aach, T., Chase, J. G.

In Proc. SPIE, (Editors: B. M. Dawant, D. R. Haynor), SPIE, Medical Imaging: Image Processing, 2011 (inproceedings)

Abstract
The DIET (Digital Image Elasto Tomography) system is a novel approach to screening for breast cancer using only optical imaging of the surface of a vibrating breast. 3D tracking of skin surface motion without external markers is desirable. A novel approach to establishing point correspondences from pure skin images is presented here. Instead of intensity, motion is used as the primary feature, extracted using optical flow algorithms. When sequences of multiple frames are taken into account, this motion information alone is accurate and unambiguous enough to allow a 3D reconstruction of the breast surface. Two approaches to this correspondence estimation, direct and probabilistic, are presented, suited to different levels of calibration accuracy. Reconstructions show that the results obtained with these methods are comparable in accuracy to marker-based methods while considerably increasing resolution. The presented method has high potential for optical tissue deformation and motion sensing.
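
As an illustration of the motion-as-feature idea (a minimal sketch under assumptions, not the authors' DIET implementation), the snippet below stacks dense optical flow over a short sequence into a per-pixel motion signature and matches a pixel across two views by nearest signature; the sequences `frames_a`/`frames_b` and the choice of OpenCV's Farneback flow are hypothetical stand-ins.

```python
# Sketch only: motion signatures from dense optical flow, matched across views.
# Assumes OpenCV + NumPy; the inputs are hypothetical grayscale (uint8) image
# sequences of the same vibrating surface seen from two cameras.
import cv2
import numpy as np

def motion_signature(frames):
    """Stack per-pixel flow over all frame pairs into one descriptor per pixel."""
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, pyr_scale=0.5,
                                            levels=3, winsize=15, iterations=3,
                                            poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)                      # each flow is (H, W, 2)
    return np.concatenate(flows, axis=2)        # (H, W, 2*(T-1)) motion signature

def match_pixel(sig_a, sig_b, y, x):
    """Pixel in view B whose motion signature is closest to (y, x) in view A."""
    diff = sig_b - sig_a[y, x]
    cost = np.einsum('ijk,ijk->ij', diff, diff)
    return np.unravel_index(np.argmin(cost), cost.shape)
```

Matched pixel pairs would then be triangulated with the camera calibration to reconstruct the surface.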

ps

pdf link (url) DOI [BibTex]


Predicting Articulated Human Motion from Spatial Processes

Soren Hauberg, Kim S. Pedersen

International Journal of Computer Vision, 94, pages: 317-334, Springer Netherlands, 2011 (article)

ps

Publishers site Code Paper site PDF [BibTex]


An Empirical Study on the Performance of Spectral Manifold Learning Techniques

Peter Mysling, Soren Hauberg, Kim S. Pedersen

In Artificial Neural Networks and Machine Learning – ICANN 2011, 6791, pages: 347-354, Lecture Notes in Computer Science, (Editors: T. Honkela, W. Duch, M. Girolami, S. Kaski), Springer Berlin Heidelberg, 2011 (inproceedings)

ps

Publishers site PDF [BibTex]


Separation of visual object features and grasp strategy in primate ventral premotor cortex

Vargas-Irwin, C., Franquemont, L., Black, M., Donoghue, J.

Neural Control of Movement, 21st Annual Conference, 2011 (conference)

ps

[BibTex]

2010


Lack of Discriminatory Function for Endoscopy Skills on a Computer-based Simulator

Kim, S., Spencer, G., Makar, G., Ahmad, N., Jaffe, D., Ginsberg, G., Kuchenbecker, K. J., Kochman, M.

Surgical Endoscopy, 24(12):3008-3015, December 2010 (article)

hi

[BibTex]


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

ps

webpage pdf slides supplementary code [BibTex]


A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used; however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed, and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.
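
A minimal sketch of the eigen-clothing construction described in the abstract (assumed code, not the authors' implementation): deformations between corresponding naked and clothed contours are reduced with PCA, and each coefficient receives a Beta prior after being rescaled to (0, 1). Array shapes and names are hypothetical.

```python
# Sketch only: PCA "eigen clothing" with a Beta prior on the coefficients.
# body_contours / clothed_contours are hypothetical (M, 2N) arrays of
# corresponding contour points for M training examples.
import numpy as np
from scipy.stats import beta

def learn_eigen_clothing(body_contours, clothed_contours, n_components=5):
    deform = clothed_contours - body_contours            # clothing deformations
    mean_deform = deform.mean(axis=0)
    _, _, Vt = np.linalg.svd(deform - mean_deform, full_matrices=False)
    basis = Vt[:n_components]                            # eigen-clothing directions
    coeffs = (deform - mean_deform) @ basis.T            # (M, n_components)
    # Rescale each coefficient into (0, 1) and fit a Beta prior to it.
    lo, hi = coeffs.min(axis=0), coeffs.max(axis=0)
    scaled = (coeffs - lo) / (hi - lo + 1e-12) * 0.98 + 0.01
    priors = [beta.fit(col, floc=0, fscale=1) for col in scaled.T]
    return mean_deform, basis, priors

def dress(body_contour, mean_deform, basis, coeffs):
    """Synthesise a clothed contour from eigen-clothing coefficients."""
    return body_contour + mean_deform + coeffs @ basis
```

The skew of the training coefficients is what motivates a Beta fit in place of the usual Gaussian prior.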

ps

pdf data poster Project Page [BibTex]


Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

ps

pdf [BibTex]


Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

ps

pdf [BibTex]


Decoding complete reach and grasp actions from local primary motor cortex populations

(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))

Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., Black, M. J., Donoghue, J. P.

J. of Neuroscience, 39(29):9659-9669, July 2010 (article)

ps

pdf pdf from publisher Movie 1 Movie 2 Project Page [BibTex]


VerroTouch: High-Frequency Acceleration Feedback for Telerobotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Martin, P., Bohren, J., Mendoza, P. J., Lee, D. I.

Hands-on demonstration presented at EuroHaptics, Amsterdam, Netherlands, July 2010 (misc)

hi

[BibTex]


TexturePad: Realistic Rendering of Haptic Textures

Romano, J. M., Landin, N., McMahan, W., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Amsterdam, Netherlands, July 2010 (misc)

hi

[BibTex]


VerroTouch: High-Frequency Acceleration Feedback for Telerobotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Martin, P., Bohren, J., Mendoza, P. J., Lee, D. I.

In Haptics: Generating and Perceiving Tangible Sensations, Proc. EuroHaptics, Part I, 6191, pages: 189-196, Lecture Notes in Computer Science, Springer, Amsterdam, Netherlands, July 2010, Oral presentation given by Kuchenbecker (inproceedings)

hi

[BibTex]


Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

ps

project page pdf [BibTex]


Contour people: A parameterized model of 2D articulated human shape

Freifeld, O., Weiss, A., Zuffi, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, (CVPR), pages: 639-646, IEEE, June 2010 (inproceedings)

Abstract
We define a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts, form the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change and part rotation. The latter component also incorporates a learned non-rigid deformation model. The result is a 2D articulated model that is compact to represent, simple to compute with and more expressive than previous models. We demonstrate the value of such a model for 2D pose estimation and segmentation. Given an initial pose from a standard pictorial-structures method, we refine the pose and shape using an objective function that segments the scene into foreground and background regions. The result is a parametric, human-specific image segmentation.
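
The three-way factorization can be pictured with the toy sketch below (an illustration of the idea only, not the released Contour People code): a template contour is perturbed by a PCA shape term, each part is rotated about its joint, and a global viewpoint rotation and scale are applied last. All names and shapes are hypothetical, and the learned non-rigid deformation term is omitted.

```python
# Sketch only: compose shape variation, per-part rotation and viewpoint change
# on a 2D contour. template is (N, 2); part_ids assigns each point to a part.
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def deform_contour(template, shape_basis, shape_coeffs,
                   part_ids, part_angles, part_centers,
                   view_angle=0.0, scale=1.0):
    pts = template + (shape_coeffs @ shape_basis).reshape(-1, 2)   # shape variation
    for p, (angle, center) in enumerate(zip(part_angles, part_centers)):
        mask = part_ids == p
        pts[mask] = (pts[mask] - center) @ rot(angle).T + center   # part rotation
    return scale * pts @ rot(view_angle).T                          # viewpoint + scale
```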

ps

pdf slides video of CVPR talk Project Page [BibTex]


Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring spatially variant object motion. A principal challenge of this problem is how to estimate the point spread function (PSF) of the spatially variant blur. Based on a projective motion blur model, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method on several challenging image examples.
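
For intuition, the forward model that such an estimation inverts can be sketched as below (a hedged illustration, not the authors' code): a coded-exposure blur of a projectively moving object is an average of homography-warped copies of the sharp image over the chops during which the shutter was open. The shutter code and homography list are hypothetical inputs.

```python
# Sketch only: synthesize a coded-exposure, projective-motion blur by averaging
# homography-warped copies of a sharp image over the "open" shutter chops.
import cv2
import numpy as np

def coded_projective_blur(sharp, homographies, shutter_code):
    """sharp: (H, W) float image; homographies: list of 3x3 arrays, one per chop;
    shutter_code: binary sequence, 1 = shutter open during that chop."""
    H, W = sharp.shape
    acc = np.zeros_like(sharp, dtype=np.float64)
    n_open = 0
    for Hmat, is_open in zip(homographies, shutter_code):
        if is_open:
            acc += cv2.warpPerspective(sharp, Hmat.astype(np.float64), (W, H))
            n_open += 1
    return acc / max(n_open, 1)
```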

ps

Publisher site [BibTex]


Tracking people interacting with objects

Kjellstrom, H., Kragic, D., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages: 747-754, June 2010 (inproceedings)

ps

pdf Video [BibTex]


Secrets of optical flow estimation and their principles

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

ps

pdf Matlab code code copyright notice [BibTex]


Identifying the Role of Proprioception in Upper-Limb Prosthesis Control: Studies on Targeted Motion

Blank, A., Okamura, A. M., Kuchenbecker, K. J.

ACM Transactions on Applied Perception, 7(3):1-23, June 2010 (article)

hi

[BibTex]


Automatic Filter Design for Synthesis of Haptic Textures from Recorded Acceleration Data

Romano, J. M., Yoshioka, T., Kuchenbecker, K. J.

In Proc. IEEE International Conference on Robotics and Automation, pages: 1815-1821, Anchorage, Alaska, USA, May 2010, Oral presentation given by Romano (inproceedings)

hi

[BibTex]


Control of a High Fidelity Ungrounded Torque Feedback Device: The iTorqU 2.1

Winfree, K. N., Romano, J. M., Gewirtz, J., Kuchenbecker, K. J.

In Proc. IEEE International Conference on Robotics and Automation, pages: 1347-1352, Anchorage, Alaska, May 2010, Oral presentation given by Winfree (inproceedings)

hi

[BibTex]


Realistic Haptic Contacts and Textures for Tablet Computing

Romano, J. M., Kuchenbecker, K. J.

Hands-on demonstration presented at the Stanford Medical Innovation Conference on Medical Robotics, Stanford, California, April 2010 (misc)

hi

[BibTex]


High-Frequency Tactile Feedback for the da Vinci Surgical System

Standish, D., Gewirtz, J., McMahan, W., Martin, P., Kuchenbecker, K. J.

Hands-on demonstration presented at the Stanford Medical Innovation Conference on Medical Robotics, April 2010 (misc)

hi

[BibTex]


Guest editorial: State of the art in image- and video-based human pose and motion estimation

Sigal, L., Black, M. J.

International Journal of Computer Vision, 87(1):1-3, March 2010 (article)

ps

pdf from publisher [BibTex]


High-Frequency Tactile Feedback for the da Vinci Surgical System

Standish, D., Gewirtz, J., McMahan, W., Martin, P., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Waltham, Massachusetts, March 2010 (misc)

hi

[BibTex]


The Haptic Board

Jiang, Z., Bhoite, M., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Waltham, Massachusetts, USA, March 2010 (misc)

hi

[BibTex]


High Frequency Acceleration Feedback Significantly Increases the Realism of Haptically Rendered Textured Surfaces

McMahan, W., Romano, J. M., Rahuman, A. M. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 141-148, Waltham, Massachusetts, March 2010, Oral presentation given by McMahan (inproceedings)

hi

[BibTex]


Spatially distributed tactile feedback for kinesthetic motion guidance

Kapur, P., Jensen, M., Buxbaum, L. J., Jax, S. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 519-526, Waltham, Massachusetts, USA, March 2010, Poster presentation given by Kapur. Finalist for Best Poster Award (inproceedings)

hi

[BibTex]


HumanEva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion

Sigal, L., Balan, A., Black, M. J.

International Journal of Computer Vision, 87(1):4-27, Springer Netherlands, March 2010 (article)

Abstract
While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.
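
For concreteness, the standard 3D error measure of this kind can be sketched as follows (an assumed simplification, not the released HumanEva evaluation code): the mean Euclidean distance between estimated and ground-truth joint positions, averaged over joints and frames.

```python
# Sketch only: mean per-joint 3D position error over a sequence.
import numpy as np

def mean_joint_error(pred, gt):
    """pred, gt: (T, J, 3) joint positions (e.g. in mm); returns the mean error."""
    return np.linalg.norm(pred - gt, axis=2).mean()
```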

ps

pdf pdf from publisher [BibTex]


Tactile Gaming Vest (TGV)

Palan, S., Wang, R., Naukam, N., Li, E., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Waltham, Massachusetts, March 2010 (misc)

hi

[BibTex]


Realistic Haptic Contacts and Textures for Tablet Computing

Romano, J. M., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Waltham, Massachusetts, March 2010, Best Teaser Award (misc)

hi

[BibTex]


GPU-Based Haptic Rendering of 3D Smoke

Yang, M., Lu, J., Safonova, A., Kuchenbecker, K. J.

Hands-on demonstration presented at IEEE Haptics Symposium, Waltham, Massachusetts, March 2010 (misc)

hi

[BibTex]


Dimensional Reduction of High-Frequency Accelerations for Haptic Rendering

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

In Haptics: Generating and Perceiving Tangible Sensations: Part II (Proceedings of EuroHaptics), 6192, pages: 79-86, Lecture Notes in Computer Science, Springer, Amsterdam, Netherlands, 2010, Poster presentation given by Landin (inproceedings)

hi

[BibTex]


Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie (Model-Based Real-Time Motion Estimation in Fluorescence Endoscopy)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, Bildverarbeitung für die Medizin, 2010 (inproceedings)

ps

pdf [BibTex]


VerroTouch: A Vibrotactile Feedback System for Minimally Invasive Robotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Bohren, J., Martin, P., Wedmid, A., Mendoza, P. J., Lee, D. I.

In Proc. 28th World Congress of Endourology, 2010, PS8-14. Poster presentation given by Wedmid (inproceedings)

hi

[BibTex]


Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities and, in some cases, dynamic change; the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle these challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints and can therefore handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.
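
To make the inference step concrete, here is a generic min-sum loopy belief propagation sketch on a 4-connected grid (an illustration only; the paper's actual stripe-segment graph, potentials and geometric constraints differ). `unary` holds hypothetical per-node costs for assigning each of L projected-stripe labels, and the pairwise term simply prefers neighbouring nodes to take nearby labels.

```python
# Sketch only: min-sum loopy BP on an H x W grid with L labels per node.
import numpy as np

def loopy_bp(unary, pairwise_weight=1.0, n_iters=20):
    """unary: (H, W, L) costs. Returns an (H, W) map of selected labels."""
    H, W, L = unary.shape
    lbl = np.arange(L)
    pairwise = pairwise_weight * np.abs(lbl[:, None] - lbl[None, :])  # (L, L)
    # messages[d][y, x] = message node (y, x) receives from its neighbour in
    # direction d: 0 = from left, 1 = from right, 2 = from above, 3 = from below.
    messages = np.zeros((4, H, W, L))
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # sender -> receiver offsets
    opposite = [1, 0, 3, 2]
    for _ in range(n_iters):
        belief = unary + messages.sum(axis=0)
        new = np.zeros_like(messages)
        for d in range(4):
            outgoing = belief - messages[opposite[d]]                 # drop target's own msg
            msg = (outgoing[..., :, None] + pairwise).min(axis=-2)    # min over sender labels
            msg -= msg.min(axis=-1, keepdims=True)                    # normalise for stability
            new[d] = np.roll(msg, shift=shifts[d], axis=(0, 1))
        # zero out messages that wrapped around the grid border
        new[0][:, 0, :] = 0.0     # x = 0 has no left neighbour
        new[1][:, -1, :] = 0.0    # x = W-1 has no right neighbour
        new[2][0, :, :] = 0.0     # y = 0 has no neighbour above
        new[3][-1, :, :] = 0.0    # y = H-1 has no neighbour below
        messages = new
    return (unary + messages.sum(axis=0)).argmin(axis=-1)
```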

ps

pdf link (url) DOI [BibTex]


Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

ps

webpage+code pdf supplementary poster [BibTex]


Gait planning based on kinematics for a quadruped gecko model with redundancy

Son, D., Jeon, D., Nam, W. C., Chang, D., Seo, T., Kim, J.

Robotics and Autonomous Systems, 58, 2010 (article)

pi

[BibTex]


Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain: acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means of understanding and integrating the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to ease their suffering, provide efficient monitoring and contribute to a better quality of life.

ps

Paper Project Page DOI [BibTex]


Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates into the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is built upon advanced facial expression analysis, used to form a dynamic pain meter, which in turn triggers biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for visualization, in an effort to give the amputee the feeling of its existence and control and thus maximize his or her PLP relief. The novel Epione-PLP management approach integrates state-of-the-art technology within the context of personalized health; it could be used to ease PLP patients' suffering, provide efficient progress monitoring and contribute to an increase in their quality of life.

ps

Paper Project Page link (url) [BibTex]


Flat dry elastomer adhesives as attachment materials for climbing robots

Unver, O., Sitti, M.

IEEE Transactions on Robotics, 26(1):131-141, IEEE, 2010 (article)

pi

[BibTex]


Adhesion recovery and passive peeling in a wall climbing robot using adhesives

Kute, C., Murphy, M. P., Mengüç, Y., Sitti, M.

In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages: 2797-2802, 2010 (inproceedings)

pi

[BibTex]


Nanohandling robot cells

Fatikow, S., Wich, T., Dahmen, C., Jasper, D., Stolle, C., Eichhorn, V., Hagemann, S., Weigel-Jech, M.

In Handbook of Nanophysics: Nanomedicine and Nanorobotics, pages: 1-31, CRC Press, 2010 (incollection)

pi

[BibTex]


Automated Home-Cage Behavioral Phenotyping of Mice

Jhuang, H., Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.

Nature Communications, 2010 (article)

ps

software, demo pdf [BibTex]


An automated action initiation system reveals behavioral deficits in MyosinVa deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

ps

pdf [BibTex]


Dense Marker-less Three Dimensional Motion Capture

Soren Hauberg, Bente Rona Jensen, Morten Engell-Norregaard, Kenny Erleben, Kim S. Pedersen

In Virtual Vistas; Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

ps

Conference site [BibTex]


An experimental analysis of elliptical adhesive contact

Sümer, B., Onal, C. D., Aksak, B., Sitti, M.

Journal of Applied Physics, 107(11):113512, AIP, 2010 (article)

pi

Project Page [BibTex]