2008


Simulation and analysis of a passive pitch reversal flapping wing mechanism for an aerial robotic platform

Arabagi, V., Sitti, M.

In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages: 1260-1265, 2008 (inproceedings)

pi

Project Page [BibTex]

Human movement generation based on convergent flow fields: A computational model and a behavioral experiment

Hoffmann, H., Schaal, S.

In Advances in Computational Motor Control VII, Symposium at the Society for Neuroscience Meeting, Washington DC, 2008, clmc (inproceedings)

am

link (url) [BibTex]

Fabrication and Characterization of Biologically Inspired Mushroom-Shaped Elastomer Microfiber Arrays

Kim, S., Sitti, M.

In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages: 839-847, 2008 (inproceedings)

pi

Project Page [BibTex]

Gecko inspired micro-fibrillar adhesives for wall climbing robots on micro/nanoscale rough surfaces

Aksak, B., Murphy, M. P., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 3058-3063, 2008 (inproceedings)

pi

Project Page [BibTex]

Miniature Mobile Robots Down to Micron Scale

Sitti, M.

In Micro-NanoMechatronics and Human Science, 2008. MHS 2008. International Symposium on, pages: 525-525, 2008 (inproceedings)

pi

[BibTex]

Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields

Park, D., Hoffmann, H., Pastor, P., Schaal, S.

In IEEE International Conference on Humanoid Robots, 2008, clmc (inproceedings)

am

PDF [BibTex]

The dual role of uncertainty in force field learning

Mistry, M., Theodorou, E., Hoffmann, H., Schaal, S.

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Abstract
Force field experiments have been a successful paradigm for studying the principles of planning, execution, and learning in human arm movements. Subjects have been shown to cope with the disturbances generated by force fields by learning internal models of the underlying dynamics to predict disturbance effects, or by increasing arm impedance (via co-contraction) if a predictive approach becomes infeasible. Several studies have addressed the issue of uncertainty in force field learning. Scheidt et al. demonstrated that subjects exposed to a viscous force field of fixed structure but varying strength (randomly changing from trial to trial) learn to adapt to the mean disturbance, regardless of the statistical distribution. Takahashi et al. additionally showed a decrease in the strength of after-effects after learning in the randomly varying environment. They thus suggest that the nervous system adopts a dual strategy: learning an internal model of the mean of the random environment while simultaneously increasing arm impedance to minimize the consequence of errors. In this study, we examine what role variance plays in the learning of uncertain force fields. We use a 7 degree-of-freedom exoskeleton robot as a manipulandum (Sarcos Master Arm, Sarcos, Inc.) and apply a 3D viscous force field of fixed structure and strength randomly selected from trial to trial. Additionally, in separate blocks of trials, we alter the variance of the randomly selected strength multiplier (while keeping a constant mean). In each block, after sufficient learning has occurred, we apply catch trials with no force field and measure the strength of after-effects. As expected, results show increasingly smaller after-effects as the variance is increased, implying that subjects choose the robust strategy of increasing arm impedance to cope with higher levels of uncertainty. Interestingly, however, subjects show an increase in after-effect strength with a small amount of variance as compared to the deterministic (zero-variance) case. This result implies that a small amount of variability aids internal model formation, presumably a consequence of the additional exploration conducted in the workspace of the task.
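
To make the logic of the paradigm concrete, here is a minimal sketch (not the authors' code) of a mean-model learner exposed to a viscous field whose gain is redrawn every trial; the gain values, learning rate, and trial count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_field_gain(mean_gain=10.0, std_gain=0.0, n_trials=200, lr=0.05):
    """Adapt an internal estimate of the gain b of a viscous field
    F = -b * v, with b redrawn on every trial (hypothetical numbers)."""
    b_hat = 0.0                              # internal model of the field gain
    for _ in range(n_trials):
        b = rng.normal(mean_gain, std_gain)  # this trial's true gain
        b_hat += lr * (b - b_hat)            # error-driven update toward the mean
    return b_hat                             # proxy for after-effect size on a catch trial

for std in (0.0, 2.0, 5.0):
    print(f"gain std {std}: learned gain ~ {learn_field_gain(std_gain=std):.2f}")
```

In this toy model the learned mean is independent of the variance; the variance-dependent reduction of after-effects reported above would enter through an impedance term that discounts the internal model as uncertainty grows.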

am

[BibTex]

Dynamic movement primitives for movement generation motivated by convergent force fields in frog

Hoffmann, H., Pastor, P., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), 2008, clmc (inproceedings)

am

PDF [BibTex]

Polymeric Micro/Nanofiber Manufacturing and Mechanical Characterization

Nain, A. S., Sitti, M., Amon, C.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 295-303, 2008 (inproceedings)

pi

[BibTex]

An untethered magnetically actuated micro-robot capable of motion on arbitrary surfaces

Floyd, S., Pawashe, C., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 419-424, 2008 (inproceedings)

pi

[BibTex]

Fabrication of bio-inspired elastomer nanofiber arrays with spatulate tips using notching effect

Kim, S., Sitti, M., Jang, J., Thomas, E. L.

In Nanotechnology, 2008. NANO’08. 8th IEEE Conference on, pages: 780-782, 2008 (inproceedings)

pi

[BibTex]

A motorized anchoring mechanism for a tethered capsule robot using fibrillar adhesives for interventions in the esophagus

Glass, P., Cheung, E., Wang, H., Appasamy, R., Sitti, M.

In Biomedical Robotics and Biomechatronics, 2008. BioRob 2008. 2nd IEEE RAS & EMBS International Conference on, pages: 758-764, 2008 (inproceedings)

pi

[BibTex]

Behavioral experiments on reinforcement learning in human motor control

Hoffmann, H., Theodorou, E., Schaal, S.

In Abstracts of the Eighteenth Annual Meeting of Neural Control of Movement (NCM), Naples, Florida, April 29-May 4, 2008, clmc (inproceedings)

Abstract
Reinforcement learning (RL) - learning solely based on reward or cost feedback - is widespread in robotics control and has also been suggested as a computational model for human motor control. In human motor control, however, hardly any experiments have studied reinforcement learning. Here, we study learning based on visual cost feedback in a reaching task through three experiments: (1) to establish a simple enough experiment for RL, (2) to study spatial localization of RL, and (3) to study the dependence of RL on the cost function. In experiment (1), subjects sit in front of a drawing tablet and look at a screen onto which the drawing pen's position is projected. Beginning from a start point, their task is to move with the pen through a target point presented on screen. Visual feedback about the pen's position is given only before movement onset. At the end of a movement, subjects get visual feedback only about the cost of this trial. We chose as cost the squared distance between the target and the virtual pen position at the target line; above a threshold value, the cost was fixed at this value. In the mapping of the pen's position onto the screen, we added a bias (unknown to the subject) and Gaussian noise. As a result, subjects could learn the bias and thus showed reinforcement learning. In experiment (2), we randomly altered the target position between three different locations (three directions from the start point: -45, 0, and 45 degrees). For each direction, we chose a different bias. As a result, subjects learned all three bias values simultaneously. Thus, RL can be spatially localized. In experiment (3), we varied the sensitivity of the cost function by multiplying the squared distance by a constant value C, while keeping the same cut-off threshold. As in experiment (2), we had three target locations. We assigned a different C value to each location (this assignment was randomized between subjects). Since subjects learned the three locations simultaneously, we could directly compare the effect of the different cost functions. As a result, we found an optimal C value: if C was too small (insensitive cost), learning was slow; if C was too large (narrow cost valley), exploration took longer and learning was delayed. Thus, reinforcement learning in human motor control appears to be sensitive to the shape of the cost function.
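
The capped, scaled cost from experiments (1) and (3) is easy to state in code; the sketch below pairs it with a crude trial-and-error learner. All constants (bias, noise, C, cap) are hypothetical stand-ins, and the acceptance rule is only an illustration of reward-driven adaptation, not a model of the subjects.

```python
import numpy as np

rng = np.random.default_rng(1)

BIAS, NOISE_SD, C, CAP = 0.8, 0.1, 1.0, 2.0     # hypothetical constants

def trial_cost(aim, target=0.0):
    """Cost feedback as described: C times the squared distance between
    target and the displayed (biased, noisy) pen position, capped at CAP."""
    shown = aim + BIAS + rng.normal(0.0, NOISE_SD)
    return min(C * (shown - target) ** 2, CAP)

aim, best = 0.0, trial_cost(0.0)
for _ in range(300):
    probe = aim + rng.normal(0.0, 0.2)           # exploratory variation of the aim
    cost = trial_cost(probe)
    if cost < best:                              # keep the new aim if cost improved
        aim, best = probe, cost
print(f"learned aim ~ {aim:.2f} (compensating the display bias of {BIAS})")
```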

am

[BibTex]

Movement generation by learning from demonstration and generalization to new targets

Pastor, P., Hoffmann, H., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), 2008, clmc (inproceedings)

am

PDF [BibTex]

Combining dynamic movement primitives and potential fields for online obstacle avoidance

Park, D., Hoffmann, H., Schaal, S.

In Adaptive Motion of Animals and Machines (AMAM), Cleveland, Ohio, 2008, clmc (inproceedings)

am

link (url) [BibTex]

Fabrication of Single and Multi-Layer Fibrous Biomaterial Scaffolds for Tissue Engineering

Nain, A. S., Miller, E., Sitti, M., Campbell, P., Amon, C.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 231-238, 2008 (inproceedings)

pi

[BibTex]

Performance of different foot designs for a water running robot

Floyd, S., Adilak, S., Ramirez, S., Rogman, R., Sitti, M.

In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages: 244-250, 2008 (inproceedings)

pi

[BibTex]

Dynamic modeling of a basilisk lizard inspired quadruped robot running on water

Park, H. S., Floyd, S., Sitti, M.

In Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, pages: 3101-3107, 2008 (inproceedings)

pi

[BibTex]

Bacterial propulsion of chemically patterned micro-cylinders

Behkam, B., Sitti, M.

In Biomedical Robotics and Biomechatronics, 2008. BioRob 2008. 2nd IEEE RAS & EMBS International Conference on, pages: 753-757, 2008 (inproceedings)

pi

[BibTex]

Computational model for movement learning under uncertain cost

Theodorou, E., Hoffmann, H., Mistry, M., Schaal, S.

In Abstracts of the Society of Neuroscience Meeting (SFN 2008), Washington, DC, 2008, clmc (inproceedings)

Abstract
Stochastic optimal control is a framework for computing control commands that lead to optimal behavior under a given cost. Despite the long history of optimal control in engineering, it has only recently been applied to describe human motion. So far, stochastic optimal control has mainly been used in tasks that are already learned, such as reaching to a target. For learning, however, there are only a few cases where optimal control has been applied. The main assumptions of stochastic optimal control that restrict its application to tasks after learning are the a priori knowledge of (1) a quadratic cost function, (2) a state-space model that captures the kinematics and/or dynamics of the musculoskeletal system, and (3) a measurement equation that models the proprioceptive and/or exteroceptive feedback. Under these assumptions, a sequence of control gains is computed that is optimal with respect to the prespecified cost function. In our work, we relax the assumption of an a priori known cost function and provide a computational framework for modeling tasks that involve learning. Typically, a cost function consists of two parts: one part that models the task constraints, like the squared distance to the goal at the movement endpoint, and one part that integrates over the squared control commands. In learning a task, the first part of this cost function is adapted. We use an expectation-maximization scheme for learning: the expectation step optimizes the task constraints through gradient descent of a reward function, and the maximization step optimizes the control commands. Our computational model is tested and compared with data from a behavioral experiment. In this experiment, subjects sit in front of a drawing tablet and look at a screen onto which the drawing pen's position is projected. Beginning from a start point, their task is to move with the pen through a target point presented on screen. Visual feedback about the pen's position is given only before movement onset. At the end of a movement, subjects get visual feedback only about the cost of this trial. In the mapping of the pen's position onto the screen, we added a bias (unknown to the subject) and Gaussian noise. The cost is therefore a function of this bias. The subjects were asked to reach to the target and minimize this cost over trials. In this behavioral experiment, subjects could learn the bias and thus showed reinforcement learning. With our computational model, we could model the learning process over trials. In particular, the dependence on parameters of the reward function (Gaussian width) and the modulation of movement variance over time were similar in experiment and model.
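
A compressed illustration of reward-driven adaptation in this setup, substituting a generic likelihood-ratio (REINFORCE-style) update for the paper's expectation-maximization scheme, might look as follows; the bias, reward width, and step sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

TRUE_BIAS, SIGMA = 0.8, 0.5          # hypothetical bias and Gaussian reward width

def reward(u):
    """Gaussian reward around the target; the displayed endpoint carries
    the unknown bias, so reward peaks at u = -TRUE_BIAS."""
    endpoint = u + TRUE_BIAS + rng.normal(0.0, 0.05)
    return np.exp(-endpoint ** 2 / (2 * SIGMA ** 2))

mean_u, std_u = 0.0, 0.4             # stochastic policy over commands
for _ in range(1000):
    u = rng.normal(mean_u, std_u)                              # explore around the mean
    mean_u += 0.05 * reward(u) * (u - mean_u) / std_u ** 2     # likelihood-ratio step
    std_u = max(0.05, std_u * 0.999)                           # anneal movement variance
print(f"learned command ~ {mean_u:.2f} (optimum {-TRUE_BIAS})")
```

The annealed exploration term loosely mirrors the modulation of movement variance over trials mentioned above.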

am

[BibTex]

A Bayesian approach to empirical local linearizations for robotics

Ting, J., D’Souza, A., Vijayakumar, S., Schaal, S.

In International Conference on Robotics and Automation (ICRA2008), Pasadena, CA, USA, May 19-23, 2008, clmc (inproceedings)

Abstract
Local linearizations are ubiquitous in the control of robotic systems. Analytical methods, if available, can be used to obtain the linearization, but in complex robotic systems, where the dynamics and kinematics are often not faithfully obtainable, empirical linearization may be preferable. In this case, it is important to only use data for the local linearization that lies within a "reasonable" linear regime of the system, which can be defined from the Hessian at the point of the linearization - a quantity that is not available without an analytical model. We introduce a Bayesian approach to determine statistically what constitutes a "reasonable" local regime. We approach this problem in the context of local linear regression. In contrast to previous locally linear methods, we avoid cross-validation or complex statistical hypothesis testing techniques to find the appropriate local regime. Instead, we treat the parameters of the local regime probabilistically and use approximate Bayesian inference for their estimation. This approach results in an analytical set of iterative update equations that are easily implemented on real robotic systems for real-time applications. As in other locally weighted regressions, our algorithm also lends itself to complete nonlinear function approximation for learning empirical internal models. We sketch the derivation of our Bayesian method and provide evaluations on synthetic data and actual robot data where the analytical linearization was known.
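
For readers unfamiliar with the setting, the sketch below shows plain locally weighted linear regression with a fixed Gaussian kernel; the paper's contribution, inferring the extent of the "reasonable" local regime with approximate Bayesian inference, is not reproduced here, and the bandwidth and data are hypothetical.

```python
import numpy as np

def lwr_fit(x_query, X, y, h=0.5):
    """Locally weighted linear regression around x_query: a weighted
    least-squares line fit, with weights from a Gaussian kernel of fixed
    bandwidth h (the quantity the paper instead estimates probabilistically)."""
    w = np.exp(-0.5 * ((X - x_query) / h) ** 2)       # locality weights
    A = np.column_stack([X - x_query, np.ones_like(X)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[1], beta[0]                           # prediction at x_query, local slope

rng = np.random.default_rng(3)
X = np.linspace(-2, 2, 200)
y = np.sin(X) + 0.05 * rng.normal(size=X.size)        # toy nonlinearity
pred, slope = lwr_fit(0.0, X, y)
print(f"prediction at 0 ~ {pred:.2f}, local slope ~ {slope:.2f} (true: 0.00, 1.00)")
```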

am

link (url) [BibTex]

Do humans plan continuous trajectories in kinematic coordinates?

Hoffmann, H., Schaal, S.

In Abstracts of the Society of Neuroscience Meeting (SFN 2008), Washington, DC, 2008, clmc (inproceedings)

Abstract
The planning and execution of human arm movements are still not fully understood. An ongoing controversy is whether we plan a movement in kinematic coordinates and convert these coordinates with an inverse internal model into motor commands (like muscle activation), or whether we combine a few muscle synergies or equilibrium points to move a hand, e.g., between two targets. The first hypothesis implies that a planner produces a desired end-effector position for all time points; the second relies on the dynamics of the muscular-skeletal system for a given control command to produce a continuous end-effector trajectory. To distinguish between these two possibilities, we use a visuomotor adaptation experiment. Subjects moved a pen on a graphics tablet and observed the pen's mapped position onto a screen (subjects quickly adapted to this mapping). The task was to move a cursor between two points in a given time window. In the adaptation test, we manipulated the velocity profile of the cursor feedback such that the shape of the trajectories remained unchanged (for straight paths). If humans used a kinematic plan and mapped the desired end-effector position onto control commands at each point in time, subjects should adapt to the above manipulation. In a similar experiment, Wolpert et al (1995) showed adaptation to changes in the curvature of trajectories. This result, however, cannot rule out a shift of an equilibrium point or an additional synergy activation between the start and end point of a movement. In our experiment, subjects did two sessions: one control session without and one with velocity-profile manipulation. To skew the velocity profile of the cursor trajectory, we added to the current velocity, v, the function 0.8*v*cos(pi + pi*x), where x is the projection of the cursor position onto the start-goal line divided by the distance from start to goal (x=0 at the start point). As a result, subjects did not adapt to this manipulation: for all subjects, the true hand motion was not significantly modified in a direction consistent with adaptation, even though the visually presented motion differed significantly from the control motion. One may still argue that this difference in motion was insufficient to be processed visually. Thus, as a control experiment, we replayed control and modified motions to the subjects and asked which of the two motions appeared 'more natural'. Subjects chose the unperturbed motion as more natural significantly above chance. In summary, for a visuomotor transformation task, the hypothesis of a planned continuous end-effector trajectory predicts adaptation to a modified velocity profile. The current experiment found no adaptation under such a transformation.
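
The manipulation itself is a one-liner; the sketch below applies the stated function to a hypothetical bell-shaped speed profile to show how the displayed profile is skewed while the path is untouched.

```python
import numpy as np

def skewed_velocity(v, x):
    """Displayed cursor speed per the abstract: v + 0.8*v*cos(pi + pi*x),
    with x in [0, 1] the normalized position along the start-goal line."""
    return v + 0.8 * v * np.cos(np.pi + np.pi * x)

x = np.linspace(0.0, 1.0, 5)
v = np.sin(np.pi * x)                # hypothetical bell-shaped hand-speed profile
print(np.round(skewed_velocity(v, x), 3))
# The multiplier runs from 0.2 at the start (x=0) to 1.8 at the goal (x=1),
# so the cursor is slowed early and sped up late, with the path unchanged.
```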

am

[BibTex]

Design and Numerical Modeling of an On-Board Chemical Release Module for Motion Control of Bacteria-Propelled Swimming Micro-Robots

Behkam, B., Nain, A. S., Amon, C. H., Sitti, M.

In ASME 2008 International Mechanical Engineering Congress and Exposition, pages: 239-244, 2008 (inproceedings)

pi

[BibTex]

Investigation of Calcium Mechanotransduction by Quasi 3-D Microfiber Mechanical Stimulation of Cells

Ruder, W. C., Pratt, E. D., Sitti, M., LeDuc, P. R., Antaki, J. F.

In ASME 2008 Summer Bioengineering Conference, pages: 1049-1050, 2008 (inproceedings)

pi

[BibTex]

Beanbag robotics: Robotic swarms with 1-dof units

Kriesel, D. M., Cheung, E., Sitti, M., Lipson, H.

In International Conference on Ant Colony Optimization and Swarm Intelligence, pages: 267-274, 2008 (inproceedings)

pi

[BibTex]

Particle image velocimetry and thrust of flagellar micro propulsion systems

Danis, U., Sitti, M., Pekkan, K.

In APS Division of Fluid Dynamics Meeting Abstracts, 1, 2008 (inproceedings)

pi

[BibTex]

2004


E. coli inspired propulsion for swimming microrobots

Behkam, B., Sitti, M.

In ASME 2004 International Mechanical Engineering Congress and Exposition, pages: 1037-1041, 2004 (inproceedings)

pi

Project Page [BibTex]

Dynamic modes of nanoparticle motion during nanoprobe-based manipulation

Tafazzoli, A., Sitti, M.

In Nanotechnology, 2004. 4th IEEE Conference on, pages: 35-37, 2004 (inproceedings)

pi

[BibTex]

Modeling and design of biomimetic adhesives inspired by gecko foot-hairs

Shah, G. J., Sitti, M.

In Robotics and Biomimetics, 2004. ROBIO 2004. IEEE International Conference on, pages: 873-878, 2004 (inproceedings)

pi

Project Page [BibTex]

Learning Composite Adaptive Control for a Class of Nonlinear Systems

Nakanishi, J., Farrell, J. A., Schaal, S.

In IEEE International Conference on Robotics and Automation, pages: 2647-2652, New Orleans, LA, USA, April 2004, clmc (inproceedings)

am

link (url) [BibTex]

Augmented reality user interface for nanomanipulation using atomic force microscopes

Vogl, W., Sitti, M., Ehrenstrasser, M., Zäh, M.

In Proc. of Eurohaptics, pages: 413-416, 2004 (inproceedings)

pi

[BibTex]

WaalBots for Space applications

Menon, C., Murphy, M., Angrilli, F., Sitti, M.

In 55th IAC Conference, Vancouver, Canada, 2004 (inproceedings)

pi

[BibTex]

A framework for learning biped locomotion with dynamic movement primitives

Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S., Kawato, M.

In IEEE-RAS/RSJ International Conference on Humanoid Robots (Humanoids 2004), Santa Monica, CA, Nov. 10-12, 2004, clmc (inproceedings)

Abstract
This article summarizes our framework for learning biped locomotion using dynamical movement primitives based on nonlinear oscillators. Our ultimate goal is to establish a design principle of a controller in order to achieve natural human-like locomotion. We suggest dynamical movement primitives as a central pattern generator (CPG) of a biped robot, an approach we have previously proposed for learning and encoding complex human movements. Demonstrated trajectories are learned through movement primitives by locally weighted regression, and the frequency of the learned trajectories is adjusted automatically by a frequency adaptation algorithm based on phase resetting and entrainment of coupled oscillators. Numerical simulations and experimental implementation on a physical robot demonstrate the effectiveness of the proposed locomotion controller. Furthermore, we demonstrate that phase resetting contributes to robustness against external perturbations and environmental changes by numerical simulations and experiments.
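
As a minimal sketch of the phase-resetting idea (not the paper's CPG, which couples learned movement primitives with a frequency adaptation algorithm), a single phase oscillator can be reset at hypothetical foot-contact events:

```python
import numpy as np

def cpg_output(omega=2 * np.pi, dt=0.002, t_end=4.0, contacts=(1.3, 2.6)):
    """Phase oscillator advancing at frequency omega, with its phase reset
    to zero at (hypothetical) foot-contact times, the mechanism the abstract
    credits for robustness against perturbations."""
    t = np.arange(0.0, t_end, dt)
    phi = np.zeros_like(t)
    pending = list(contacts)
    for i in range(1, t.size):
        phi[i] = phi[i - 1] + omega * dt        # nominal phase advance
        if pending and t[i] >= pending[0]:
            phi[i] = 0.0                        # phase reset at contact
            pending.pop(0)
    return t, np.sin(phi)                       # rhythmic output signal

t, out = cpg_output()
print(f"oscillator output right after the second reset: {out[t >= 2.6][0]:.2f}")
```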

am

link (url) [BibTex]

Learning Motor Primitives with Reinforcement Learning

Peters, J., Schaal, S.

In Proceedings of the 11th Joint Symposium on Neural Computation, http://resolver.caltech.edu/CaltechJSNC:2004.poster020, 2004, clmc (inproceedings)

Abstract
One of the major challenges in action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation," or, more precisely, motor primitives. Recently, Ijspeert et al. [1, 2] suggested a novel framework for using nonlinear dynamical systems as motor primitives. While a lot of progress has been made in teaching these motor primitives using supervised or imitation learning, self-improvement through interaction of the system with the environment remains a challenging problem. In this poster, we evaluate how different reinforcement learning approaches can be used to improve the performance of motor primitives. In pursuing this goal, we highlight the difficulties with current reinforcement learning methods and outline how these lead to a novel algorithm based on natural policy gradients [3]. We compare this algorithm to previous reinforcement learning algorithms in the context of dynamic motor primitive learning and show that it outperforms them by at least an order of magnitude. We demonstrate the efficiency of the resulting reinforcement learning method for creating complex behaviors for autonomous robotics. The studied behaviors include both discrete, finite tasks, such as baseball swings, and complex rhythmic patterns, as they occur in biped locomotion.
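
As a generic point of comparison (not the natural-gradient algorithm the abstract develops), a finite-difference policy gradient on a single hypothetical primitive parameter looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)

def rollout_return(theta):
    """Hypothetical return of a one-parameter primitive, peaking at theta = 1,
    with a little evaluation noise standing in for rollout variability."""
    return -(theta - 1.0) ** 2 + rng.normal(0.0, 0.01)

theta, eps, lr = 0.0, 0.1, 0.05
for _ in range(200):
    # central finite-difference estimate of the return gradient
    grad = (rollout_return(theta + eps) - rollout_return(theta - eps)) / (2 * eps)
    theta += lr * grad                      # vanilla gradient ascent on the return
print(f"theta ~ {theta:.2f} (optimum 1.0)")
```

Estimators of this kind are the baselines that natural-gradient methods are typically compared against.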

am

[BibTex]

Dynamic behavior and simulation of nanoparticle sliding during nanoprobe-based positioning

Tafazzoli, A., Sitti, M.

In Proc. ASME International Mechanical Engineering Conference, 19, pages: 32, 2004 (inproceedings)

pi

[BibTex]

Three-dimensional nanoscale manipulation and manufacturing using proximal probes: controlled pulling of polymer micro/nanofibers

Nain, A. S., Amon, C., Sitti, M.

In Mechatronics, 2004. ICM’04. Proceedings of the IEEE International Conference on, pages: 224-230, 2004 (inproceedings)

pi

[BibTex]

Micro- and nano-scale robotics

Sitti, M.

In American Control Conference, 2004. Proceedings of the 2004, 1, pages: 1-8, 2004 (inproceedings)

pi

[BibTex]

Gecko inspired surface climbing robots

Menon, C., Murphy, M., Sitti, M.

In Robotics and Biomimetics, 2004. ROBIO 2004. IEEE International Conference on, pages: 431-436, 2004 (inproceedings)

pi

Project Page [BibTex]

1999


Tele-touch feedback of surfaces at the micro/nano scale: Modeling and experiments

Sitti, M., Horiguchi, S., Hashimoto, H.

In Intelligent Robots and Systems, 1999. IROS’99. Proceedings. 1999 IEEE/RSJ International Conference on, 2, pages: 882-888, 1999 (inproceedings)

pi

[BibTex]

Challenge to micro/nanomanipulation using atomic force microscope

Hashimoto, H., Sitti, M.

In Micromechatronics and Human Science, 1999. MHS’99. Proceedings of 1999 International Symposium on, pages: 35-42, 1999 (inproceedings)

pi

[BibTex]

Visualization interface for AFM-based nano-manipulation

Horiguchi, S., Sitti, M., Hashimoto, H.

In Industrial Electronics, 1999. ISIE’99. Proceedings of the IEEE International Symposium on, 1, pages: 310-315, 1999 (inproceedings)

pi

[BibTex]

Tele-nanorobotics: 2-D manipulation of micro/nanoparticles using AFM

Sitti, M., Horiguchi, S., Hashimoto, H.

In Advanced Intelligent Mechatronics, 1999. Proceedings. 1999 IEEE/ASME International Conference on, pages: 786-786, 1999 (inproceedings)

pi

[BibTex]

Two-dimensional fine particle positioning using a piezoresistive cantilever as a micro/nano-manipulator

Sitti, M., Hashimoto, H.

In Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on, 4, pages: 2729-2735, 1999 (inproceedings)

pi

[BibTex]
