

2007


Towards Machine Learning of Motor Skills

Peters, J., Schaal, S., Schölkopf, B.

In Proceedings of Autonome Mobile Systeme (AMS), pages: 138-144, (Editors: K Berns and T Luksch), 2007, clmc (inproceedings)

Abstract
Autonomous robots that can adapt to novel situations have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. Early approaches to this goal during the heyday of artificial intelligence research in the late 1980s, however, made it clear that an approach purely based on reasoning or human insights would not be able to model all the perceptuomotor tasks that a robot should fulfill. Instead, new hope was put in the growing wake of machine learning, which promised fully adaptive control algorithms that learn both by observation and by trial-and-error. However, to date, learning techniques have yet to fulfill this promise, as only a few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients of a general approach to motor skill learning in order to get one step closer to human-like performance. To do so, we study two major components of such an approach: first, a theoretically well-founded general approach to representing the required control structures for task representation and execution, and second, appropriate learning algorithms that can be applied in this setting.

PDF DOI [BibTex]

Reinforcement Learning for Optimal Control of Arm Movements

Theodorou, E., Peters, J., Schaal, S.

In Abstracts of the 37th Meeting of the Society for Neuroscience, 2007, clmc (inproceedings)

Abstract
Everyday motor behavior consists of a plethora of challenging motor skills, from discrete movements such as reaching and throwing to rhythmic movements such as walking, drumming, and running. How this plethora of motor skills can be learned remains an open question. In particular, is there any unifying computational framework that could model the learning process of this variety of motor behaviors and at the same time be biologically plausible? In this work we aim to answer these questions by providing a computational framework that unifies the learning mechanisms of both rhythmic and discrete movements under optimization criteria, i.e., in a non-supervised trial-and-error fashion. Our suggested framework is based on reinforcement learning, which is mostly considered too costly to be a plausible mechanism for learning complex limb movements. However, recent work on reinforcement learning with policy gradients combined with parameterized movement primitives allows novel and more efficient algorithms. By using the representational power of such motor primitives, we show how rhythmic motor behaviors such as walking, squashing, and drumming, as well as discrete behaviors like reaching and grasping, can be learned with biologically plausible algorithms. Using extensive simulations and different reward functions, we provide results that support the hypothesis that reinforcement learning could be a viable candidate for motor learning of human motor behavior when other learning methods like supervised learning are not feasible.

[BibTex]

Reinforcement learning by reward-weighted regression for operational space control

Peters, J., Schaal, S.

In Proceedings of the 24th Annual International Conference on Machine Learning, pages: 745-750, ICML, 2007, clmc (inproceedings)

Abstract
Many robot control problems of practical importance, including operational space control, can be reformulated as immediate reward reinforcement learning problems. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications to complex high-degree-of-freedom robots.
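The reduction described above can be sketched in a few lines: one EM-style update refits the policy parameters by least squares, weighted with exponentially transformed rewards. The linear policy, the transformation constant `beta`, and all names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reward_weighted_regression(states, actions, rewards, beta=1.0):
    """One EM-style update: refit a linear policy a = theta^T s by least
    squares, weighted with transformed rewards u = exp(beta * r)."""
    u = np.exp(beta * (rewards - rewards.max()))   # shifted for numerical stability
    W = np.diag(u)
    # weighted normal equations: (S^T W S) theta = S^T W A
    return np.linalg.solve(states.T @ W @ states, states.T @ W @ actions)

# toy usage: noisy samples around a = 2*s earn higher reward when closer
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 1))
A = 2.0 * S + rng.normal(scale=0.5, size=(200, 1))
R = -np.squeeze((A - 2.0 * S) ** 2)
theta = reward_weighted_regression(S, A, R, beta=2.0)
print(theta)  # close to [[2.0]]
```

High-reward samples dominate the fit, so the update moves the policy toward actions that already earned reward, without an explicit gradient step.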

link (url) DOI [BibTex]

Policy gradient methods for machine learning

Peters, J., Theodorou, E., Schaal, S.

In Proceedings of the 14th INFORMS Conference of the Applied Probability Society, pages: 97-98, Eindhoven, Netherlands, July 9-11, 2007, clmc (inproceedings)

Abstract
We present an in-depth survey of policy gradient methods as they are used in the machine learning community for optimizing parameterized, stochastic control policies in Markovian systems with respect to the expected reward. Despite having been developed separately in the reinforcement learning literature, policy gradient methods employ likelihood ratio gradient estimators as also suggested in the stochastic simulation optimization community. It is well known that this approach to policy gradient estimation traditionally suffers from three drawbacks: large variance, a strong dependence on baseline functions, and an inefficient gradient descent. In this talk, we will present a series of recent results which tackle each of these problems. The variance of the gradient estimation can be reduced significantly through recently introduced techniques such as optimal baselines, compatible function approximations, and all-action gradients. However, as even the analytically obtainable policy gradients perform unnaturally slowly, the step from 'vanilla' policy gradient methods towards natural policy gradients was required in order to overcome the inefficiency of the gradient descent. This development resulted in the Natural Actor-Critic architecture, which can be shown to be very efficient in application to motor primitive learning for robotics.
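As a rough illustration of the likelihood-ratio estimator and the optimal-baseline variance reduction mentioned above, here is a toy one-parameter Gaussian policy trained on an immediate-reward problem; the problem, step size, and sample sizes are arbitrary assumptions for the sketch.

```python
import numpy as np

# 'Vanilla' likelihood-ratio policy gradient on a toy problem:
# policy a ~ N(theta, sigma^2), reward r(a) = -(a - 3)^2 (optimum a = 3).
rng = np.random.default_rng(1)
theta, sigma, lr = 0.0, 1.0, 0.05

for _ in range(300):
    a = rng.normal(theta, sigma, size=64)      # sampled actions
    r = -(a - 3.0) ** 2                        # immediate rewards
    glp = (a - theta) / sigma**2               # grad of log-likelihood
    b = np.sum(glp**2 * r) / np.sum(glp**2)    # variance-optimal scalar baseline
    theta += lr * np.mean(glp * (r - b))       # gradient ascent step
print(theta)  # climbs toward the optimum at 3.0
```

Subtracting the baseline leaves the estimator unbiased while shrinking its variance, which is exactly the first of the three drawbacks the survey addresses.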

[BibTex]

Policy Learning for Motor Skills

Peters, J., Schaal, S.

In Proceedings of 14th International Conference on Neural Information Processing (ICONIP), pages: 233-242, (Editors: Ishikawa, M., K. Doya, H. Miyamoto, T. Yamakawa), 2007, clmc (inproceedings)

Abstract
Policy learning which allows autonomous robots to adapt to novel situations has been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. However, to date, learning techniques have yet to fulfill this promise, as only a few methods manage to scale into the high-dimensional domains of manipulator robotics, or even the upcoming trend of humanoid robotics, and usually scaling was only achieved in precisely pre-structured domains. In this paper, we investigate the ingredients of a general approach to policy learning, with the goal of an application to motor skill refinement, in order to get one step closer to human-like performance. To do so, we study two major components of such an approach: first, policy learning algorithms which can be applied in the general setting of motor skill learning, and second, a theoretically well-founded general approach to representing the required control structures for task representation and execution.

PDF DOI [BibTex]

Reinforcement learning for operational space control

Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pages: 2111-2116, IEEE Computer Society, ICRA, 2007, clmc (inproceedings)

Abstract
While operational space control is of essential importance for robotics and well-understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots, e.g., humanoid robots. In such cases, learning control methods can offer an interesting alternative to analytical control algorithms. However, the resulting supervised learning problem is ill-defined as it requires learning an inverse mapping of a usually redundant system, which is well known to suffer from the property of non-convexity of the solution space, i.e., the learning system could generate motor commands that try to steer the robot into physically impossible configurations. The important insight that many operational space control algorithms can be reformulated as optimal control problems, however, allows addressing this inverse learning problem in the framework of reinforcement learning. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications to complex high-degree-of-freedom robots.

link (url) DOI [BibTex]

Relative Entropy Policy Search

Peters, J.

CLMC Technical Report: TR-CLMC-2007-2, Computational Learning and Motor Control Lab, Los Angeles, CA, 2007, clmc (techreport)

Abstract
This technical report describes a cute idea of how to create new policy search approaches. It directly relates to the Natural Actor-Critic methods but allows the derivation of one shot solutions. Future work may include the application to interesting problems.

PDF link (url) [BibTex]

Using reward-weighted regression for reinforcement learning of task space control

Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages: 262-267, Honolulu, Hawaii, April 1-5, 2007, clmc (inproceedings)

Abstract
In this paper, we evaluate different versions from the three main kinds of model-free policy gradient methods, i.e., finite difference gradients, `vanilla' policy gradients and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated, reused and new algorithms can be inserted with ease.

link (url) DOI [BibTex]

Evaluation of Policy Gradient Methods and Variants on the Cart-Pole Benchmark

Riedmiller, M., Peters, J., Schaal, S.

In Proceedings of the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages: 254-261, ADPRL, 2007, clmc (inproceedings)

Abstract
In this paper, we evaluate different versions from the three main kinds of model-free policy gradient methods, i.e., finite difference gradients, `vanilla' policy gradients and natural policy gradients. Each of these methods is first presented in its simple form and subsequently refined and optimized. By carrying out numerous experiments on the cart pole regulator benchmark we aim to provide a useful baseline for future research on parameterized policy search algorithms. Portable C++ code is provided for both plant and algorithms; thus, the results in this paper can be reevaluated, reused and new algorithms can be inserted with ease.
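The finite-difference flavor evaluated in the paper can be sketched as follows: perturb the policy parameters, measure return differences, and recover the gradient by regression. The quadratic stand-in objective below replaces the paper's cart-pole plant, and all names and constants are assumptions for illustration.

```python
import numpy as np

def finite_difference_gradient(J, theta, eps=1e-2, n_perturb=20):
    """Finite-difference policy gradient: regress return differences
    dJ on random parameter perturbations dTheta, so dJ ~ dTheta @ g."""
    rng = np.random.default_rng(0)
    dTheta = rng.normal(scale=eps, size=(n_perturb, theta.size))
    dJ = np.array([J(theta + d) - J(theta) for d in dTheta])
    g, *_ = np.linalg.lstsq(dTheta, dJ, rcond=None)
    return g

# toy "return" with known optimum at theta = (1, -2)
J = lambda th: -((th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2)
theta = np.zeros(2)
for _ in range(100):
    theta += 0.1 * finite_difference_gradient(J, theta)
print(theta)  # approaches [1, -2]
```

This estimator only needs return evaluations, which is why it serves as the simplest baseline among the three method families compared in the paper.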

PDF [BibTex]

Space exploration-towards bio-inspired climbing robots

Menon, C., Murphy, M., Sitti, M., Lan, N.

INTECH Open Access Publisher, 2007 (misc)

[BibTex]

Bacterial flagella-based propulsion and on/off motion control of microscale objects

Behkam, B., Sitti, M.

Applied Physics Letters, 90(2):023902, AIP, 2007 (article)

[BibTex]

A strategy for vision-based controlled pushing of microparticles

Lynch, N. A., Onal, C., Schuster, E., Sitti, M.

In Robotics and Automation, 2007 IEEE International Conference on, pages: 1413-1418, 2007 (inproceedings)

[BibTex]

Friction of partially embedded vertically aligned carbon nanofibers inside elastomers

Aksak, B., Sitti, M., Cassell, A., Li, J., Meyyappan, M., Callen, P.

Applied Physics Letters, 91(6):061906, AIP, 2007 (article)

[BibTex]

Enhanced friction of elastomer microfiber adhesives with spatulate tips

Kim, S., Aksak, B., Sitti, M.

Applied Physics Letters, 91(22):221913, AIP, 2007 (article)

Project Page [BibTex]

Uncertain 3D Force Fields in Reaching Movements: Do Humans Favor Robust or Average Performance?

Mistry, M., Theodorou, E., Hoffmann, H., Schaal, S.

In Abstracts of the 37th Meeting of the Society of Neuroscience, 2007, clmc (inproceedings)

PDF [BibTex]

Applying the episodic natural actor-critic architecture to motor primitive learning

Peters, J., Schaal, S.

In Proceedings of the 2007 European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 25-27, 2007, clmc (inproceedings)

Abstract
In this paper, we investigate motor primitive learning with the Natural Actor-Critic approach. The Natural Actor-Critic consists of actor updates which are achieved using natural stochastic policy gradients, while the critic obtains the natural policy gradient by linear regression. We show that this architecture can be used to learn the "building blocks of movement generation", called motor primitives. Motor primitives are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. We show that our most recent algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
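A toy contrast may make the efficiency claim plausible: for a Gaussian policy, the Fisher information with respect to the mean is 1/sigma^2, so the natural gradient rescales the vanilla gradient by sigma^2 and the step no longer depends on how spread-out the policy is. The bandit-style problem and all constants are assumptions for illustration, not the paper's motor-primitive setup.

```python
import numpy as np

# Natural policy gradient on a Gaussian policy a ~ N(theta, sigma^2):
# the Fisher information w.r.t. theta is F = 1 / sigma^2, so the natural
# gradient is F^{-1} g = sigma^2 * g for the vanilla estimate g.
rng = np.random.default_rng(2)
theta, sigma, lr = 0.0, 2.0, 0.1
for _ in range(200):
    a = rng.normal(theta, sigma, size=128)                # sampled actions
    r = -(a - 1.0) ** 2                                   # reward peaks at a = 1
    g = np.mean((a - theta) / sigma**2 * (r - r.mean()))  # vanilla gradient
    theta += lr * sigma**2 * g                            # natural gradient step
print(theta)  # approaches the optimum at 1.0
```

With the vanilla step alone, a wide policy (large sigma) would take tiny parameter steps; the Fisher rescaling removes that dependence on the parameterization.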

link (url) [BibTex]

The new robotics - towards human-centered machines

Schaal, S.

HFSP Journal Frontiers of Interdisciplinary Research in the Life Sciences, 1(2):115-126, 2007, clmc (article)

Abstract
Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

link (url) [BibTex]

A computational model of human trajectory planning based on convergent flow fields

Hoffmann, H., Schaal, S.

In Abstracts of the 37th Meeting of the Society for Neuroscience, San Diego, CA, Nov. 3-7, 2007, clmc (inproceedings)

Abstract
A popular computational model suggests that smooth reaching movements are generated in humans by minimizing a difference vector between hand and target in visual coordinates (Shadmehr and Wise, 2005). To achieve such a task, the optimal joint accelerations may be pre-computed. However, this pre-planning is inflexible towards perturbations of the limb, and there is strong evidence that reaching movements can be modified on-line at any moment during the movement. Thus, next-state planning models (Bullock and Grossberg, 1988) have been suggested that compute the current control command from a function of the goal state such that the overall movement smoothly converges to the goal (see Shadmehr and Wise (2005) for an overview). So far, these models have been restricted to simple point-to-point reaching movements with (approximately) straight trajectories. Here, we present a computational model for learning and executing arbitrary trajectories that combines ideas from pattern generation with dynamic systems and the observation of convergent force fields, which control a frog leg after spinal stimulation (Giszter et al., 1993). In our model, we incorporate the following two observations: first, the orientation of vectors in a force field is invariant over time, but their amplitude is modulated by a time-varying function, and second, two force fields add up when stimulated simultaneously (Giszter et al., 1993). This addition of convergent force fields varying over time results in a virtual trajectory (a moving equilibrium point) that correlates with the actual leg movement (Giszter et al., 1993). Our next-state planner is a set of differential equations that provide the desired end-effector or joint accelerations using feedback of the current state of the limb. These accelerations can be interpreted as resulting from a damped spring that links the current limb position with a virtual trajectory. 
This virtual trajectory can be learned to realize any desired limb trajectory and velocity profile, and learning is efficient since the time-modulated sum of convergent force fields equals a sum of weighted basis functions (Gaussian time pulses). Thus, linear algebra is sufficient to compute these weights, which correspond to points on the virtual trajectory. During movement execution, the differential equation automatically corrects for perturbations and smoothly brings the limb back toward the goal. Virtual trajectories can be rescaled and added, allowing a set of movement primitives to be built that describes movements more complex than those previously learned. We demonstrate the potential of the suggested model by learning and generating a wide variety of movements.
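The "linear algebra is sufficient" step reduces to ordinary least squares: stack the Gaussian time pulses into a design matrix and solve for the weights. The basis count, pulse width, and demonstration trajectory below are assumed for illustration.

```python
import numpy as np

def fit_virtual_trajectory(t, y, n_basis=15, width=0.02):
    """Fit weights w so a sum of Gaussian time pulses reproduces the
    desired trajectory y(t): y ~ Phi @ w, solved by least squares."""
    centers = np.linspace(t.min(), t.max(), n_basis)
    Phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w, Phi

t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) * t           # arbitrary desired trajectory
centers, w, Phi = fit_virtual_trajectory(t, y)
print(np.max(np.abs(Phi @ w - y)))      # small reconstruction error
```

Each weight corresponds to a point on the virtual trajectory, so rescaling or adding weight vectors rescales or superposes the resulting movements.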

[BibTex]

On the theory of magnetization dynamics of non-collinear spin systems in the s-d model

De Angeli, L.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Zur ab-initio Elektronentheorie des Magnetismus bei endlichen Temperaturen

Dietermann, F.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Röntgenzirkulardichroische Untersuchungen an ferromagnetischen verdünnten Halbleitersystemen

Tietze, T.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Low-dimensional Fe on vicinal Ir(997): Growth and magnetic properties

Kawwam, M.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Micromagnetic simulations of switching processes and the role of thermal fluctuations

Macke, S.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Hydrogen storage in metal-organic frameworks

Hirscher, M., Panella, B.

Scripta Materialia, 56, pages: 809-812, 2007 (article)

[BibTex]

Substrate-induced current anisotropy in YBa2Cu3O7-δ thin films

Djupmyr, M., Albrecht, J.

Physica C, 460-462, pages: 1190-1191, 2007 (article)

DOI [BibTex]

A micellar approach to magnetic ultrahigh-density data-storage media: extending the limits of current colloidal methods

Ethirajan, A., Wiedwald, U., Boyen, H.-G., Kern, B., Han, L., Klimmer, A., Weigl, F., Kästle, G., Ziemann, P., Fauth, K., Cai, J., Behm, J., Romanyuk, A., Oelhafen, P., Walther, P., Biskupek, J., Kaiser, U.

Advanced Materials, 19, pages: 406-410, 2007 (article)

[BibTex]

Size dependence in the magnetization reversal of Fe/Gd multilayers on self-assembled arrays of nanospheres

Amaladass, E., Ludescher, B., Schütz, G., Tyliszczak, T., Eimüller, T.

Applied Physics Letters, 91, 2007 (article)

DOI [BibTex]

Maßgeschneiderte Wasserstoffspeicher

Hirscher, M., Panella, B.

Nachrichten aus der GDCh-Energieinitiative, (Sonderheft April 2007):12-13, 2007 (article)

[BibTex]

Reconstruction of historical alloys for pipe organs brings true Baroque music back to life.

Baretzky, B., Friesel, M., Straumal, B.

MRS Bulletin, 32, pages: 249-255, 2007 (article)

[BibTex]

Analysis of results from X-ray magnetic reflectometry for magnetic multilayer systems

Fähnle, M., Steiauf, D., Martosiswoyo, L., Goering, E., Brück, S., Schütz, G.

Physical Review B, 75, 2007 (article)

[BibTex]

Dramatic role of critical current anisotropy on flux avalanches in MgB2 films

Albrecht, J., Matveev, A. T., Strempfer, J., Habermeier, H.-U., Shantsev, D. V., Galperin, Y. M., Johansen, T. H.

Physical Review Letters, 98, 2007 (article)

[BibTex]

Transport properties of LCMO/YBCO hybrid structures

Soltan, S., Albrecht, J., Habermeier, H.-U.

Materials Science and Engineering B, 144, pages: 15-18, 2007 (article)

DOI [BibTex]

Microscale and nanoscale robotics systems [grand challenges of robotics]

Sitti, M.

IEEE Robotics & Automation Magazine, 14(1):53-60, IEEE, 2007 (article)

[BibTex]

A new biomimetic adhesive for therapeutic capsule endoscope applications in the gastrointestinal tract

Glass, P., Sitti, M., Appasamy, R.

Gastrointestinal Endoscopy, 65(5):AB91, Mosby, 2007 (article)

[BibTex]

Visual servoing-based autonomous 2-D manipulation of microparticles using a nanoprobe

Onal, C. D., Sitti, M.

IEEE Transactions on Control Systems Technology, 15(5):842-852, IEEE, 2007 (article)

[BibTex]

A Computational Model of Arm Trajectory Modification Using Dynamic Movement Primitives

Mohajerian, P., Hoffmann, H., Mistry, M., Schaal, S.

In Abstracts of the 37th Meeting of the Society for Neuroscience, San Diego, CA, Nov 3-7, 2007, clmc (inproceedings)

Abstract
Several scientists used a double-step target-displacement protocol to investigate how an unexpected upcoming new target modifies ongoing discrete movements. Interesting observations are the initial direction of the movement, the spatial path of the movement to the second target, and the amplification of the speed in the second movement. Experimental data show that the above properties are influenced by the movement reaction time and the interstimulus interval between the onset of the first and second target. Hypotheses in the literature concerning the interpretation of the observed data include a) the second movement is superimposed on the first movement (Henis and Flash, 1995), b) the first movement is aborted and the second movement is planned to smoothly connect the current state of the arm with the new target (Hoff and Arbib, 1992), c) the second movement is initiated by a new control signal that replaces the first movement's control signal, but does not take the state of the system into account (Flanagan et al., 1993), and d) the second movement is initiated by a new goal command, but the control structure stays unchanged, and feedback from the current state is taken into account (Hoff and Arbib, 1993). We investigate target switching from the viewpoint of Dynamic Movement Primitives (DMPs). DMPs are trajectory planning units that are formalized as stable nonlinear attractor systems (Ijspeert et al., 2002). They are a useful framework for biological motor control as they are highly flexible in creating complex rhythmic and discrete behaviors that can quickly adapt to the inevitable perturbations of dynamically changing, stochastic environments. In this model, target switching is accomplished simply by updating the target input to the discrete movement primitive for reaching. The reaching trajectory in this model can be straight or take any other route; in contrast, the Hoff and Arbib (1993) model is restricted to straight reaching movement plans.
In the present study, we use DMPs to reproduce in simulation a large number of target-switching experimental data from the literature and to show that online correction and the observed target-switching phenomena can be accomplished by changing the goal state of an on-going DMP, without the need to switch to different movement primitives or to re-plan the movement.
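The goal-update mechanism can be made concrete with a stripped-down sketch: the transformation system of a discrete DMP is a critically damped attractor, and target switching only reassigns its goal parameter. The gains, timing, and the omission of the forcing term and canonical system are simplifying assumptions here.

```python
import numpy as np

# Minimal discrete movement primitive: a critically damped spring pulling
# the state y toward a goal g. Switching targets mid-movement is just an
# update of g; the state converges smoothly to the new goal.
def simulate(goal_switch_at=0.5, T=2.0, dt=0.001):
    alpha, beta = 25.0, 25.0 / 4.0        # critically damped gains
    y, yd, g = 0.0, 0.0, 1.0              # start at 0, first goal at 1
    traj = []
    for step in range(int(T / dt)):
        if step * dt >= goal_switch_at:
            g = -0.5                      # second target appears
        ydd = alpha * (beta * (g - y) - yd)
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

traj = simulate()
print(traj[-1])  # settles at the switched goal, -0.5
```

No re-planning occurs at the switch: the same differential equation keeps integrating, which is the point the abstract makes about changing the goal state of an on-going DMP.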

PDF [BibTex]

Inverse dynamics control with floating base and constraints

Nakanishi, J., Mistry, M., Schaal, S.

In International Conference on Robotics and Automation (ICRA2007), pages: 1942-1947, Rome, Italy, April 10-14, 2007, clmc (inproceedings)

Abstract
In this paper, we address the issues of compliant control of a robot under contact constraints, with the goal of using joint-space-based pattern generators as movement primitives, as often considered in the studies of legged locomotion and biological motor control. For this purpose, we explore inverse dynamics control of constrained dynamical systems. When the system is overconstrained, it is not straightforward to formulate an inverse dynamics control law, since the problem becomes ill-posed: infinitely many combinations of joint torques achieve the desired joint accelerations. The goal of this paper is to develop a general and computationally efficient inverse dynamics algorithm for a robot with a free-floating base and constraints. We suggest an approximate inverse dynamics algorithm that treats constraint forces, computed with a Lagrange multiplier method, simply as external forces, based on Featherstone's floating-base formulation of inverse dynamics. We present how all the necessary quantities to compute our controller can be efficiently extracted from Featherstone's spatial notation of robot dynamics. We evaluate the effectiveness of the suggested approach on a simulated biped robot model.
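To illustrate the ill-posedness and the Lagrange-multiplier resolution in the simplest terms, here is a numerical toy: any tau = M qdd + h - J^T lam realizes the same constrained acceleration, and choosing lam by least squares yields the minimum-norm torque. M, h, J, and qdd below are arbitrary placeholder values, not the paper's floating-base model.

```python
import numpy as np

# Toy constrained inverse dynamics: M qdd = tau - h + J^T lam with an
# active constraint J qdd = 0. Every lam gives a valid torque vector, so
# we resolve the redundancy by picking the minimum-norm tau.
M = np.diag([2.0, 1.0, 1.5])          # joint-space inertia matrix
h = np.array([0.3, -0.1, 0.2])        # Coriolis, centrifugal, gravity terms
J = np.array([[1.0, 0.0, -1.0]])      # constraint Jacobian
qdd = np.array([0.2, -0.4, 0.2])      # desired accelerations, J @ qdd = 0

tau0 = M @ qdd + h                        # unconstrained inverse dynamics
lam = np.linalg.solve(J @ J.T, J @ tau0)  # minimizes ||tau0 - J^T lam||
tau = tau0 - J.T @ lam                    # minimum-norm torque solution
print(tau)  # [0.6, -0.5, 0.6]
```

Substituting tau and the constraint force J^T lam back into the dynamics recovers exactly the desired accelerations, which is the sense in which the constraint forces can be treated as external forces.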

link (url) [BibTex]

Physisorption von Wasserstoff in neuen Materialien mit großer spezifischer Oberfläche

Schmitz, B.

Universität Bonn, Bonn, 2007 (mastersthesis)

[BibTex]

Cluster expansions in multicomponent systems: precise expansions from noisy databases

Diaz-Ortiz, A., Dosch, H., Drautz, R.

Journal of Physics: Condensed Matter, 19, 2007 (article)

DOI [BibTex]

Towards spin injection into silicon

Dash, S. P.

Universität Stuttgart, Stuttgart, 2007 (phdthesis)

link (url) [BibTex]

Bestimmung der kritischen Schichtdicken ferromagnetischer Plättchen für Eindomänenverhalten

Soehnle, S.

Universität Stuttgart, Stuttgart, 2007 (mastersthesis)

[BibTex]

Unusual propagation of magnetic avalanches in gold covered MgB2

Albrecht, J., Matveev, A. T., Habermeier, H.-U.

Physica C, 460-462, pages: 1245-1246, 2007 (article)

DOI [BibTex]

Lowering of the L10 ordering temperature of FePt nanoparticles by He+ ion irradiation

Wiedwald, U., Klimmer, A., Kern, B., Han, L., Boyen, H.-G., Ziemann, P., Fauth, K.

Applied Physics Letters, 90, 2007 (article)

[BibTex]

Magnetic core shell nanoparticles characterized by X-ray absorption and magnetic circular dichroism

Fauth, K.

Modern Physics Letters B, 21(18):1179-1187, 2007 (article)

[BibTex]

Magnetic moment of Fe in oxide-free FePt nanoparticles

Dmitrieva, O., Spasova, M., Antoniak, C., Acet, M., Dumpich, G., Kästner, J., Farle, M., Fauth, K., Wiedwald, U., Boyen, H.-G., Ziemann, P.

Physical Review B, 76, 2007 (article)

[BibTex]

The effect of bismuth segregation on the faceting of Σ3 and Σ9 coincidence boundaries in copper bicrystals

Straumal, B. B., Polyakov, S. A., Chang, L.-S., Mittemeijer, E. J.

International Journal of Materials Research, 98, pages: 451-456, 2007 (article)

[BibTex]

Hot isostatic pressing of Cu-Bi polycrystals with liquid-like grain boundary layers

Chang, L.-S., Straumal, B., Rabkin, E., Lojkowski, W., Gust, W.

Acta Materialia, 55, pages: 335-343, 2007 (article)

[BibTex]

Spatially resolved magnetic response in core shell nanoparticles

Fauth, K., Goering, E., Theil Kuhn, L.

Modern Physics Letters B, 21(18):1197-1200, 2007 (article)

[BibTex]