Jim Mainprice received his M.S. from Polytech' Montpellier, France, and his Ph.D. in robotics and computer science from the University of Toulouse, France, in 2009 and 2012, respectively. His research interests include motion planning, machine learning, human-robot interaction, and human movement understanding. While completing his Ph.D. at LAAS-CNRS, he took part in the European Community's 7th Framework Programme projects DEXMART and SAPHARI. From January 2013 to December 2014, he was a postdoctoral researcher in the Autonomous Robotic Collaboration Lab at the Worcester Polytechnic Institute in Massachusetts, USA, where he participated in the DARPA Robotics Challenge with the DRC-Hubo team. Since January 2015, he has been affiliated with the Autonomous Motion Department of the Max Planck Institute for Intelligent Systems in Tübingen, Germany. Since April 2017, he has led the Humans to Robots Motion (HRM) research group at the University of Stuttgart, Germany.
Social robots and collaborative robots that must interact with people reactively are difficult to program. This difficulty stems from the range of skills the programmer needs: to provide an engaging user experience, the behavior must convey a sense of aesthetics while operating robustly in a continuously changing environment. The Playful framework allows such dynamic behaviors to be composed from a basic set of action and perception primitives. Within this framework, a behavior is encoded as a list of declarative statements corresponding to high-level sensory-motor couplings. To enable non-expert users to program such behaviors, we propose a Learning from Demonstration (LfD) technique that maps human motion capture directly to a Playful script. The approach identifies the sensory-motor couplings that are active at each step using the Viterbi path in a Hidden Markov Model (HMM). Given these activation patterns, binary classifiers called evaluations are trained to associate activations with sensory data. Modularity is increased by clustering the sensory-motor couplings, yielding a hierarchical tree structure. The novelty of the proposed approach is that the learned behavior is encoded not as trajectories in a task space, but as couplings between sensory information and high-level motor actions. This yields advantages in behavioral generalization and in the reactivity displayed by the robot.
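The activation-recovery step described above can be sketched as a standard Viterbi decode over hidden activation states. The two couplings ("look_at", "reach"), the discretized sensory observations, and all probabilities below are illustrative assumptions for the sketch, not values or identifiers from the paper:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for `obs` (log-space)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            best_prev = max(states, key=lambda p: V[-2][p] + math.log(trans_p[p][s]))
            V[-1][s] = (V[-2][best_prev] + math.log(trans_p[best_prev][s])
                        + math.log(emit_p[s][o]))
            new_path[s] = path[best_prev] + [s]
        path = new_path
    best_last = max(states, key=lambda s: V[-1][s])
    return path[best_last]

# Toy model: two sensory-motor couplings as hidden states,
# discretized sensory feature ("far"/"near" to a target) as observation.
states = ("look_at", "reach")
start_p = {"look_at": 0.7, "reach": 0.3}
trans_p = {"look_at": {"look_at": 0.8, "reach": 0.2},
           "reach":   {"look_at": 0.3, "reach": 0.7}}
emit_p = {"look_at": {"far": 0.8, "near": 0.2},
          "reach":   {"far": 0.1, "near": 0.9}}

print(viterbi(["far", "far", "near", "near"], states, start_p, trans_p, emit_p))
# → ['look_at', 'look_at', 'reach', 'reach']
```

The decoded path plays the role of the activation pattern: it labels each time step with the coupling most likely responsible for the observed sensory data.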
IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty arising from noisy sensing, inaccurate models, and hard-to-predict environment dynamics. Our approach emphasizes continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system in which real-time object and robot tracking, together with ambient world modeling, provide the necessary input to feedback controllers and continuous motion optimizers. Specifically, they supply attractive and repulsive potentials from which the controllers and motion optimizers compute movement policies online at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment, and compare it with a more traditional, still widely used sense-plan-act approach. In 333 experiments, we show the robustness and accuracy of the proposed system.
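The attractive and repulsive potentials mentioned above can be illustrated with a classical point-mass potential-field controller. The gains, influence radius, and gradient forms below are textbook choices for the sketch, not the controllers or parameters used in the paper:

```python
import math

def potential_step(pos, goal, obstacle,
                   k_att=1.0, k_rep=0.5, rho0=1.0, dt=0.05):
    """One Euler step of a point mass following the negative potential gradient."""
    # Attractive term: linear pull toward the goal.
    force = [k_att * (g - p) for p, g in zip(pos, goal)]
    # Repulsive term: active only within the obstacle's influence radius rho0.
    d = math.dist(pos, obstacle)
    if 0.0 < d < rho0:
        scale = k_rep * (1.0 / d - 1.0 / rho0) / d**2
        force = [f + scale * (p - o) for f, p, o in zip(force, pos, obstacle)]
    return [p + dt * f for p, f in zip(pos, force)]

# Drive a point from the origin to (2, 0), deflecting around an
# obstacle placed at (1, 0.5) near the straight-line path.
pos = [0.0, 0.0]
for _ in range(400):
    pos = potential_step(pos, goal=[2.0, 0.0], obstacle=[1.0, 0.5])
```

Because the repulsive gradient is recomputed at every step from the current distance, the same loop remains valid when the obstacle estimate is updated online by a perception system, which is the property the integrated system above exploits.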
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.