Information required to plan grasps, such as object shape and pose, must be extracted from the environment through sensors. However, sensory measurements are noisy and carry a degree of uncertainty, and object parameters relevant to grasp planning, e.g., friction and mass, may not be accurately estimated. In real-world settings, these issues can lead to grasp failures with serious consequences. I will talk about learning approaches that use real sensory data, e.g., visual and tactile, to assess grasp success, both discriminative and generative, which can be used to trigger plan corrections. I will also present a probabilistic approach for learning object models from visual and tactile data through physical interaction with an object: our robot explores unknown objects by strategically touching the parts whose shape is most uncertain.
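The abstract does not specify the object model, but the idea of "touching strategically at parts that are uncertain in terms of shape" can be illustrated with a minimal sketch: model sparse touch measurements with a Gaussian process and probe next where the posterior variance is largest. Everything below (the 1D height profile, the kernel hyperparameters, the function names) is a hypothetical illustration, not the speaker's actual method.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, var=1.0):
    """Squared-exponential kernel between 1D location arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """GP posterior mean and variance at x_query, given touch samples."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(x_query, x_query)) - np.sum(v * v, axis=0)
    return mean, var

def next_touch(x_train, y_train, candidates):
    """Pick the candidate location where shape uncertainty is highest."""
    _, var = gp_posterior(x_train, y_train, candidates)
    return candidates[np.argmax(var)]

# Toy example: three touches on an unknown 1D surface profile;
# the robot would probe next where the model is least certain.
true_shape = lambda x: np.sin(3 * x)
touched = np.array([0.1, 0.5, 0.9])
heights = true_shape(touched)
grid = np.linspace(0.0, 1.0, 101)
probe = next_touch(touched, heights, grid)
```

In a real system the model would be defined over a 3D surface (e.g., an implicit-surface representation) rather than a 1D profile, but the selection rule is the same: rank candidate contact points by predictive variance and touch the most uncertain one.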
Yasemin Bekiroglu (KTH Stockholm, Computer Vision and Active Perception lab)