Autonomous Motion
Symposium
27 – 29 March 2017, all day | Stanford University
Interactive Multisensory Object Perception for Embodied Agents
Organizers
Autonomous Motion
Jeannette Bohg, Affiliated Researcher
Symposium at the AAAI Spring Symposium Series in 2017 at Stanford University.
For a robot to perceive object properties with multiple sensory modalities, it needs to interact with the object through action. This interaction requires that the agent be embodied, i.e., that the robot interact with its environment through a physical body within that environment. A major challenge is getting a robot to interact with the scene quickly and efficiently. Furthermore, learning to perceive and reason about objects in terms of multiple sensory modalities remains a longstanding challenge in robotics. Multiple lines of evidence from psychology and cognitive science demonstrate that humans rely on multiple senses (e.g., audition, haptics, touch) in contexts ranging from language learning to learning manipulation skills. Nevertheless, most object representations used by robots today rely solely on visual input (e.g., a 3D object model) and thus cannot be used to learn or reason about non-visual object properties such as weight or texture.
The major question we want to address is: how do we collect large datasets from robots exploring the world with multisensory inputs, and what algorithms can we use to learn and act with this data? For instance, at several major universities there are robots that can operate autonomously (e.g., navigate through a building, manipulate objects) for long periods of time. Such robots could potentially generate large amounts of multimodal sensory data, coupled with the robot's actions. While the community has focused on how to handle visual information (e.g., deep learning of visual features from large-scale databases), there have been far fewer explorations of how to utilize and learn from the very different scales of data collected from very different sensors. Specific challenges include the fact that different sensors produce data at different sampling rates and different resolutions.
Furthermore, data produced by a robot acting in the world is typically not independently and identically distributed (a common assumption of machine learning algorithms), as the current data point often depends on previous actions. This symposium is co-organized with Vivian Chu, Jivko Sinapov, Sonia Chernova, and Andrea Thomaz.
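To make the multi-rate challenge concrete, here is a minimal sketch (not from the symposium; sensor names and rates are hypothetical) of one common preprocessing step: aligning two sensor streams sampled at different frequencies onto a shared timeline by nearest-neighbor timestamp matching.

```python
import bisect

def timestamps(rate_hz, duration_s):
    """Sample timestamps for a sensor running at rate_hz for duration_s."""
    n = int(rate_hz * duration_s)
    return [i / rate_hz for i in range(n)]

def nearest_index(ts, t):
    """Index of the timestamp in sorted list ts closest to time t."""
    i = bisect.bisect_left(ts, t)
    if i == 0:
        return 0
    if i == len(ts):
        return len(ts) - 1
    # Pick whichever neighbor is temporally closer.
    return i if ts[i] - t < t - ts[i - 1] else i - 1

# A hypothetical 30 Hz camera and 100 Hz tactile sensor over one second.
camera_ts = timestamps(30, 1.0)
tactile_ts = timestamps(100, 1.0)

# Pair every camera frame with its temporally closest tactile reading.
pairs = [(i, nearest_index(tactile_ts, t)) for i, t in enumerate(camera_ts)]
```

Real pipelines must additionally handle clock offsets between devices, dropped samples, and the different spatial resolutions of each modality, but even this toy alignment shows why a single fixed-rate dataset format does not fit multisensory robot data.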