Autonomous Motion
Symposium
27 – 29 March 2017 | Stanford University
Interactive Multisensory Object Perception for Embodied Agents
ORGANIZERS
Autonomous Motion
Jeannette Bohg
Affiliated Researcher
Symposium at the AAAI Spring Symposium Series in 2017 at Stanford University.
For a robot to perceive object properties with multiple sensory modalities, it needs to interact with the object through action. This interaction requires that the agent be embodied, i.e., that the robot interact with its environment through a physical body within that environment. A major challenge is getting a robot to interact with a scene quickly and efficiently. Furthermore, learning to perceive and reason about objects in terms of multiple sensory modalities remains a longstanding challenge in robotics. Multiple lines of evidence from psychology and cognitive science demonstrate that humans rely on multiple senses (e.g., audition, haptics, and touch) in a broad variety of contexts, from language learning to learning manipulation skills. Nevertheless, most object representations used by robots today rely solely on visual input (e.g., a 3D object model) and thus cannot be used to learn or reason about non-visual object properties (weight, texture, etc.). The major question we want to address is: how do we collect large datasets from robots exploring the world with multi-sensory inputs, and what algorithms can we use to learn and act with these data? For instance, several major universities have robots that can operate autonomously (e.g., navigate through a building, manipulate objects) for long periods of time. Such robots could potentially generate large amounts of multi-modal sensory data, coupled with the robots' actions. While the community has focused on how to handle visual information (e.g., deep learning of visual features from large-scale databases), there have been far fewer explorations of how to utilize and learn from the very different scales of data collected from very different sensors. Specific challenges include the fact that different sensors produce data at different sampling rates and resolutions.

Furthermore, data produced by a robot acting in the world are typically not independently and identically distributed (a common assumption of machine learning algorithms), as the current data point often depends on previous actions. This symposium is co-organized with Vivian Chu, Jivko Sinapov, Sonia Chernova, and Andrea Thomaz.
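To make the sampling-rate challenge above concrete, here is a minimal, purely illustrative sketch of one common workaround: resampling a high-rate sensor stream onto the timeline of a lower-rate one so the modalities can be paired sample-for-sample. The sensor rates, signal, and variable names are hypothetical, not drawn from any system discussed at the symposium.

```python
import numpy as np

# Hypothetical setup: a camera sampled at 30 Hz and a force/torque sensor
# sampled at 100 Hz over the same 2-second manipulation interaction.
cam_t = np.arange(0.0, 2.0, 1 / 30)    # camera frame timestamps (s)
ft_t = np.arange(0.0, 2.0, 1 / 100)    # force-sensor timestamps (s)
ft_z = np.sin(2 * np.pi * ft_t)        # stand-in force readings

# Linearly interpolate the high-rate force signal onto the camera's
# timeline, so each image frame is paired with one aligned force value.
ft_on_cam = np.interp(cam_t, ft_t, ft_z)

# The resampled force stream now has one value per camera frame.
assert ft_on_cam.shape == cam_t.shape
```

Timestamp-based interpolation is only one option; other choices (windowed averaging, learned temporal encoders) trade off fidelity against simplicity, which is part of what makes multi-rate fusion an open problem.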
Further Information