Since the release of the Kinect, RGB-D cameras have been used in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved with external cinema cameras. In the second part of the talk, I will present my work on using an RGB-D camera to enable real walking in virtual reality by making the user aware of the surrounding obstacles. I will present a pipeline that creates an occupancy map from a point cloud on the fly on a mobile phone used as a virtual reality headset. This occupancy map can then be used to prevent the user from hitting physical obstacles while walking in the virtual scene.
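The occupancy-map step described above can be illustrated with a minimal sketch: project the above-floor points of an RGB-D point cloud onto a 2D grid of occupied cells. This is a generic illustration, not the speaker's actual pipeline; the function name, grid resolution, floor-height threshold, and grid origin are all assumptions.

```python
import numpy as np

def occupancy_map(points, cell_size=0.1, floor_z=0.05,
                  grid_shape=(64, 64), origin=(-3.2, -3.2)):
    """Project a 3D point cloud (x, y, z in meters) onto a 2D occupancy grid.

    Cells that receive at least one point above the floor plane are
    marked as occupied, i.e. as obstacles to avoid while walking.
    """
    grid = np.zeros(grid_shape, dtype=bool)
    # Keep only points above the floor plane; those are potential obstacles.
    obstacles = points[points[:, 2] > floor_z]
    # Map x/y coordinates to integer cell indices.
    ij = np.floor((obstacles[:, :2] - np.asarray(origin)) / cell_size).astype(int)
    # Discard points that fall outside the grid.
    valid = ((ij >= 0).all(axis=1)
             & (ij[:, 0] < grid_shape[0])
             & (ij[:, 1] < grid_shape[1]))
    ij = ij[valid]
    grid[ij[:, 0], ij[:, 1]] = True
    return grid

# Two obstacle points and one floor point:
pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 1.0, 0.5],
                [2.0, 2.0, 0.0]])
grid = occupancy_map(pts)
```

A real system would additionally filter sensor noise and update the grid incrementally as new depth frames arrive.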
Organizers: Sergi Pujades
In 1995, Fraunhofer IPA embarked on a mission to design a personal robot assistant for everyday tasks. In the following years, Care-O-bot developed into a long-term experiment for exploring and demonstrating new robot technologies and future product visions. The recent fourth generation of Care-O-bot, introduced in 2014, aimed at designing an integrated system that addressed a number of innovations, such as modularity, “low cost” through new manufacturing processes, and advanced human-user interaction. Some 15 systems were built, and the intellectual property (IP) generated by over 20 years of research was recently licensed to a start-up. The presentation will review the path from an experimental platform for building up expertise in various robotic disciplines to recent pilot applications based on the now-commercial Care-O-bot hardware.
With the ubiquity of catalyzed reactions in manufacturing, the emergence of the device-laden Internet of Things, and global challenges with respect to water and energy, it has never been more important to understand atomic interactions in the functional materials that can provide solutions in these areas.
Big Data has become the general term for the benefits and threats that result from the huge amounts of data collected in all parts of society. While data acquisition, storage, and access are relevant technical aspects, the analysis of the collected data turns out to be at the core of the Big Data challenge. Automatic data mining and information retrieval techniques have made much progress, but many application scenarios remain in which the human in the loop plays an essential role. Consequently, interactive visualization techniques have become a key discipline of Big Data analysis, and the field is reaching out to many new application domains. This talk will give examples from current visualization research projects at the University of Stuttgart, demonstrating the thematic breadth of application scenarios and the technical depth of the employed methods. We will cover advances in the scientific visualization of fields and particles, visual analytics of document collections and movement patterns, as well as cognitive aspects.
Gaussian processes are a principled, practical, probabilistic approach to learning in flexible non-parametric models and have found numerous applications in regression, classification, unsupervised learning, and reinforcement learning. Inference, learning, and prediction can be done exactly on small data sets with a Gaussian likelihood. In more realistic applications with large-scale data and more complicated likelihoods, approximations are necessary. The variational framework for approximate inference in Gaussian processes has recently emerged as a highly effective and practical tool. I will review this framework and demonstrate its capabilities when applied to non-linear state-space models.
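The exact-inference setting mentioned above (small data, Gaussian likelihood) can be sketched with the standard closed-form GP regression equations, here using an RBF kernel. This is a generic textbook implementation (Rasmussen and Williams' Algorithm 2.1), not the speaker's variational framework, and the kernel choice and hyperparameter values are assumptions.

```python
import numpy as np

def gp_predict(X, y, Xs, lengthscale=0.3, signal_var=1.0, noise_var=1e-6):
    """Exact GP regression posterior at test inputs Xs, given training
    data (X, y) and an RBF kernel with the stated hyperparameters."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-0.5 * sq / lengthscale ** 2)

    K = k(X, X) + noise_var * np.eye(len(X))   # noisy training covariance
    Ks, Kss = k(Xs, X), k(Xs, Xs)
    L = np.linalg.cholesky(K)                  # stable inversion via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                          # posterior mean at Xs
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v                        # posterior covariance at Xs
    return mean, cov

# With near-zero observation noise, the posterior mean interpolates the data:
X = np.linspace(0.0, 1.0, 5)[:, None]
y = np.sin(3.0 * X[:, 0])
mean, cov = gp_predict(X, y, X)
```

The cost of the Cholesky factorization is cubic in the number of training points, which is exactly why the large-scale regime requires the variational approximations the talk discusses.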
Organizers: Philipp Hennig
Taking advantage of state-of-the-art micro/nanotechnologies, fascinating functional biomaterials, and integrated biosystems, we can address numerous important problems in fundamental biology as well as clinical applications in cancer diagnosis and treatment.
Organizers: Peer Fischer
An exciting talk on modeling anguilliform swimming and on robotic testing.
Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account the class-discriminative image aspects that justify visual predictions. In this talk, I will present my past and current work on Zero-Shot Learning, Vision and Language for Generative Modeling, and Explainable Artificial Intelligence, covering (1) how image classification models can be generalized to cases where no visual training data is available, (2) how to generate images and image features from detailed visual descriptions, and (3) how our models focus on the discriminative properties of the visible object, jointly predict a class label, and explain why the predicted label is appropriate for the image whereas another label is not.
Organizers: Andreas Geiger
Complex shapes can be summarized using a coarsely defined structure that is consistent and robust across a variety of observations. However, existing synthesis techniques do not consider structural decomposition during synthesis, which causes the generation of implausible or structurally unrealistic shapes. We explore how structure-aware reasoning can benefit existing generative techniques for complex 2D and 3D shapes. We evaluate our methodology on a 3D dataset of chairs and a 2D dataset of typefaces.
Organizers: Sergi Pujades
Touch requires mechanical contact and is governed by the physics of friction. Frictional movements may convert the continuous 3D profile of textural objects into discrete and probabilistic movement events of the viscoelastic integument (skin/hair) called stick-slip movements (slips). This complex transformation may further be determined by the microanatomy and the active movements of the sensing organ. Thus, the integument may realize a computation, transforming the tactile world in a context-dependent way - long before it even activates neurons. The possibility that the tactile world is perceived through these ‘fractured goggles’ of friction has been largely ignored by classical perceptual and neuroscientific work. I will present biomechanical, neuroscientific, and behavioral work supporting the slip hypothesis.
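The stick-slip phenomenon underlying the talk can be illustrated with the textbook spring-slider model: a block on a frictional surface is dragged through a spring whose far end moves at constant velocity, sticking until static friction is overcome and then slipping against kinetic friction. Note how a smooth, continuous drive produces discrete slip events. This is a generic physics illustration, not the speaker's biomechanical model of the integument, and all parameter values are assumptions.

```python
import numpy as np

def stick_slip(v_drive=1.0, k=10.0, m=1.0, mu_s=0.6, mu_k=0.3, g=9.81,
               dt=1e-3, steps=20000):
    """Spring-slider model of stick-slip friction.

    The block sticks until the spring force exceeds static friction,
    then slips against kinetic friction until it comes to rest again.
    Returns the times at which slip events begin.
    """
    x = 0.0              # block position
    v = 0.0              # block velocity
    sticking = True
    slip_onsets = []
    for i in range(steps):
        t = i * dt
        stretch = v_drive * t - x        # spring extension
        f_spring = k * stretch
        if sticking:
            # Static friction holds the block until the spring force wins.
            if abs(f_spring) > mu_s * m * g:
                sticking = False
                slip_onsets.append(t)
        else:
            # During slip, kinetic friction opposes the motion.
            direction = np.sign(v) if v != 0.0 else np.sign(f_spring)
            f_net = f_spring - direction * mu_k * m * g
            v += (f_net / m) * dt        # forward-Euler integration
            x += v * dt
            if v <= 0.0:                 # the block re-sticks when it stops
                v = 0.0
                sticking = True
    return slip_onsets

onsets = stick_slip()
```

With the assumed parameters, the first slip occurs when the spring force k * v_drive * t first exceeds mu_s * m * g, i.e. near t = 0.59 s, and the cycle then repeats roughly periodically.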
Organizers: Katherine J. Kuchenbecker