Since the release of the Kinect, RGB-D cameras have been used in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved by the use of external cinema cameras. In the second part of the talk, I will present my work on how an RGB-D camera can be used to enable real walking in virtual reality by making the user aware of the surrounding obstacles. I will present a pipeline that creates an occupancy map from a point cloud on the fly on a mobile phone used as a virtual reality headset. This occupancy map can then be used to prevent the user from hitting physical obstacles while walking in the virtual scene.
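The occupancy-map step described above can be sketched as a simple projection of the point cloud onto a 2-D grid. All parameter names and values below (cell size, mapped area, obstacle height band) are illustrative assumptions, not the actual pipeline from the talk:

```python
import numpy as np

def occupancy_map(points, cell=0.1, size=4.0, height_band=(0.2, 1.8)):
    """Project a 3-D point cloud (N x 3 array of x, y, z in metres, y up)
    onto a 2-D occupancy grid covering [-size/2, size/2] in x and z."""
    n = int(round(size / cell))
    grid = np.zeros((n, n), dtype=bool)
    # Keep only points at obstacle-relevant heights (e.g. torso level),
    # which drops floor and ceiling returns.
    mask = (points[:, 1] >= height_band[0]) & (points[:, 1] <= height_band[1])
    xz = points[mask][:, [0, 2]]
    # Metric coordinates -> grid indices.
    idx = np.floor((xz + size / 2) / cell).astype(int)
    # Discard points that fall outside the mapped area.
    inside = np.all((idx >= 0) & (idx < n), axis=1)
    grid[idx[inside, 0], idx[inside, 1]] = True
    return grid
```

A warning system can then test the cells along the user's predicted walking path against this grid.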
Organizers: Sergi Pujades
Optimal control problems are often too complex to solve analytically. Computational methods usually replace the continuous infinite-dimensional problem by a finite-dimensional discrete approximation. The talk will survey classical discretization techniques based on a Runge-Kutta approximation to the differential equations (an h-method) and then introduce recent approximations based on collocation at the roots of orthogonal polynomials (a p-method). The best approximations are often achieved using an hp-framework that combines the best features of both approaches. Numerical results obtained using GPOPS-II (General Pseudospectral Optimal Control Software package) will be presented.
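The defining property of an h-method can be illustrated with a classical fourth-order Runge-Kutta integrator: halving the step size h at fixed order should cut the error by roughly 2^4 = 16, whereas a p-method would instead raise the polynomial degree on a fixed mesh. A minimal sketch (the test equation x' = x is my choice, not from the talk):

```python
import math

def rk4(f, x0, t0, t1, n):
    """Classical fourth-order Runge-Kutta with n uniform steps (an h-method:
    accuracy is improved by shrinking the step size h, not the order)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Test problem x' = x, x(0) = 1 on [0, 1]; exact solution is e.
f = lambda t, x: x
err = lambda n: abs(rk4(f, 1.0, 0.0, 1.0, n) - math.e)
ratio = err(10) / err(20)  # halving h: expect roughly 2**4 = 16
```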
Organizers: Jia-Jie Zhu
The Gaussian mechanism is an essential building block used in a multitude of differentially private data analysis algorithms. In this talk I will revisit the classical analysis of the Gaussian mechanism and show that it has several important limitations. For example, our analysis reveals that the variance formula for the original mechanism is far from tight in the high privacy regime and that it cannot be extended to the low privacy regime. We address these limitations by developing a new Gaussian mechanism whose variance is optimally calibrated by solving an equation involving the Gaussian cumulative distribution function. Our analysis side-steps the use of tail-bound approximations and relies on a novel characterisation of differential privacy that might be of independent interest. We show numerically that analytical calibration removes at least a third of the variance of the noise compared to the classical Gaussian mechanism. We also propose to equip the Gaussian mechanism with a post-processing step based on adaptive denoising estimators, leveraging the fact that the variance of the perturbation is known. Experiments with synthetic and real data show that this denoising step yields dramatic accuracy improvements in the high-dimensional regime. Based on joint work with Y.-X. Wang, to appear at ICML 2018. Pre-print: https://arxiv.org/abs/1805.06530
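As a sketch of what "solving an equation involving the Gaussian cumulative distribution function" can look like in practice, the snippet below bisects for the smallest noise level satisfying an exact (eps, delta) condition. The condition used here is my paraphrase of the characterisation in the pre-print and should be checked against it:

```python
import math

def gauss_cdf(t):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def dp_delta(sigma, eps, sens=1.0):
    """Smallest delta for which adding N(0, sigma^2) noise to a query with
    L2 sensitivity `sens` is (eps, delta)-DP, using the CDF-based
    characterisation (my paraphrase of the pre-print's condition)."""
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return gauss_cdf(a - b) - math.exp(eps) * gauss_cdf(-a - b)

def calibrate_sigma(eps, delta, sens=1.0, lo=1e-5, hi=1e5, iters=200):
    """Bisect for the smallest sigma with dp_delta(sigma) <= delta.
    dp_delta is decreasing in sigma, so bisection applies."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dp_delta(mid, eps, sens) > delta:
            lo = mid
        else:
            hi = mid
    return hi
```

For eps = 1 and delta = 1e-5 with unit sensitivity, the calibrated sigma comes out noticeably smaller than the classical choice sigma = sqrt(2 ln(1.25/delta)) * sens / eps, consistent with the abstract's claim that the classical formula is far from tight.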
I will describe recent research in my lab on haptics and robotics. It has been a longstanding challenge to realize engineering systems that can match the amazing perceptual and motor feats of biological systems for touch, including the human hand. Some of the difficulties of meeting this objective can be traced to our limited understanding of the mechanics, to the high dimensionality of the signals, and to the multiple length and time scales - physical regimes - involved. An additional source of richness and complication arises from the sensitive dependence of what we feel on what we do, i.e. from the tight coupling between touch-elicited mechanical signals, object contacts, and actions. I will describe research in my lab that has aimed at addressing these challenges, and will explain how the results are guiding the development of new technologies for haptics, wearable computing, and robotics.
Organizers: Katherine J. Kuchenbecker
This paper uses the relationship between graph conductance and spectral clustering to study (i) the failures of spectral clustering and (ii) the benefits of regularization. The explanation is simple. Sparse and stochastic graphs create a lot of small trees that are connected to the core of the graph by only one edge. Graph conductance is sensitive to these noisy "dangling sets." Spectral clustering inherits this sensitivity. The second part of the paper starts from a previously proposed form of regularized spectral clustering and shows that it is related to the graph conductance on a "regularized graph." We call the conductance on the regularized graph CoreCut. Based upon previous arguments that relate graph conductance to spectral clustering (e.g. Cheeger inequality), minimizing CoreCut relaxes to regularized spectral clustering. Simple inspection of CoreCut reveals why it is less sensitive to small cuts in the graph. Together, these results show that unbalanced partitions from spectral clustering can be understood as overfitting to noise in the periphery of a sparse and stochastic graph. Regularization fixes this overfitting. In addition to this statistical benefit, these results also demonstrate how regularization can improve the computational speed of spectral clustering. We provide simulations and data examples to illustrate these results.
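A minimal sketch of regularized spectral clustering as described above: add tau/n to every entry of the adjacency matrix, then bipartition by the sign of the second eigenvector of the normalized Laplacian. The default choice tau = average degree is a common heuristic, assumed here rather than taken from the paper:

```python
import numpy as np

def regularized_spectral_bipartition(A, tau=None):
    """Bipartition a graph from the sign of the second eigenvector of the
    normalized Laplacian of the regularized adjacency A + (tau/n) * J.
    tau defaults to the average degree (a common heuristic, assumed here)."""
    n = A.shape[0]
    if tau is None:
        tau = A.sum() / n  # average degree
    A_tau = A + tau / n
    d = A_tau.sum(axis=1)
    # Normalized Laplacian I - D^{-1/2} A_tau D^{-1/2}.
    L = np.eye(n) - (A_tau / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return vecs[:, 1] >= 0

# Two 5-cliques joined by one edge, plus a "dangling" node hanging off
# node 0 by a single edge -- the kind of peripheral set described above.
A = np.zeros((11, 11))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[4, 5] = A[5, 4] = 1.0
A[0, 10] = A[10, 0] = 1.0
labels = regularized_spectral_bipartition(A)
```

On this example the regularized partition separates the two cliques and keeps the dangling node with the clique it hangs off, rather than cutting it out on its own.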
Organizers: Damien Garreau
The problem of text normalization is simple to state: transform a given arbitrary text into its spoken form. In the context of text-to-speech systems – which we will focus on – this can be exemplified by turning the text “$200” into “two hundred dollars”. Lately, interest in solving this problem with deep learning techniques has grown, since it is a highly context-dependent problem that is still being solved by ad-hoc solutions; Google even started a contest on the Kaggle platform to solve it. In this talk we will see how this problem has been approached as part of a Master's thesis. Namely, the problem is tackled as if it were an automatic translation problem from English to normalized English, and so the proposed architecture is a neural machine translation architecture with the addition of traditional attention mechanisms. Such a network is typically composed of an encoder and a decoder, both of which are multi-layer LSTM networks. As part of this work, and with the aim of proving the feasibility of convolutional neural networks for natural-language processing problems, we propose and compare different architectures for the encoder based on convolutional networks. In particular, we propose a new architecture called the Causal Feature Extractor, which proves to be a strong encoder as well as an attention-friendly architecture.
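The abstract does not specify the Causal Feature Extractor itself, but the "causal" ingredient of such convolutional encoders can be sketched as a left-padded 1-D convolution, so that the feature at position t depends only on inputs up to t (names and shapes below are my assumptions):

```python
import numpy as np

def causal_conv1d(x, w):
    """1-D convolution whose output at position t depends only on
    x[: t + 1], obtained by left-padding with k - 1 zeros.
    x: (T,) input sequence, w: (k,) filter, y[t] = sum_i w[i] * x[t - i]."""
    k = len(w)
    xp = np.concatenate([np.zeros(k - 1), x])
    return np.array([xp[t : t + k] @ w[::-1] for t in range(len(x))])

# With w = [1, 1], each output is the sum of the current and previous input.
y = causal_conv1d(np.arange(6.0), np.array([1.0, 1.0]))
```

Stacking such layers gives an encoder whose per-position features never look ahead in the sequence, which is what makes the architecture compatible with left-to-right decoding.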
Organizers: Philipp Hennig
Organizers: Ahmed Osman
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization. Typically, BO relies on conventional Gaussian process regression, whose algorithmic complexity is cubic in the number of evaluations. As a result, Gaussian process-based BO cannot leverage large numbers of past function evaluations, for example, to warm-start related BO runs. After a brief introduction to BO and an overview of several use cases at Amazon, I will discuss a multi-task adaptive Bayesian linear regression model whose computational complexity is attractive (linear) in the number of function evaluations and which is able to leverage information from related black-box functions through a shared deep neural net. Experimental results show that the neural net learns a representation suitable for warm-starting related BO runs and that they can be accelerated when the target black-box function (e.g., validation loss) is learned together with other related signals (e.g., training loss). The proposed method was found to be at least one order of magnitude faster than competing neural network-based methods recently published in the literature. This is joint work with Valerio Perrone, Rodolphe Jenatton, and Matthias Seeger.
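The linear-in-N cost of Bayesian linear regression, in contrast to the cubic cost of exact GP regression, is visible from its closed-form posterior. In the talk's model the feature map would be the output of a shared deep net; the sketch below assumes fixed features and fixed hyperparameters for brevity:

```python
import numpy as np

def blr_posterior(Phi, y, alpha=1.0, beta=1.0):
    """Posterior of Bayesian linear regression y ~ N(Phi w, 1/beta),
    w ~ N(0, (1/alpha) I). Forming Phi^T Phi costs O(N D^2) and the
    inverse O(D^3): linear in the number N of evaluations, unlike the
    O(N^3) of exact GP regression. (alpha and beta are fixed here; in
    the adaptive model they would be learned.)"""
    D = Phi.shape[1]
    S_inv = alpha * np.eye(D) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)                         # posterior covariance
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

def blr_predict(Phi_new, m, S, beta=1.0):
    """Predictive mean and variance at new feature rows."""
    mean = Phi_new @ m
    var = 1.0 / beta + np.einsum('nd,dk,nk->n', Phi_new, S, Phi_new)
    return mean, var
```

An acquisition function for BO can then be evaluated from `mean` and `var` at candidate points, with cost independent of how many past evaluations were absorbed into the posterior.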
Organizers: Isabel Valera
This talk first gives an introduction to the double machine learning framework, which allows inference on parameters in high-dimensional settings. Two applications are then presented: transformation models and Gaussian graphical models in high-dimensional settings. Both kinds of models are widely used by practitioners. As high-dimensional data sets become more and more available, it is important to handle situations where the number of parameters is large compared to the sample size.
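As a concrete illustration of the double machine learning idea in the simplest partially linear setting y = d*theta + g(X) + noise: fit the two nuisance regressions E[y|X] and E[d|X] with any ML method under cross-fitting, then regress residual on residual. A textbook-style sketch, not code from the talk:

```python
import numpy as np

def double_ml_theta(y, d, X, fit_predict):
    """Partialling-out estimator of theta in y = d*theta + g(X) + noise,
    with cross-fitting over two folds (data assumed exchangeable; shuffle
    indices first otherwise). `fit_predict(Xtr, ytr, Xte)` is any ML
    regressor used for the nuisance functions E[y|X] and E[d|X]."""
    n = len(y)
    idx = np.arange(n)
    folds = [idx[: n // 2], idx[n // 2 :]]
    res_y = np.empty(n)
    res_d = np.empty(n)
    for tr, te in [(folds[0], folds[1]), (folds[1], folds[0])]:
        # Residualize outcome and treatment on each held-out fold.
        res_y[te] = y[te] - fit_predict(X[tr], y[tr], X[te])
        res_d[te] = d[te] - fit_predict(X[tr], d[tr], X[te])
    # Final stage: regress residual on residual.
    return (res_d @ res_y) / (res_d @ res_d)

def ridge_fit_predict(Xtr, ytr, Xte, lam=1.0):
    """Simple ridge regression as a stand-in nuisance learner."""
    D = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(D), Xtr.T @ ytr)
    return Xte @ w
```

The nuisance learner can be swapped for a lasso, boosting, or a neural net without changing the final-stage inference, which is the point of the framework.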