

2018


A virtual reality environment for experiments in assistive robotics and neural interfaces

Bustamante, S.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

ei

PDF [BibTex]



Optimal Trajectory Generation and Learning Control for Robot Table Tennis

Koc, O.

Technical University Darmstadt, Germany, 2018 (phdthesis)

ei

[BibTex]



Distribution-Dissimilarities in Machine Learning

Simon-Gabriel, C. J.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei

[BibTex]



Domain Adaptation Under Causal Assumptions

Lechner, T.

Eberhard Karls Universität Tübingen, Germany, 2018 (mastersthesis)

ei

[BibTex]



A Causal Perspective on Deep Representation Learning

Suter, R.

ETH Zurich, 2018 (mastersthesis)

ei

[BibTex]


Maschinelles Lernen: Entwicklung ohne Grenzen? [Machine Learning: Development without Limits?]

Schölkopf, B.

In Mit Optimismus in die Zukunft schauen. Künstliche Intelligenz - Chancen und Rahmenbedingungen, pages: 26-34, (Editors: Bender, G. and Herbrich, R. and Siebenhaar, K.), B&S Siebenhaar Verlag, 2018 (incollection)

ei

[BibTex]



Probabilistic Approaches to Stochastic Optimization

Mahsereci, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei pn

link (url) Project Page [BibTex]



Reinforcement Learning for High-Speed Robotics with Muscular Actuation

Guist, S.

Ruprecht-Karls-Universität Heidelberg, 2018 (mastersthesis)

ei

[BibTex]



Methods in Psychophysics

Wichmann, F. A., Jäkel, F.

In Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience, Volume 5 (Methodology), Chapter 7, 4th edition, John Wiley & Sons, Inc., 2018 (inbook)

ei

[BibTex]



Transfer Learning for BCIs

Jayaram, V., Fiebig, K., Peters, J., Grosse-Wentrup, M.

In Brain–Computer Interfaces Handbook, pages: 425-442, 22, (Editors: Chang S. Nam, Anton Nijholt and Fabien Lotte), CRC Press, 2018 (incollection)

ei

Project Page [BibTex]



Probabilistic Ordinary Differential Equation Solvers — Theory and Applications

Schober, M.

Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei pn

[BibTex]



A machine learning approach to taking EEG-based computer interfaces out of the lab

Jayaram, V.

Graduate Training Centre of Neuroscience, IMPRS, Eberhard Karls Universität Tübingen, Germany, 2018 (phdthesis)

ei

[BibTex]


1999


Nonparametric regression for learning nonlinear transformations

Schaal, S.

In Prerational Intelligence in Strategies, High-Level Processes and Collective Behavior, 2, pages: 595-621, (Editors: Ritter, H.; Cruse, H.; Dean, J.), Kluwer Academic Publishers, 1999, clmc (inbook)

Abstract
Information processing in animals and artificial movement systems consists of a series of transformations that map sensory signals to intermediate representations, and finally to motor commands. Given the physical and neuroanatomical differences between individuals and the need for plasticity during development, it is highly likely that such transformations are learned rather than pre-programmed by evolution. Such self-organizing processes, capable of discovering nonlinear dependencies between different groups of signals, are one essential part of prerational intelligence. While neural network algorithms seem to be the natural choice when searching for solutions for learning transformations, this paper will take a more careful look at which types of neural networks are actually suited for the requirements of an autonomous learning system. The approach that we will pursue is guided by recent developments in learning theory that have linked neural network learning to well-established statistical theories. In particular, this new statistical understanding has given rise to the development of neural network systems that are directly based on statistical methods. One family of such methods stems from nonparametric regression. This paper will compare nonparametric learning with the more widely used parametric counterparts in a non-technical fashion, and investigate how these two families differ in their properties and their applicability. We will argue that nonparametric neural networks offer a set of characteristics that make them a very promising candidate for on-line learning in autonomous systems.

am

link (url) [BibTex]
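The abstract above contrasts nonparametric regression with parametric models. As a rough illustration of the nonparametric idea only, here is a minimal sketch of a kernel-weighted local linear fit in Python; the function names, the Gaussian kernel, the fixed bandwidth, and the toy data are illustrative assumptions and not taken from the chapter.

import numpy as np

def locally_weighted_prediction(X, y, x_query, bandwidth=0.3):
    # Hypothetical illustration: fit a linear model around x_query,
    # weighting each training sample by a Gaussian kernel (nonparametric regression).
    w = np.exp(-0.5 * np.sum((X - x_query) ** 2, axis=1) / bandwidth ** 2)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
    A = Xb.T @ (w[:, None] * Xb) + 1e-8 * np.eye(Xb.shape[1])  # weighted normal equations
    b = Xb.T @ (w * y)
    beta = np.linalg.solve(A, b)
    return np.append(x_query, 1.0) @ beta

# Toy data: a nonlinear target that a single global linear model cannot capture.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(locally_weighted_prediction(X, y, np.array([1.5])))  # roughly sin(1.5)

Because all training data are kept and each query triggers its own local fit, model complexity grows with the data rather than being fixed in advance, which is the property the chapter argues makes nonparametric methods attractive for on-line learning in autonomous systems.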


1996


From isolation to cooperation: An alternative view of a system of experts

Schaal, S., Atkeson, C. G.

In Advances in Neural Information Processing Systems 8, pages: 605-611, (Editors: Touretzky, D. S.; Mozer, M. C.; Hasselmo, M. E.), MIT Press, Cambridge, MA, 1996, clmc (inbook)

Abstract
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to adjust the size and shape of the receptive field in which its predictions are valid, and also to adjust its bias on the importance of individual input dimensions. The size and shape adjustment corresponds to finding a local distance metric, while the bias adjustment accomplishes local dimensionality reduction. We derive asymptotic results for our method. In a variety of simulations we demonstrate the properties of the algorithm with respect to interference, learning speed, prediction accuracy, feature detection, and task-oriented incremental learning.

am

link (url) [BibTex]
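As a rough, hypothetical sketch of the query-time cooperation described in the abstract, the snippet below blends independently defined locally linear experts by their normalised Gaussian receptive-field activations. The class and function names, the fixed receptive-field widths, and the toy experts are illustrative assumptions; the paper additionally trains each expert and adapts its receptive field by minimizing a penalized local cross validation error, which is not shown here.

import numpy as np

class LocalLinearExpert:
    # Hypothetical expert: a Gaussian receptive field plus a local linear model.
    def __init__(self, centre, width, slope, offset):
        self.centre, self.width = centre, width
        self.slope, self.offset = slope, offset

    def activation(self, x):
        # How strongly this expert's receptive field responds to the query x.
        return np.exp(-0.5 * ((x - self.centre) / self.width) ** 2)

    def predict(self, x):
        # The expert's own locally linear prediction, valid near its centre.
        return self.slope * (x - self.centre) + self.offset

def blended_prediction(experts, x):
    # Experts are built in isolation; they cooperate only here, at query time,
    # by blending their predictions weighted by normalised activations.
    acts = np.array([e.activation(x) for e in experts])
    preds = np.array([e.predict(x) for e in experts])
    return float(acts @ preds / acts.sum())

# Toy experts: local tangent models of sin(x) placed along the input axis.
experts = [LocalLinearExpert(c, 0.5, np.cos(c), np.sin(c)) for c in np.linspace(-3, 3, 13)]
print(blended_prediction(experts, 1.0))  # close to sin(1.0)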
