

2013


Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

ei

Web [BibTex]



Learning and Optimization with Submodular Functions

Sankaran, B., Ghazvininejad, M., He, X., Kale, D., Cohen, L.

arXiv, May 2013 (techreport)

Abstract
In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. In order to operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we study the design and optimization of a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide-ranging applications. Informally, the property of submodularity of set functions concerns the intuitive principle of diminishing returns: adding an element to a smaller set has more value than adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this report we review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.

am


arxiv link (url) [BibTex]
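The diminishing-returns property described in the abstract above is easy to check numerically. The following sketch is a hypothetical illustration (the data and function names are invented, and it is not code from the report): it defines a set-coverage function, verifies that the marginal gain of an element shrinks as the set grows, and runs the standard greedy heuristic for monotone submodular maximization under a cardinality constraint.

```python
# Illustrative sketch of submodularity (diminishing returns); not code from the report.
# Coverage function: f(S) = size of the union of the ground sets indexed by S.

ground_sets = {                      # hypothetical example data
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}

def coverage(S):
    """f(S) = |union of ground_sets[i] for i in S| (monotone submodular)."""
    covered = set()
    for i in S:
        covered |= ground_sets[i]
    return len(covered)

# Diminishing returns: for A a subset of B and e outside B,
#   f(A | {e}) - f(A) >= f(B | {e}) - f(B)
A, B, e = {"a"}, {"a", "b", "c"}, "d"
gain_A = coverage(A | {e}) - coverage(A)
gain_B = coverage(B | {e}) - coverage(B)
assert gain_A >= gain_B, (gain_A, gain_B)

# Greedy maximization under |S| <= k attains a (1 - 1/e) approximation
# for monotone submodular functions.
def greedy(k):
    S = set()
    for _ in range(k):
        best = max(ground_sets.keys() - S,
                   key=lambda x: coverage(S | {x}) - coverage(S))
        S.add(best)
    return S

print(greedy(2))
```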


Animating Samples from Gaussian Distributions

Hennig, P.

(8), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2013 (techreport)

ei pn

PDF [BibTex]



Domain Generalization via Invariant Feature Representation

Muandet, K.

30th International Conference on Machine Learning (ICML2013), 2013 (talk)

ei

PDF [BibTex]



Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

ei

link (url) [BibTex]



Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

ei

link (url) [BibTex]


2000


The Kernel Trick for Distances

Schölkopf, B.

(MSR-TR-2000-51), Microsoft Research, Redmond, WA, USA, 2000 (techreport)

Abstract
A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually distance-based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.

ei


PDF Web [BibTex]
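The identity behind the abstract above, that a positive definite kernel induces a norm-based distance in its feature space, fits in a few lines: ||Φ(x) − Φ(y)||² = k(x,x) + k(y,y) − 2 k(x,y). The sketch below is a toy illustration of that identity with a Gaussian RBF kernel, not code from the report.

```python
# Toy illustration of the kernel-induced feature-space distance; not code from the report.
#   ||Phi(x) - Phi(y)||^2 = k(x, x) + k(y, y) - 2 * k(x, y)

import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, kernel=rbf_kernel):
    """Distance between Phi(x) and Phi(y) in the kernel's feature space."""
    d2 = kernel(x, x) + kernel(y, y) - 2.0 * kernel(x, y)
    return np.sqrt(max(d2, 0.0))    # clip tiny negative rounding errors

x = np.array([0.0, 1.0])
y = np.array([1.0, 2.0])
print(kernel_distance(x, y))        # any distance-based algorithm can consume this
```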


Kernel method for percentile feature extraction

Schölkopf, B., Platt, J., Smola, A.

(MSR-TR-2000-22), Microsoft Research, 2000 (techreport)

Abstract
A method is proposed which computes a direction in a dataset such that a specified fraction of a particular class of all examples is separated from the overall mean by a maximal margin. The projector onto that direction can be used for class-specific feature extraction. The algorithm is carried out in a feature space associated with a support vector kernel function; hence it can be used to construct a large class of nonlinear feature extractors. In the particular case where there exists only one class, the method can be thought of as a robust form of principal component analysis, where instead of variance we maximize percentile thresholds. Finally, we generalize it to also include the possibility of specifying negative examples.

ei


PDF [BibTex]
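As a rough illustration of the single-class case in the abstract above, maximizing a percentile threshold instead of variance, the toy sketch below scores random candidate directions by a percentile of the centred projections and keeps the best one. It is only a linear, brute-force stand-in, not the kernelized algorithm of the report; the data and the percentile value are invented for illustration.

```python
# Toy linear stand-in for "maximize a percentile threshold instead of variance";
# the report's algorithm works in a kernel feature space, this sketch does not.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # hypothetical single-class data
X[::10] += np.array([6.0, 0.0])            # a few outlying points

def percentile_score(w, X, q=80):
    """q-th percentile of the projections onto unit direction w, after centring."""
    proj = (X - X.mean(axis=0)) @ w
    return np.percentile(proj, q)

best_w, best_score = None, -np.inf
for _ in range(2000):                      # brute-force search over random directions
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    score = percentile_score(w, X)
    if score > best_score:
        best_w, best_score = w, score

features = (X - X.mean(axis=0)) @ best_w   # one class-specific feature per example
print(best_w, best_score)
```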

1999


Estimating the support of a high-dimensional distribution

Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.

(MSR-TR-99-87), Microsoft Research, 1999 (techreport)

ei

Web [BibTex]



Generalization Bounds via Eigenvalues of the Gram matrix

Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.

(99-035), NeuroCOLT, 1999 (techreport)

ei

[BibTex]



Sparse kernel feature analysis

Smola, A., Mangasarian, O., Schölkopf, B.

(99-04), Data Mining Institute, 1999, 24th Annual Conference of the Gesellschaft für Klassifikation, University of Passau (techreport)

ei

PostScript [BibTex]


1994


View-based cognitive mapping and path planning

Schölkopf, B., Mallot, H.

(7), Max Planck Institute for Biological Cybernetics, Tübingen, November 1994, this technical report has also been published elsewhere (techreport)

Abstract
We present a scheme for learning a cognitive map of a maze from a sequence of views and movement decisions. The scheme is based on an intermediate representation called the view graph. We show that this representation carries sufficient information to reconstruct the topological and directional structure of the maze. Moreover, we present a neural network that learns the view graph during a random exploration of the maze. We use an unsupervised competitive learning rule which translates the temporal sequence (rather than similarity) of views into connectedness in the network. The network uses its knowledge of the topological and directional structure of the maze to generate expectations about which views are likely to be perceived next, improving the view recognition performance. We provide an additional mechanism which uses the map to find paths between arbitrary points of the previously explored environment. The results are compared to findings of behavioural neuroscience.

ei


[BibTex]
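The view-graph idea in the abstract above, temporal succession of views turned into graph connectedness that can then be used for path planning, can be mimicked in a few lines of plain Python. The sketch below is a hypothetical toy, not the neural network model of the report: it builds a directed graph from an exploration sequence and finds a path between two views with breadth-first search.

```python
# Toy view graph: temporal succession of views defines directed edges,
# and breadth-first search plans a path. Not the network model of the report.

from collections import defaultdict, deque

# hypothetical exploration sequence: views perceived during a random walk
exploration = ["v1", "v2", "v3", "v2", "v4", "v5", "v3", "v6"]

graph = defaultdict(set)
for prev, nxt in zip(exploration, exploration[1:]):
    graph[prev].add(nxt)                   # connectedness from temporal sequence

def find_path(start, goal):
    """Breadth-first search from one view to another in the view graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("v1", "v6"))               # e.g. ['v1', 'v2', 'v3', 'v6']
```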