

2016


Supplemental material for 'Communication Rate Analysis for Event-based State Estimation'

Ebner, S., Trimpe, S.

Max Planck Institute for Intelligent Systems, January 2016 (techreport)

am ics

PDF [BibTex]


2011


Combined whole-body PET/MR imaging: MR contrast agents do not affect the quantitative accuracy of PET following attenuation correction

Lois, C., Kupferschläger, J., Bezrukov, I., Schmidt, H., Werner, M., Mannheim, J., Pichler, B., Schwenzer, N., Beyer, T.

(SST15-05), 97th Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA), December 2011 (talk)

Abstract
PURPOSE Combined PET/MR imaging entails the use of MR contrast agents (MRCA) as part of integrated protocols. We assess additional attenuation of the PET emission signals in the presence of oral and intravenous (iv) MRCA made up of iron oxide and Gd-chelates, respectively. METHOD AND MATERIALS Phantom scans were performed on a clinical PET/CT (Biograph HiRez16, Siemens) and integrated whole-body PET/MR (Biograph mMR, Siemens) using oral (Lumirem) and intravenous (Gadovist) MRCA. Reference PET attenuation values were determined on a small-animal PET (Inveon, Siemens) using standard PET transmission imaging (TX). Seven syringes of 5mL were filled with (a) Water, (b) Lumirem_100 (100% conc.), (c) Gadovist_100 (100%), (d) Gadovist_18 (18%), (e) Gadovist_02 (0.2%), (f) Imeron-400 CT iv-contrast (100%) and (g) Imeron-400 (2.4%). The same set of syringes was scanned on CT (Sensation16, Siemens) at 120kVp and 160mAs. The effect of MRCA on the attenuation of PET emission data was evaluated using a 20cm cylinder filled uniformly with [18F]-FDG (FDG) in water (BGD). Three 4.5cm diameter cylinders were inserted into the phantom: (C1) Teflon, (C2) Water+FDG (2:1) and (C3) Lumirem_100+FDG (2:1). Two 50mL syringes filled with Gadovist_02+FDG (Sy1) and water+FDG (Sy2) were attached to the sides of (C1) to mimic the effects of iv-contrast in vessels near bone. Syringe-to-background activity ratio was 4-to-1. PET emission data were acquired for 10min each using the PET/CT and the PET/MR. Images were reconstructed using CT- and MR-based attenuation correction. RESULTS Mean linear PET attenuation (cm-1) on TX was (a) 0.098, (b) 0.098, (c) 0.300, (d) 0.134, (e) 0.095, (f) 0.397 and (g) 0.105. Corresponding CT attenuation (HU) was: (a) 5, (b) 14, (c) 3070, (d) 1040, (e) 13, (f) 3070 and (g) 347. Lumirem had little effect on PET attenuation with (C3) being 13% and 10% higher than (C2) on PET/CT and PET/MR, respectively. Gadovist_02 had even smaller effects with (Sy1) being 2.5% lower than (Sy2) on PET/CT and 1.2% higher than (Sy2) on PET/MR. CONCLUSION MRCA in high and clinically relevant concentrations have attenuation values similar to those of CT contrast and water, respectively. In clinical PET/MR scenarios MRCA are not expected to lead to significant attenuation of the PET emission signals.

ei

Web [BibTex]



Cooperative Cuts: a new use of submodularity in image segmentation

Jegelka, S.

Second I.S.T. Austria Symposium on Computer Vision and Machine Learning, October 2011 (talk)

ei

Web [BibTex]



Effect of MR Contrast Agents on Quantitative Accuracy of PET in Combined Whole-Body PET/MR Imaging

Lois, C., Bezrukov, I., Schmidt, H., Schwenzer, N., Werner, M., Pichler, B., Kupferschläger, J., Beyer, T.

2011(MIC3-3), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
Combined whole-body PET/MR systems are being tested in clinical practice today. Integrated imaging protocols entail the use of MR contrast agents (MRCA) that could bias PET attenuation correction. In this work, we assess the effect of MRCA in PET/MR imaging. We analyze the effect of oral and intravenous MRCA on PET activity after attenuation correction. We conclude that in clinical scenarios, MRCA are not expected to lead to significant attenuation of PET signals, and that attenuation maps are not biased after the ingestion of adequate oral contrasts.

ei

Web [BibTex]



First Results on Patients and Phantoms of a Fully Integrated Clinical Whole-Body PET/MRI

Schmidt, H., Schwenzer, N., Bezrukov, I., Kolb, A., Mantlik, F., Kupferschläger, J., Lois, C., Sauter, A., Brendle, C., Pfannenberg, C., Pichler, B.

2011(J2-8), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
First clinical fully integrated whole-body PET/MR scanners are just entering the field. Here, we present studies of the quantification accuracy and its variation within the PET field of view for small lesions from our BrainPET/MRI, a dedicated clinical brain scanner which was installed three years ago in Tübingen. Also, we present first results for patient and phantom scans of a fully integrated whole-body PET/MRI, which was installed two months ago at our department. The quantification accuracy and homogeneity of the BrainPET-Insert (Siemens Medical Solutions, Germany) installed inside the magnet bore of a clinical 3T MRI scanner (Magnetom TIM Trio, Siemens Medical Solutions, Germany) was evaluated by using eight hollow spheres with inner diameters from 3.95 to 7.86 mm placed at different positions inside a homogeneous cylinder phantom with 9:1 and 6:1 sphere-to-background ratios. The quantification accuracy for small lesions at different positions in the PET FoV shows a standard deviation of up to 11% and is acceptable for quantitative brain studies where the homogeneity of quantification over the entire FoV is essential. Image quality and resolution of the new Siemens whole-body PET/MR system (Biograph mMR, Siemens Medical Solutions, Germany) were evaluated according to the NEMA NU2 2007 protocol using a body phantom containing six spheres with inner diameters from 10 to 37 mm at sphere-to-background ratios of 8:1 and 4:1, and F-18 point sources located at different positions inside the PET FoV, respectively. The evaluation of the whole-body PET/MR system reveals good PET image quality and resolution comparable to state-of-the-art clinical PET/CT scanners. First images of patient studies carried out on the whole-body PET/MR are presented, highlighting the potential of combined PET/MR imaging.

ei

Web [BibTex]



Effect of MR contrast agents on quantitative accuracy of PET in combined whole-body PET/MR imaging

Lois, C., Kupferschläger, J., Bezrukov, I., Schmidt, H., Werner, M., Mannheim, J., Pichler, B., Schwenzer, N., Beyer, T.

(OP314), Annual Congress of the European Association of Nuclear Medicine (EANM), October 2011 (talk)

Abstract
PURPOSE: Combined PET/MR imaging entails the use of MR contrast agents (MRCA) as part of integrated protocols. MRCA are made up of iron oxide and Gd-chelates for oral and intravenous (iv) application, respectively. We assess additional attenuation of the PET emission signals in the presence of oral and iv MRCA. MATERIALS AND METHODS: Phantom scans were performed on a clinical PET/CT (Biograph HiRez16, Siemens) and an integrated whole-body PET/MR (Biograph mMR, Siemens). Two common MRCA were evaluated: Lumirem (oral) and Gadovist (iv). Reference PET attenuation values were determined on a dedicated small-animal PET (Inveon, Siemens) using equivalent standard PET transmission source imaging (TX). Seven syringes of 5mL were filled with (a) Water, (b) Lumirem_100 (100% concentration), (c) Gadovist_100 (100%), (d) Gadovist_18 (18%), (e) Gadovist_02 (0.2%), (f) Imeron-400 CT iv-contrast (100%) and (g) Imeron-400 (2.4%). The same set of syringes was scanned on CT (Sensation16, Siemens) at 120kVp and 160mAs. The effect of MRCA on the attenuation of PET emission data was evaluated using a 20cm cylinder filled uniformly with [18F]-FDG (FDG) in water (BGD). Three 4.5cm diameter cylinders were inserted into the phantom: (C1) Teflon, (C2) Water+FDG (2:1) and (C3) Lumirem_100+FDG (2:1). Two 50mL syringes filled with Gadovist_02+FDG (Sy1) and water+FDG (Sy2) were attached to the sides of (C1) to mimic the effects of iv-contrast in vessels near bone. Syringe-to-background activity ratio was 4-to-1. PET emission data were acquired for 10min each using the PET/CT and the PET/MR. Images were reconstructed using CT- and MR-based attenuation correction (AC). Since Teflon is not correctly identified on MR, PET(/MR) data were reconstructed using MR-AC and CT-AC. RESULTS: Mean linear PET attenuation (cm-1) on TX was (a) 0.098, (b) 0.098, (c) 0.300, (d) 0.134, (e) 0.095, (f) 0.397 and (g) 0.105. Corresponding CT attenuation (HU) was: (a) 5, (b) 14, (c) 3070, (d) 1040, (e) 13, (f) 3070 and (g) 347. Lumirem had little effect on PET attenuation with (C3) being 13%, 10% and 11% higher than (C2) on PET/CT, PET/MR with MR-AC, and PET/MR with CT-AC, respectively. Gadovist_02 had even smaller effects with (Sy1) being 2.5% lower, 1.2% higher, and 3.5% lower than (Sy2) on PET/CT, PET/MR with MR-AC and PET/MR with CT-AC, respectively. CONCLUSION: MRCA in high and clinically relevant concentrations have attenuation values similar to those of CT contrast and water, respectively. In clinical PET/MR scenarios MRCA are not expected to lead to significant attenuation of the PET emission signals.

ei

Web [BibTex]



Multi-parametric Tumor Characterization and Therapy Monitoring using Simultaneous PET/MRI: initial results for Lung Cancer and GvHD

Sauter, A., Schmidt, H., Gueckel, B., Brendle, C., Bezrukov, I., Mantlik, F., Kolb, A., Mueller, M., Reimold, M., Federmann, B., Hetzel, J., Claussen, C., Pfannenberg, C., Horger, M., Pichler, B., Schwenzer, N.

(T110), 2011 World Molecular Imaging Congress (WMIC), September 2011 (talk)

Abstract
Hybrid imaging modalities such as [18F]FDG-PET/CT are superior in staging of e.g. lung cancer disease compared with stand-alone modalities. Clinical PET/MRI systems are about to enter the field of hybrid imaging and offer potential advantages. One added value could be a deeper insight into the tumor metabolism and tumorigenesis due to the combination of PET and dedicated MR methods such as MRS and DWI. Additionally, therapy monitoring of difficult-to-diagnose diseases such as chronic sclerodermic GvHD (csGvHD) can potentially be improved by this combination. We have applied PET/MRI in 3 patients with lung cancer and 4 patients with csGvHD before and during therapy. All 3 patients had lung cancer confirmed by histology (2 adenocarcinoma, 1 carcinoid). First, a [18F]FDG-PET/CT was performed with the following parameters: injected dose 351.7±25.1 MBq, uptake time 59.0±2.6 min, 3 min/bed. Subsequently, patients were brought to the PET/MRI imaging facility. The whole-body PET/MRI Biograph mMR system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The MRI is a modified Verio system with a magnet bore of 60 cm. The following parameters for PET acquisition were applied: uptake time 121.3±2.3 min, 3 bed positions, 6 min/bed. T1w, T2w, and DWI MR images were recorded simultaneously for each bed. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. The 4 patients with GvHD were brought to the brainPET/MRI imaging facility 2:10h-2:28h after tracer injection. A 9 min brainPET-acquisition with simultaneous MRI of the lower extremities was accomplished. MRI examination included T1-weighted (pre and post gadolinium) and T2-weighted sequences. Attenuation correction was calculated based on manual bone segmentation and thresholds for soft tissue, fat and air.
Soleus muscle (m), crural fascia (f1) and posterior crural intermuscular septum fascia (f2) were surrounded with ROIs based on the pre-treatment T1-weighted images and coregistered using IRW (Siemens). Fascia-to-muscle ratios for PET (f/m), T1 contrast uptake (T1_post-contrast_f-pre-contrast_f/post-contrast_m-pre-contrast_m) and T2 (T2_f/m) were calculated. Both patients with adenocarcinoma show a lower ADC value compared with the carcinoid patient suggesting a higher cellularity. This is also reflected in FDG-PET with higher SUV values. Our initial results reveal that PET/MRI can provide complementary information for a profound tumor characterization and therapy monitoring. The high soft tissue contrast provided by MRI is valuable for the assessment of the fascial inflammation. While in the first patient FDG and contrast uptake as well as edema, represented by T2 signals, decreased with ongoing therapy, all parameters remained comparatively stable in the second patient. Contrary to expectations, an increase in FDG uptake of patient 3 and 4 was accompanied by an increase of the T2 signals, but a decrease in contrast uptake. These initial results suggest that PET/MRI provides complementary information of the complex disease mechanisms in fibrosing disorders.

ei

Web [BibTex]



Statistical Image Analysis and Percolation Theory

Langovoy, M., Habeck, M., Schölkopf, B.

2011 Joint Statistical Meetings (JSM), August 2011 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects, only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that can detect greyscale objects of various shapes in noisy images. We prove results on consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

Web [BibTex]



PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

Seldin, Y., Laviolette, F., Shawe-Taylor, J., Peters, J., Auer, P.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2011 (techreport)

Abstract
We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that makes it possible to bound expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative tool to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integration of the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many other fields where martingales and limited feedback are encountered.

ei

PDF Web [BibTex]



Non-stationary Correction of Optical Aberrations

Schuler, C., Hirsch, M., Harmeling, S., Schölkopf, B.

(1), Max Planck Institute for Intelligent Systems, Tübingen, Germany, May 2011 (techreport)

Abstract
Taking a sharp photo at several megapixel resolution traditionally relies on high grade lenses. In this paper, we present an approach to alleviate image degradations caused by imperfect optics. We rely on a calibration step to encode the optical aberrations in a space-variant point spread function and obtain a corrected image by non-stationary deconvolution. By including the Bayer array in our image formation model, we can perform demosaicing as part of the deconvolution.
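The correction step described in this abstract is, at its core, a deconvolution with a known (calibrated) point spread function. As a much-simplified, hypothetical stand-in for the non-stationary case, the sketch below performs stationary Wiener-style deconvolution with a known circular PSF, using a naive pure-Python DFT; the PSF and the regularization constant `eps` are illustrative assumptions, not values from the report.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (sufficient for a sketch)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    if inverse:
        out = [v / n for v in out]
    return out

def circular_convolve(x, h):
    """Circular convolution: the stationary (space-invariant) blur model."""
    n = len(x)
    return [sum(x[(j - k) % n] * h[k] for k in range(n)) for j in range(n)]

def wiener_deconvolve(blurred, psf, eps=1e-6):
    """Recover a signal blurred by circular convolution with a known PSF.

    Frequency-domain estimate: X = Y * conj(H) / (|H|^2 + eps),
    where eps regularizes frequencies at which the PSF is weak.
    """
    Y, H = dft(blurred), dft(psf)
    X = [y * h.conjugate() / (abs(h) ** 2 + eps) for y, h in zip(Y, H)]
    return [v.real for v in dft(X, inverse=True)]
```

For example, blurring `x` with the (assumed) two-tap PSF `[0.7, 0.3]` and then calling `wiener_deconvolve` recovers `x` up to the small bias introduced by `eps`; the space-variant setting of the report replaces the single global PSF with one that changes across the image.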

ei

PDF [BibTex]



Cooperative Cuts

Jegelka, S.

COSA Workshop: Combinatorial Optimization, Statistics, and Applications, March 2011 (talk)

Abstract
Combinatorial problems with submodular cost functions have recently drawn interest. In a standard combinatorial problem, the sum-of-weights cost is replaced by a submodular set function. The result is a powerful model that is, however, very hard to solve. In this talk, I will introduce cooperative cuts, minimum cuts with submodular edge weights. I will outline methods to approximately solve this problem, and show an application in computer vision. If time permits, the talk will also sketch regret-minimizing online algorithms for submodular-cost combinatorial problems. This is joint work with Jeff Bilmes (University of Washington).

ei

Web [BibTex]



Multiple Kernel Learning: A Unifying Probabilistic Viewpoint

Nickisch, H., Seeger, M.

Max Planck Institute for Biological Cybernetics, March 2011 (techreport)

Abstract
We present a probabilistic viewpoint to multiple kernel learning unifying well-known regularised risk approaches and recent advances in approximate Bayesian inference relaxations. The framework proposes a general objective function, suitable for regression, robust regression and classification, that is a lower bound of the marginal likelihood and contains many regularised risk approaches as special cases. Furthermore, we derive an efficient and provably convergent optimisation algorithm.

ei

Web [BibTex]



Multiple testing, uncertainty and realistic pictures

Langovoy, M., Wittich, O.

(2011-004), EURANDOM, Technische Universiteit Eindhoven, January 2011 (techreport)

Abstract
We study statistical detection of grayscale objects in noisy images. The object of interest is of unknown shape and has an unknown intensity that can vary over the object and can be negative. No boundary shape constraints are imposed on the object, only a weak bulk condition for the object's interior is required. We propose an algorithm that can be used to detect grayscale objects of unknown shapes in the presence of nonparametric noise of unknown level. Our algorithm is based on a nonparametric multiple testing procedure. We establish the limit of applicability of our method via an explicit, closed-form, non-asymptotic and nonparametric consistency bound. This bound is valid for a wide class of nonparametric noise distributions. We achieve this by proving an uncertainty principle for percolation on finite lattices.

ei

PDF [BibTex]



Nonconvex proximal splitting: batch and incremental algorithms

Sra, S.

(2), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2011 (techreport)

Abstract
Within the unmanageably large class of nonconvex optimization, we consider the rich subclass of nonsmooth problems having composite objectives (this includes the extensively studied convex, composite objective problems as a special case). For this subclass, we introduce a powerful, new framework that permits asymptotically non-vanishing perturbations. In particular, we develop perturbation-based batch and incremental (online-like) nonconvex proximal splitting algorithms. To our knowledge, this is the first time that such perturbation-based nonconvex splitting algorithms are being proposed and analyzed. While the main contribution of the paper is the theoretical framework, we complement our results by presenting some empirical results on matrix factorization.
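The splitting algorithms mentioned above build on the classical proximal-gradient (forward-backward) iteration x ← prox_{t·g}(x − t∇f(x)), applied to a composite objective f + g with f smooth and g nonsmooth. The sketch below shows that convex baseline template on a toy objective 0.5(x−3)² + |x|; it illustrates the splitting scheme the report generalizes, not the paper's perturbation-based nonconvex method, and the toy objective is an assumption of this sketch.

```python
def soft_threshold(v, t):
    """Proximal operator of t * |x| (soft-thresholding)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def proximal_gradient(grad_f, prox_g, x0, step=0.5, iters=200):
    """Forward-backward splitting: a gradient step on the smooth part f,
    followed by a proximal step on the nonsmooth part g."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy composite objective: f(x) = 0.5 * (x - 3)^2 (smooth), g(x) = |x| (nonsmooth).
x_star = proximal_gradient(grad_f=lambda x: x - 3.0,
                           prox_g=soft_threshold,
                           x0=0.0)
```

The minimizer of 0.5(x−3)² + |x| is x = 2, since the optimality condition (x−3) + sign(x) = 0 is satisfied there; the iteration converges to it geometrically.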

ei

PDF [BibTex]


2010


Computationally efficient algorithms for statistical image processing: Implementation in R

Langovoy, M., Wittich, O.

(2010-053), EURANDOM, Technische Universiteit Eindhoven, December 2010 (techreport)

Abstract
In the series of our earlier papers on the subject, we proposed a novel statistical hypothesis testing method for detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We developed algorithms that can detect objects of unknown shapes in the presence of nonparametric noise of unknown level and of unknown distribution. No boundary shape constraints were imposed on the objects, only a weak bulk condition for the object's interior was required. Our algorithms have linear complexity and exponential accuracy. In the present paper, we describe an implementation of our nonparametric hypothesis testing method. We provide a program that can be used for statistical experiments in image processing. This program is written in the statistical programming language R.

ei

PDF [BibTex]



Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference

Seeger, M., Nickisch, H.

Max Planck Institute for Biological Cybernetics, December 2010 (techreport)

Abstract
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from (Opper&Winther 05) with covariance decoupling techniques (Wipf&Nagarajan 08, Nickisch&Seeger 09), it runs at least an order of magnitude faster than the most commonly used EP solver.

ei

Web [BibTex]



Comparative Quantitative Evaluation of MR-Based Attenuation Correction Methods in Combined Brain PET/MR

Mantlik, F., Hofmann, M., Bezrukov, I., Kolb, A., Beyer, T., Reimold, M., Pichler, B., Schölkopf, B.

2010(M08-4), 2010 Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC), November 2010 (talk)

Abstract
Combined PET/MR provides at the same time molecular and functional imaging as well as excellent soft tissue contrast. It does not allow one to directly measure the attenuation properties of scanned tissues, despite the fact that accurate attenuation maps are necessary for quantitative PET imaging. Several methods have therefore been proposed for MR-based attenuation correction (MR-AC). So far, they have only been evaluated on data acquired from separate MR and PET scanners. We evaluated several MR-AC methods on data from 10 patients acquired on a combined BrainPET/MR scanner. This allowed the consideration of specific PET/MR issues, such as the RF coil that attenuates and scatters 511 keV gammas. We evaluated simple MR thresholding methods as well as atlas and machine learning-based MR-AC. CT-based AC served as gold standard reference. To comprehensively evaluate the MR-AC accuracy, we used RoIs from 2 anatomic brain atlases with different levels of detail. Visual inspection of the PET images indicated that even the basic FLASH threshold MR-AC may be sufficient for several applications. Using a UTE sequence for bone prediction in MR-based thresholding occasionally led to false prediction of bone tissue inside the brain, causing a significant overestimation of PET activity. Although it yielded a lower mean underestimation of activity, it exhibited the highest variance of all methods. The atlas averaging approach had a smaller mean error, but showed high maximum overestimation on the RoIs of the more detailed atlas. The Naive Bayes and Atlas-Patch MR-AC yielded the smallest variance, and the Atlas-Patch also showed the smallest mean error. In conclusion, Atlas-based AC using only MR information on the BrainPET/MR yields a high level of accuracy that is sufficient for clinical quantitative imaging requirements. The Atlas-Patch approach was superior to alternative atlas-based methods, yielding a quantification error below 10% for all RoIs except very small ones.

ei

[BibTex]



A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering

Seldin, Y.

Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2010 (techreport)

Abstract
We formulate weighted graph clustering as a prediction problem: given a subset of edge weights we analyze the ability of graph clustering to predict the remaining edge weights. This formulation enables practical and theoretical comparison of different approaches to graph clustering as well as comparison of graph clustering with other possible ways to model the graph. We adapt the PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008; Seldin, 2009) to derive a PAC-Bayesian generalization bound for graph clustering. The bound shows that graph clustering should optimize a trade-off between empirical data fit and the mutual information that clusters preserve on the graph nodes. A similar trade-off derived from information-theoretic considerations was already shown to produce state-of-the-art results in practice (Slonim et al., 2005; Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by providing a better theoretical foundation, suggesting formal generalization guarantees, and offering a more accurate way to deal with finite sample issues. We derive a bound minimization algorithm and show that it provides good results in real-life problems and that the derived PAC-Bayesian bound is reasonably tight.

ei

PDF Web [BibTex]



Sparse nonnegative matrix approximation: new formulations and algorithms

Tandon, R., Sra, S.

(193), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2010 (techreport)

Abstract
We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.

ei

PDF [BibTex]



Robust nonparametric detection of objects in noisy images

Langovoy, M., Wittich, O.

(2010-049), EURANDOM, Technische Universiteit Eindhoven, September 2010 (techreport)

Abstract
We propose a novel statistical hypothesis testing method for detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We present an algorithm that can detect objects of unknown shapes in the presence of nonparametric noise of unknown level and of unknown distribution. No boundary shape constraints are imposed on the object, only a weak bulk condition for the object's interior is required. The algorithm has linear complexity and exponential accuracy and is appropriate for real-time systems. In this paper, we develop further the mathematical formalism of our method and explore important connections to the mathematical theory of percolation and statistical physics. We prove results on consistency and algorithmic complexity of our testing procedure. In addition, we address not only the asymptotic behavior of the method, but also the finite-sample performance of our test.
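Operationally, the percolation-based test amounts to thresholding the image and checking whether any connected cluster of above-threshold pixels is larger than pure noise would plausibly produce. A minimal sketch of that pipeline follows; the threshold and the critical cluster size are placeholder inputs here, whereas the report derives them from percolation theory.

```python
def largest_cluster(binary):
    """Size of the largest 4-connected cluster of 1-pixels (depth-first search)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    best = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                seen[r][c] = True
                stack, size = [(r, c)], 0
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if (0 <= ni < rows and 0 <= nj < cols
                                and binary[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                best = max(best, size)
    return best

def detect_object(image, threshold, critical_size):
    """Signal an object if the largest above-threshold cluster reaches critical_size."""
    binary = [[1 if v > threshold else 0 for v in row] for row in image]
    return largest_cluster(binary) >= critical_size
```

Both passes visit each pixel a bounded number of times, which matches the linear complexity claimed in the abstract; isolated above-threshold noise pixels form small clusters and are ignored.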

ei

PDF [BibTex]



Statistical image analysis and percolation theory

Davies, P., Langovoy, M., Wittich, O.

73rd Annual Meeting of the Institute of Mathematical Statistics (IMS), August 2010 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that can detect objects of various shapes in noisy images. We prove results on consistency and algorithmic complexity of our procedures.

ei

Web [BibTex]



Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

Max Planck Institute for Biological Cybernetics, August 2010 (techreport)

Abstract
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.

ei

Web [BibTex]


Cooperative Cuts for Image Segmentation

Jegelka, S., Bilmes, J.

(UWEETR-1020-0003), University of Washington, Seattle, WA, USA, August 2010 (techreport)

Abstract
We propose a novel framework for graph-based cooperative regularization that uses submodular costs on graph edges. We introduce an efficient iterative algorithm to solve the resulting hard discrete optimization problem, and show that it has a guaranteed approximation factor. The edge-submodular formulation is amenable to the same extensions as standard graph cut approaches, and applicable to a range of problems. We apply this method to the image segmentation problem. Specifically, we use it to introduce a discount for homogeneous boundaries in binary image segmentation on very difficult images, namely long thin objects and color and grayscale images with a shading gradient. The experiments show that significant portions of previously truncated objects are now preserved.

ei

Web [BibTex]



Statistical image analysis and percolation theory

Langovoy, M., Wittich, O.

28th European Meeting of Statisticians (EMS), August 2010 (talk)

ei

PDF Web [BibTex]



Fast algorithms for total-variation-based optimization

Barbero, A., Sra, S.

(194), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2010 (techreport)

Abstract
We derive a number of methods to solve efficiently simple optimization problems subject to a total-variation (TV) regularization, under different norms of the TV operator, both for 1-dimensional and 2-dimensional data. In spite of the non-smooth, non-separable nature of the TV terms considered, we show that a dual formulation with strong structure can be derived. Taking advantage of this structure we develop adaptations of existing algorithms from the optimization literature, resulting in efficient methods for the problem at hand. Experimental results show that for 1-dimensional data the proposed methods achieve convergence within good accuracy levels in practically linear time, both for L1 and L2 norms. For the more challenging 2-dimensional case, a performance of order O(N^2 log^2 N) for N x N inputs is achieved when using the L2 norm. A final section suggests possible extensions and lines of further work.
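For the 1-dimensional L1 case, the dual of the TV-regularized problem is a box-constrained quadratic, which projected gradient ascent solves directly; this is the kind of structured dual the abstract refers to. The sketch below is the classical Chambolle-style dual iteration as an illustration, not the authors' specific adaptations, and the step size and iteration count are assumptions of this sketch.

```python
def tv_denoise_1d(y, lam, iters=2000, step=0.25):
    """Solve min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    via projected gradient ascent on the dual variable u (|u_i| <= lam).
    The primal solution is recovered as x = y - D^T u, where D is the
    forward-difference operator; step <= 1/||D D^T|| = 1/4 ensures convergence."""
    n = len(y)
    u = [0.0] * (n - 1)
    x = list(y)
    for _ in range(iters):
        # x = y - D^T u  (adjoint of the forward-difference operator)
        x = [y[j]
             - (u[j - 1] if j > 0 else 0.0)
             + (u[j] if j < n - 1 else 0.0)
             for j in range(n)]
        # dual gradient ascent step, then projection onto the box [-lam, lam]
        u = [min(lam, max(-lam, u[i] + step * (x[i + 1] - x[i])))
             for i in range(n - 1)]
    return x
```

On a two-plateau signal such as `[0, 0, 0, 4, 4, 4]` with `lam = 0.5`, the known closed-form solution moves each plateau toward the other by lam divided by the plateau length, i.e. to 1/6 and 4 − 1/6, which the iteration reproduces.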

ei

PDF [BibTex]



Cooperative Cuts: Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

24th European Conference on Operational Research (EURO XXIV), July 2010 (talk)

Abstract
We introduce cooperative cut, a minimum cut problem whose cost is a submodular function on sets of edges: the cost of an edge that is added to a cut set depends on the edges already in the set. Applications include, e.g., probabilistic graphical models and image processing. We prove NP-hardness and a polynomial lower bound on the approximation factor, and upper bounds via four approximation algorithms based on different techniques. Our additional heuristics have attractive practical properties, e.g., relying only on standard min-cut. Both our algorithms and heuristics appear to do well in practice.
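Cooperative cuts generalize the standard minimum s-t cut, and the heuristics mentioned above rely on repeated calls to an ordinary min-cut solver. For reference, a minimal Edmonds-Karp max-flow sketch of that modular-weight baseline (the graph is an illustrative toy; this is the classical subroutine, not the submodular-cost algorithm itself):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a capacity matrix; by max-flow/min-cut duality
    the returned value equals the minimum s-t cut under ordinary edge weights."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            return total
        # bottleneck capacity along the found path
        aug, v = float("inf"), t
        while v != s:
            u = parent[v]
            aug = min(aug, cap[u][v] - flow[u][v])
            v = u
        # push the augmenting flow, updating residual capacities
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += aug
            flow[v][u] -= aug
            v = u
        total += aug
```

In a cooperative cut, the modular sum of cut-edge capacities that this routine minimizes is replaced by a submodular function of the cut set, which is what makes the problem NP-hard.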


PDF Web [BibTex]

Solving Large-Scale Nonnegative Least Squares

Sra, S.

16th Conference of the International Linear Algebra Society (ILAS), June 2010 (talk)

Abstract
We study the fundamental problem of nonnegative least squares. This problem was apparently introduced by Lawson and Hanson [1] under the name NNLS. As is evident from its name, NNLS seeks least-squares solutions that are also nonnegative. Owing to its wide applicability, numerous algorithms have been derived for NNLS, beginning with the active-set approach of Lawson and Hanson [1] and leading up to the sophisticated interior-point method of Bellavia et al. [2]. We present a new algorithm for NNLS that combines projected subgradients with the non-monotonic gradient descent idea of Barzilai and Borwein [3]. Our resulting algorithm is called BBSG, and we guarantee its convergence by exploiting properties of NNLS in conjunction with projected subgradients. BBSG is surprisingly simple and scales well to large problems. We substantiate our claims by empirically evaluating BBSG and comparing it with established convex solvers and specialized NNLS algorithms. The numerical results suggest that BBSG is a practical method for solving large-scale NNLS problems.
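The core idea, projected gradient steps with a Barzilai-Borwein step length, fits in a few lines. This is an illustrative simplification, not the authors' BBSG implementation, and it omits the convergence safeguards the abstract alludes to:

```python
import numpy as np

def nnls_pg_bb(A, b, iters=500):
    """min 0.5*||Ax - b||^2 subject to x >= 0, via projected gradient
    with a Barzilai-Borwein step length (sketch, no safeguards)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                     # gradient at x
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safe initial step
    for _ in range(iters):
        x_new = np.maximum(0.0, x - alpha * g)  # gradient step + projection
        g_new = A.T @ (A @ x_new - b)
        s, z = x_new - x, g_new - g
        sz = s @ z
        if sz > 1e-12:
            alpha = (s @ s) / sz              # Barzilai-Borwein step
        x, g = x_new, g_new
    return x
```

The BB step is non-monotonic by design, which is exactly the behavior the talk's title refers to; in practice it converges quickly on well-conditioned problems.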


PDF PDF [BibTex]

Gaussian Mixture Modeling with Gaussian Process Latent Variable Models

Nickisch, H., Rasmussen, C.

Max Planck Institute for Biological Cybernetics, June 2010 (techreport)

Abstract
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets.


Web [BibTex]

Matrix Approximation Problems

Sra, S.

EU Regional School: Rheinisch-Westfälische Technische Hochschule Aachen, May 2010 (talk)


PDF AVI [BibTex]

BCI2000 and Python

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 realtime software package.


PDF [BibTex]

Extending BCI2000 Functionality with Your Own C++ Code

Hill, NJ.

Invited lecture at the 7th International BCI2000 Workshop, Pacific Grove, CA, USA, May 2010 (talk)

Abstract
A tutorial, with exercises, on how to use the BCI2000 C++ framework to write your own real-time signal-processing modules.


[BibTex]

Generalized Proximity and Projection with Norms and Mixed-norms

Sra, S.

(192), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2010 (techreport)

Abstract
We discuss generalized proximity operators (GPO) and their associated generalized projection problems. On inputs of size n, we show how to efficiently apply GPOs and generalized projections for separable norms and distance-like functions to accuracy ε in O(n log(1/ε)) time. We also derive projection algorithms that run theoretically in O(n log n log(1/ε)) time but can, for suitable parameter ranges, empirically outperform the O(n log(1/ε)) projection method. The proximity and projection tasks are either separable, and solved directly, or are reduced to a single root-finding step. We highlight that as a byproduct, our analysis also yields an O(n log(1/ε)) (weakly linear-time) procedure for Euclidean projections onto the l1,∞-norm ball; previously only an O(n log n) method was known. We provide empirical evaluation to illustrate the performance of our methods, noting that for the l1,∞-norm projection, our implementation is more than two orders of magnitude faster than the previously known method.
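The sort-and-threshold flavor of such projection algorithms can be illustrated on the simpler, classical problem of Euclidean projection onto the l1-norm ball (an O(n log n) sketch for intuition only; the report's l1,∞ case is more involved):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}
    by the classic O(n log n) sort-and-threshold rule."""
    if np.abs(v).sum() <= radius:
        return v.copy()                       # already feasible
    u = np.sort(np.abs(v))[::-1]              # magnitudes, descending
    css = np.cumsum(u)
    # largest k with u[k] * (k+1) > css[k] - radius
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The single scalar theta plays the role of the root found in the abstract's "single root-finding step"; here the sort makes it available in closed form.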


PDF [BibTex]

Machine-Learning Methods for Decoding Intentional Brain States

Hill, NJ.

Symposium "Non-Invasive Brain Computer Interfaces: Current Developments and Applications" (BIOMAG), March 2010 (talk)

Abstract
Brain-computer interfaces (BCI) work by making the user perform a specific mental task, such as imagining moving body parts or performing some other covert mental activity, or attending to a particular stimulus out of an array of options, in order to encode their intention into a measurable brain signal. Signal-processing and machine-learning techniques are then used to decode the measured signal to identify the encoded mental state and hence extract the user's initial intention. The high-noise, high-dimensional nature of brain signals makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature-extraction techniques, such as template matching and band-power estimation, coupled to simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant since "it doesn't matter what classifier you use once your features are extracted." Using examples from our own MEG and EEG experiments, I'll demonstrate how machine-learning principles can be applied in order to improve BCI performance, if they are formulated in a domain-specific way. The result is a type of data-driven analysis that is more than "just" classification, and can be used to find better feature extractors.


PDF Web [BibTex]

Cooperative Cuts: Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

(189), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, March 2010 (techreport)

Abstract
We introduce a problem we call cooperative cut, where the goal is to find a minimum-cost graph cut but where a submodular function is used to define the cost of subsets of edges. That is, the cost of an edge that is added to the current cut set C depends on the edges in C. This generalization of the cost in the standard min-cut problem to a submodular cost function immediately makes the problem harder. Not only do we prove NP hardness even for nonnegative submodular costs, but we also show a lower bound of Omega(|V|^(1/3)) on the approximation factor for the problem. On the positive side, we propose and compare four approximation algorithms with an overall approximation factor of min { |V|/2, |C*|, O( sqrt(|E|) log |V|), |P_max|}, where C* is the optimal solution, and P_max is the longest s, t path across the cut between given s, t. We also introduce additional heuristics for the problem which have attractive properties from the perspective of practical applications and implementations, in that existing fast min-cut libraries may be used as subroutines. Both our approximation algorithms and our heuristics appear to do well in practice.
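The heuristics above use standard min-cut as a subroutine. As background, here is a minimal Edmonds-Karp sketch for the ordinary (modular-cost) min-cut value; it illustrates the subroutine only, not cooperative cuts themselves:

```python
from collections import deque

def min_cut_value(n, edges, s, t):
    """Value of a minimum s-t cut (= max flow) by Edmonds-Karp.
    edges: list of (u, v, capacity) directed edges on nodes 0..n-1."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                      # no augmenting path left
        # bottleneck capacity along the path, then augment
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

By max-flow/min-cut duality the returned flow value equals the minimum cut cost, which is what the heuristics repeatedly query.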


PDF [BibTex]

PAC-Bayesian Analysis in Unsupervised Learning

Seldin, Y.

Foundations and New Trends of PAC Bayesian Learning Workshop, March 2010 (talk)


PDF Web [BibTex]

Learning Motor Primitives for Robotics

Kober, J., Peters, J.

EVENT Lab: Reinforcement Learning in Robotics and Virtual Reality, January 2010 (talk)

Abstract
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine learning techniques in this context. Employing the Dynamic Systems Motor primitives originally introduced by Ijspeert et al. (2003), appropriate learning algorithms for a concerted approach of both imitation and reinforcement learning are presented. Using these algorithms new motor skills, i.e., Ball-in-a-Cup, Ball-Paddling and Dart-Throwing, are learned.


[BibTex]

Information-theoretic inference of common ancestors

Steudel, B., Ay, N.

Computing Research Repository (CoRR), abs/1010.5720, pages: 18, 2010 (techreport)


Web [BibTex]

2008


BCPy2000

Hill, N., Schreiner, T., Puzicha, C., Farquhar, J.

Workshop "Machine Learning Open-Source Software" at NIPS, December 2008 (talk)


Web [BibTex]

Logistic Regression for Graph Classification

Shervashidze, N., Tsuda, K.

NIPS Workshop on "Structured Input - Structured Output" (NIPS SISO), December 2008 (talk)

Abstract
In this paper we deal with graph classification. We propose a new algorithm for performing sparse logistic regression for graphs, which is comparable in accuracy with other methods of graph classification and additionally produces probabilistic output. Sparsity is required for interpretability, which is often necessary in domains such as bioinformatics or chemoinformatics.


Web [BibTex]

New Projected Quasi-Newton Methods with Applications

Sra, S.

Microsoft Research Tech-talk, December 2008 (talk)

Abstract
Box-constrained convex optimization problems are central to several applications in a variety of fields such as statistics, psychometrics, signal processing, medical imaging, and machine learning. Two fundamental examples are the non-negative least squares (NNLS) problem and the non-negative Kullback-Leibler (NNKL) divergence minimization problem. The non-negativity constraints are usually based on an underlying physical restriction: e.g., when dealing with applications in astronomy, tomography, statistical estimation, or image restoration, the underlying parameters represent physical quantities such as concentration, weight, intensity, or frequency counts, and are therefore only interpretable with non-negative values. Several modern optimization methods can be inefficient for simple problems such as NNLS and NNKL as they are really designed to handle far more general and complex problems. In this work we develop two simple quasi-Newton methods for solving box-constrained (differentiable) convex optimization problems that utilize the well-known BFGS and limited-memory BFGS updates. We position our methods between projected gradient (Rosen, 1960) and projected Newton (Bertsekas, 1982) methods, and prove their convergence under a simple Armijo step-size rule. We illustrate our methods by showing applications to: image deblurring, Positron Emission Tomography (PET) image reconstruction, and Non-negative Matrix Approximation (NMA). On medium-sized data we observe performance competitive to established procedures, while for larger data the results are even better.
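The Armijo step-size rule mentioned above, combined with projection onto the box, can be sketched as plain projected gradient (Rosen-style). This is a simplified illustration of the projection-plus-Armijo mechanics, not the quasi-Newton methods of the talk:

```python
import numpy as np

def box_pg_armijo(grad_f, f, x0, lo, hi, iters=50, beta=0.5, sigma=1e-4):
    """Projected gradient for min f(x) subject to lo <= x <= hi,
    with an Armijo backtracking step-size rule (sketch)."""
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(x0)
    for _ in range(iters):
        g = grad_f(x)
        t, fx = 1.0, f(x)
        while True:
            x_new = proj(x - t * g)
            # Armijo sufficient-decrease condition along the projection arc
            if f(x_new) <= fx + sigma * (g @ (x_new - x)) or t < 1e-12:
                break
            t *= beta
        x = x_new
    return x
```

Example usage: for f(x) = 0.5*||x - c||^2 on the box [0, 1]^n the minimizer is simply c clipped to the box, which the sketch recovers.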


PDF [BibTex]

Frequent Subgraph Retrieval in Geometric Graph Databases

Nowozin, S., Tsuda, K.

(180), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
Discovery of knowledge from geometric graph databases is of particular importance in chemistry and biology, because chemical compounds and proteins are represented as graphs with 3D geometric coordinates. In such applications, scientists are not interested in the statistics of the whole database. Instead they need information about a novel drug candidate or protein at hand, represented as a query graph. We propose a polynomial-delay algorithm for geometric frequent subgraph retrieval. It enumerates all subgraphs of a single given query graph which are frequent geometric epsilon-subgraphs under the entire class of rigid geometric transformations in a database. By using geometric epsilon-subgraphs, we achieve tolerance against variations in geometry. We compare the proposed algorithm to gSpan on chemical compound data, and we show that for a given minimum support the total number of frequent patterns is substantially limited by requiring geometric matching. Although the computation time per pattern is larger than for non-geometric graph mining, the total time is within a reasonable level even for small minimum support.


PDF [BibTex]

Simultaneous Implicit Surface Reconstruction and Meshing

Giesen, J., Maier, M., Schölkopf, B.

(179), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We investigate an implicit method to compute a piecewise linear representation of a surface from a set of sample points. As implicit surface functions we use the weighted sum of piecewise linear kernel functions. For such a function we can partition R^d in such a way that these functions are linear on the subsets of the partition. For each subset in the partition we can then compute the zero level set of the function exactly as the intersection of a hyperplane with the subset.
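The key step, intersecting the zero level set of a linear function with a subset of the partition, can be illustrated in the simplest 2-D case: the zero segment of a linear interpolant on a single triangle. This is a hedged sketch of the idea only; degenerate vertex-zero configurations are handled only roughly:

```python
def zero_segment(tri, vals):
    """Intersection of the zero level set of the linear interpolant
    with a triangle. tri: three 2-D points; vals: function values there.
    Returns the list of edge crossing points (0, 1, or 2 of them)."""
    pts = []
    for i in range(3):
        j = (i + 1) % 3
        a, b = vals[i], vals[j]
        if a == 0.0:
            pts.append(tuple(tri[i]))          # zero exactly at a vertex
        elif (a < 0) != (b < 0):               # sign change along edge i-j
            t = a / (a - b)                    # linear zero-crossing parameter
            pts.append(tuple(tri[i][k] + t * (tri[j][k] - tri[i][k])
                             for k in range(2)))
    return pts
```

Because the function is linear on the triangle, the crossing points are exact, which mirrors the exactness claim in the abstract.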


PDF [BibTex]

Taxonomy Inference Using Kernel Dependence Measures

Blaschko, M., Gretton, A.

(181), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2008 (techreport)

Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to substantially improve the quality of the clustering, when compared with state-of-the-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text data.


PDF [BibTex]

MR-Based PET Attenuation Correction: Initial Results for Whole Body

Hofmann, M., Steinke, F., Aschoff, P., Lichy, M., Brady, M., Schölkopf, B., Pichler, B.

Medical Imaging Conference, October 2008 (talk)


[BibTex]

Nonparametric Independence Tests: Space Partitioning and Kernel Approaches

Gretton, A., Györfi, L.

19th International Conference on Algorithmic Learning Theory (ALT08), October 2008 (talk)


PDF Web [BibTex]

Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models

Seeger, M., Nickisch, H.

(175), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)


PDF [BibTex]

Block-Iterative Algorithms for Non-Negative Matrix Approximation

Sra, S.

(176), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

Abstract
In this report we present new algorithms for non-negative matrix approximation (NMA), commonly known as the NMF problem. Our methods improve upon the well-known methods of Lee & Seung [19] for both the Frobenius norm as well as the Kullback-Leibler divergence versions of the problem. For the latter problem, our results are especially interesting because it seems to have witnessed much less algorithmic progress compared to the Frobenius norm NMA problem. Our algorithms are based on a particular block-iterative acceleration technique for EM, which preserves the multiplicative nature of the updates and also ensures monotonicity. Furthermore, our algorithms also naturally apply to the Bregman-divergence NMA algorithms of Dhillon and Sra [8]. Experimentally, we show that our algorithms outperform the traditional Lee/Seung approach most of the time.
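For reference, the baseline Lee-Seung multiplicative updates for the Frobenius-norm problem, which the report improves upon, look as follows. This is the standard textbook sketch, not the report's accelerated block-iterative algorithms; the small eps guard against division by zero is an implementation assumption:

```python
import numpy as np

def nmf_lee_seung(V, k, iters=300, eps=1e-9, seed=0):
    """Baseline Lee-Seung multiplicative updates for the Frobenius-norm
    NMF problem: min ||V - W H||_F with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative H update
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative W update
    return W, H
```

The purely multiplicative form is what keeps the factors non-negative throughout, the property the report's accelerated updates are designed to preserve.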


PDF [BibTex]

Approximation Algorithms for Bregman Clustering, Co-clustering and Tensor Clustering

Sra, S., Jegelka, S., Banerjee, A.

(177), Max-Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2008 (techreport)

Abstract
The Euclidean K-means problem is fundamental to clustering and over the years it has been intensely investigated. More recently, generalizations such as Bregman k-means [8], co-clustering [10], and tensor (multi-way) clustering [40] have also gained prominence. A well-known computational difficulty encountered by these clustering problems is the NP-Hardness of the associated optimization task, and commonly used methods guarantee at most local optimality. Consequently, approximation algorithms of varying degrees of sophistication have been developed, though largely for the basic Euclidean K-means (or l1-norm K-median) problem. In this paper we present approximation algorithms for several Bregman clustering problems by building upon the recent paper of Arthur and Vassilvitskii [5]. Our algorithms obtain objective values within a factor O(log K) for Bregman k-means, Bregman co-clustering, Bregman tensor clustering, and weighted kernel k-means. To our knowledge, except for some special cases, approximation algorithms have not been considered for these general clustering problems. There are several important implications of our work: (i) under the same assumptions as Ackermann et al. [1] it yields a much faster algorithm (non-exponential in K, unlike [1]) for information-theoretic clustering, (ii) it answers several open problems posed by [4], including generalizations to Bregman co-clustering, and tensor clustering, (iii) it provides practical and easy-to-implement methods, in contrast to several other common approximation approaches.
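The seeding procedure of Arthur and Vassilvitskii that the paper builds upon can be sketched for the squared-Euclidean special case (D^2 sampling). This is an illustrative sketch of the seeding step only, not the paper's Bregman generalization:

```python
import numpy as np

def dsq_seeding(X, k, seed=0):
    """k-means++-style D^2 seeding (Arthur & Vassilvitskii): each new
    center is sampled with probability proportional to its squared
    distance to the nearest center chosen so far."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]        # first center: uniform
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```

Replacing the squared Euclidean distance with a Bregman divergence is, loosely speaking, the direction the paper generalizes in.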


PDF [BibTex]

Combining Appearance and Motion for Human Action Classification in Videos

Dhillon, P., Nowozin, S., Lampert, C.

(174), Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany, August 2008 (techreport)

Abstract
We study the question of activity classification in videos and present a novel approach for recognizing human action categories in videos by combining information from appearance and motion of human body parts. Our approach uses a tracking step which involves Particle Filtering and a local non-parametric clustering step. The motion information is provided by the trajectory of the cluster modes of a local set of particles. The statistical information about the particles of that cluster over a number of frames provides the appearance information. We then use a "Bag of Words" model to build one histogram per video sequence from the set of these robust appearance and motion descriptors. These histograms provide characteristic information which helps us to discriminate among various human actions and thus classify them correctly. We tested our approach on the standard KTH and Weizmann human action datasets and the results were comparable to the state of the art. Additionally, our approach is able to distinguish between activities that involve the motion of the complete body and those in which only certain body parts move. In other words, our method discriminates well between activities with "gross motion" like running and jogging, and "local motion" like waving and boxing.
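The "Bag of Words" histogram step can be sketched as nearest-codeword quantization followed by a normalized count. This is a generic illustration of the histogram construction; the codebook construction and the appearance/motion descriptors themselves are outside this sketch:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize each descriptor to its nearest codeword and return the
    normalized bag-of-words histogram for one video sequence."""
    # pairwise squared distances, shape (n_descriptors, n_codewords)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

One such histogram per sequence then serves as the fixed-length feature vector fed to the classifier.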


PDF [BibTex]
