2018


Poster Abstract: Toward Fast Closed-loop Control over Multi-hop Low-power Wireless Networks

Mager, F., Baumann, D., Trimpe, S., Zimmerling, M.

Proceedings of the 17th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pages: 158-159, Porto, Portugal, April 2018 (poster)

ics

DOI Project Page [BibTex]

Representation of sensory uncertainty in macaque visual cortex

Goris, R., Henaff, O., Meding, K.

Computational and Systems Neuroscience (COSYNE) 2018, March 2018 (poster)

ei

[BibTex]

Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

7th AREADNE Conference on Research in Encoding and Decoding of Neural Ensembles, 2018 (poster)

ei

link (url) Project Page [BibTex]

Photorealistic Video Super Resolution

Pérez-Pellitero, E., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

Workshop and Challenge on Perceptual Image Restoration and Manipulation (PIRM) at the 15th European Conference on Computer Vision (ECCV), 2018 (poster)

ei

[BibTex]

Retinal image quality of the human eye across the visual field

Meding, K., Hirsch, M., Wichmann, F. A.

14th Biannual Conference of the German Society for Cognitive Science (KOGWIS 2018), 2018 (poster)

ei

[BibTex]

2017


Improving performance of linear field generation with multi-coil setup by optimizing coils position

Aghaeifar, A., Loktyushin, A., Eschelbach, M., Scheffler, K.

Magnetic Resonance Materials in Physics, Biology and Medicine, 30(Supplement 1):S259, 34th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2017 (poster)

ei

link (url) DOI [BibTex]

Estimating B0 inhomogeneities with projection FID navigator readouts

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

ei

link (url) [BibTex]

Image Quality Improvement by Applying Retrospective Motion Correction on Quantitative Susceptibility Mapping and R2*

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J.

25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), April 2017 (poster)

ei

link (url) [BibTex]

Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

ESI Systems Neuroscience Conference (ESI-SyNC 2017): Principles of Structural and Functional Connectivity, 2017 (poster)

ei

[BibTex]

2013


Coupling between spiking activity and beta band spatio-temporal patterns in the macaque PFC

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

43rd Annual Meeting of the Society for Neuroscience (Neuroscience), 2013 (poster)

ei

[BibTex]

Gaussian Process Vine Copulas for Multivariate Dependence

Lopez-Paz, D., Hernandez-Lobato, J., Ghahramani, Z.

International Conference on Machine Learning (ICML), 2013 (poster)

ei

PDF [BibTex]

Domain Generalization via Invariant Feature Representation

Muandet, K., Balduzzi, D., Schölkopf, B.

30th International Conference on Machine Learning (ICML2013), 2013 (poster)

ei

PDF [BibTex]

Analyzing locking of spikes to spatio-temporal patterns in the macaque prefrontal cortex

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N., Besserve, M.

Bernstein Conference, 2013 (poster)

ei

DOI [BibTex]

One-class Support Measure Machines for Group Anomaly Detection

Muandet, K., Schölkopf, B.

29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013 (poster)

ei

PDF [BibTex]

The Randomized Dependence Coefficient

Lopez-Paz, D., Hennig, P., Schölkopf, B.

Neural Information Processing Systems (NIPS), 2013 (poster)

ei pn

PDF [BibTex]

Characterization of different types of sharp-wave ripple signatures in the CA1 of the macaque hippocampus

Ramirez-Villegas, J., Logothetis, N., Besserve, M.

4th German Neurophysiology PhD Meeting Networks, 2013 (poster)

ei

Web [BibTex]

2007


MR-Based PET Attenuation Correction: Method and Validation

Hofmann, M., Steinke, F., Scheel, V., Charpiat, G., Brady, M., Schölkopf, B., Pichler, B.

2007 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC 2007), 2007(M16-6):1-2, November 2007 (poster)

Abstract
PET/MR combines the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET). For quantitative PET information, correction of tissue photon attenuation is mandatory. Usually in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating source, or from the CT scan in case of combined PET/CT. In the case of a PET/MR scanner, there is insufficient space for the rotating source and ideally one would want to calculate the attenuation map from the MR image instead. Since MR images provide information about proton density of the different tissue types, it is not trivial to use this data for PET attenuation correction. We present a method for predicting the PET attenuation map from a given MR image, using a combination of atlas-registration and recognition of local patterns. Using leave-one-out cross-validation we show on a database of 16 MR-CT image pairs that our method reliably allows estimating the CT image from the MR image. Subsequently, as in PET/CT, the PET attenuation map can be predicted from the CT image. On an additional dataset of MR/CT/PET triplets we quantitatively validate that our approach allows PET quantification with an error that is smaller than what would be clinically significant. We demonstrate our approach on T1-weighted human brain scans. However, the presented methods are more general and current research focuses on applying the established methods to human whole body PET/MRI applications.
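A rough sketch of the leave-one-out evaluation described in the abstract (Python/NumPy; predict_pseudo_ct is a hypothetical placeholder for the atlas-registration plus local-pattern-recognition step, not the authors' code):

    import numpy as np

    def leave_one_out_error(mr_images, ct_images, predict_pseudo_ct):
        """Mean absolute CT prediction error, holding out one MR-CT pair at a time."""
        errors = []
        for i in range(len(mr_images)):
            # train on all pairs except subject i (hypothetical prediction function)
            atlas = [(mr_images[j], ct_images[j]) for j in range(len(mr_images)) if j != i]
            predicted_ct = predict_pseudo_ct(mr_images[i], atlas)
            errors.append(np.mean(np.abs(predicted_ct - ct_images[i])))
        return float(np.mean(errors))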

ei

PDF PDF [BibTex]

Estimating receptive fields without spike-triggering

Macke, J., Zeck, G., Bethge, M.

37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007), 37(768.1):1, November 2007 (poster)

ei

Web [BibTex]

Evaluation of Deformable Registration Methods for MR-CT Atlas Alignment

Scheel, V., Hofmann, M., Rehfeld, N., Judenhofer, M., Claussen, C., Pichler, B.

2007 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC 2007), 2007(M13-121):1, November 2007 (poster)

Abstract
Deformable registration methods are essential for multimodality imaging. Many different methods exist, but due to the complexity of the deformed images a direct comparison of the methods is difficult. One particular application that requires high accuracy registration of MR-CT images is atlas-based attenuation correction for PET/MR. We compare four deformable registration algorithms for 3D image data included in the Open Source "National Library of Medicine Insight Segmentation and Registration Toolkit" (ITK). An interactive landmark based registration using MiraView (Siemens) has been used as gold standard. The automatic algorithms provided by ITK are based on the Mattes mutual information and normalized mutual information metrics. The transformations are calculated by interpolating over a uniform B-Spline grid lying over the image to be warped. The algorithms were tested on head images from 10 subjects. We implemented a measure which segments head interior bone and air based on the CT images and low intensity classes of the corresponding MRI images. The segmentation of bone is performed by individually calculating the lowest Hounsfield unit threshold for each CT image. The comparison is made by quantifying the number of overlapping voxels of the remaining structures. We show that the algorithms provided by ITK achieve similar or better accuracy than the time-consuming interactive landmark based registration. Thus, ITK provides an ideal platform to generate accurately fused datasets from different modalities, required for example for building training datasets for atlas-based attenuation correction.
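A minimal sketch of the overlap-based accuracy measure described in the abstract (NumPy; the per-subject Hounsfield-unit threshold and the use of a Dice-style overlap score are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def bone_mask(ct_volume, hu_threshold):
        """Binary bone mask from a CT volume, given a subject-specific HU threshold."""
        return ct_volume >= hu_threshold

    def overlap_score(mask_a, mask_b):
        """Dice-style overlap: 2*|A & B| / (|A| + |B|); 1.0 means perfect agreement."""
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum())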

ei

PDF [BibTex]

A time/frequency decomposition of information transmission by LFPs and spikes in the primary visual cortex

Belitski, A., Gretton, A., Magri, C., Murayama, Y., Montemurro, M., Logothetis, N., Panzeri, S.

37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007), 37, pages: 1, November 2007 (poster)

ei

Web [BibTex]

Mining expression-dependent modules in the human interaction network

Georgii, E., Dietmann, S., Uno, T., Pagel, P., Tsuda, K.

BMC Bioinformatics, 8(Suppl. 8):S4, November 2007 (poster)

ei

PDF DOI [BibTex]

A Hilbert Space Embedding for Distributions

Smola, A., Gretton, A., Song, L., Schölkopf, B.

Proceedings of the 10th International Conference on Discovery Science (DS 2007), 10, pages: 40-41, October 2007 (poster)

Abstract
While kernel methods are the basis of many popular techniques in supervised learning, they are less commonly used in testing, estimation, and analysis of probability distributions, where information theoretic approaches rule the roost. However, it becomes difficult to estimate mutual information or entropy if the data are high dimensional.
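In standard notation (not taken verbatim from the abstract), the kernel mean embedding and the resulting distance between distributions P and Q are

    \mu_P := \mathbb{E}_{x \sim P}\,[k(x, \cdot)] \in \mathcal{H}, \qquad
    \mathrm{MMD}(P, Q) := \lVert \mu_P - \mu_Q \rVert_{\mathcal{H}},

with the empirical estimate

    \widehat{\mathrm{MMD}}^2 = \frac{1}{m^2}\sum_{i,j} k(x_i, x_j)
      - \frac{2}{mn}\sum_{i,j} k(x_i, y_j)
      + \frac{1}{n^2}\sum_{i,j} k(y_i, y_j),

which requires only kernel evaluations and therefore avoids the density, entropy, or mutual-information estimation that becomes difficult in high dimensions.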

ei

PDF PDF DOI [BibTex]

Studying the effects of noise correlations on population coding using a sampling method

Ecker, A., Berens, P., Bethge, M., Logothetis, N., Tolias, A.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 21, September 2007 (poster)

ei

PDF [BibTex]

Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Berens, P., Bethge, M.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 19, September 2007 (poster)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data---the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.
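For reference, the pairwise maximum entropy model mentioned above has the standard Ising-like form (generic notation, not the poster's):

    P(\mathbf{x}) = \frac{1}{Z} \exp\!\Big( \sum_i h_i x_i + \sum_{i<j} J_{ij}\, x_i x_j \Big),
    \qquad x_i \in \{0, 1\},

where the h_i set the individual firing probabilities, the J_{ij} capture pairwise interactions, and Z normalizes. Fitting h and J exactly is what becomes infeasible in high dimensions, which the near-maximum entropy model avoids by giving its parameters in closed form.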

ei

PDF [BibTex]

Learning the Influence of Spatio-Temporal Variations in Local Image Structure on Visual Saliency

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

10th T{\"u}binger Wahrnehmungskonferenz (TWK 2007), 10, pages: 1, July 2007 (poster)

Abstract
Computational models for bottom-up visual attention traditionally consist of a bank of Gabor-like or Difference-of-Gaussians filters and a nonlinear combination scheme which combines the filter responses into a real-valued saliency measure [1]. Recently it was shown that a standard machine learning algorithm can be used to derive a saliency model from human eye movement data with a very small number of additional assumptions. The learned model is much simpler than previous models, but nevertheless has state-of-the-art prediction performance [2]. A central result from this study is that DoG-like center-surround filters emerge as the unique solution to optimizing the predictivity of the model. Here we extend the learning method to the temporal domain. While the previous model [2] predicts visual saliency based on local pixel intensities in a static image, our model also takes into account temporal intensity variations. We find that the learned model responds strongly to temporal intensity changes occurring 200-250 ms before a saccade is initiated. This delay coincides with the typical saccadic latencies, indicating that the learning algorithm has extracted a meaningful statistic from the training data. In addition, we show that the model correctly predicts a significant proportion of human eye movements on previously unseen test data.
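A center-surround (Difference-of-Gaussians) response of the kind the learned model converges to can be sketched as follows (SciPy/NumPy; the filter scales are illustrative, not those of the study):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
        """Difference-of-Gaussians (center-surround) filter response."""
        img = image.astype(float)
        center = gaussian_filter(img, sigma_center)      # small-scale blur (center)
        surround = gaussian_filter(img, sigma_surround)  # larger-scale blur (surround)
        return center - surround  # large where a bright center sits on a darker surround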

ei

Web [BibTex]

Better Codes for the P300 Visual Speller

Biessmann, F., Hill, N., Farquhar, J., Schölkopf, B.

G{\"o}ttingen Meeting of the German Neuroscience Society, 7, pages: 123, March 2007 (poster)

ei

PDF [BibTex]

Do We Know What the Early Visual System Computes?

Bethge, M., Kayser, C.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 352, March 2007 (poster)

Abstract
Decades of research have provided much data and insight into the mechanisms of the early visual system. Currently, however, there is great controversy on whether these findings can provide us with a thorough functional understanding of what the early visual system does, or formulated differently, of what it computes. At the Society for Neuroscience meeting 2005 in Washington, a symposium was held on the question "Do we know what the early visual system does", which was accompanied by a widely regarded publication in the Journal of Neuroscience. Yet, that discussion was rather specialized as it predominantly addressed the question of how well neural responses in retina, LGN, and cortex can be predicted from noise stimuli, but did not emphasize the question of whether we understand what the function of these early visual areas is. Here we will concentrate on this neuro-computational aspect of vision. Experts from neurobiology, psychophysics and computational neuroscience will present studies which approach this question from different viewpoints and promote a critical discussion of whether we actually understand what early areas contribute to the processing and perception of visual information.

ei

PDF [BibTex]

Implicit Wiener Series for Estimating Nonlinear Receptive Fields

Franz, MO., Macke, JH., Saleem, A., Schultz, SR.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 1199, March 2007 (poster)

ei

PDF [BibTex]

3D Reconstruction of Neural Circuits from Serial EM Images

Maack, N., Kapfer, C., Macke, J., Schölkopf, B., Denk, W., Borst, A.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 1195, March 2007 (poster)

ei

PDF [BibTex]

Identifying temporal population codes in the retina using canonical correlation analysis

Bethge, M., Macke, J., Gerwinn, S., Zeck, G.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 359, March 2007 (poster)

ei

PDF PDF [BibTex]

Bayesian Neural System identification: error bars, receptive fields and neural couplings

Gerwinn, S., Seeger, M., Zeck, G., Bethge, M.

31st G{\"o}ttingen Neurobiology Conference, 31, pages: 360, March 2007 (poster)

ei

PDF PDF [BibTex]

About the Triangle Inequality in Perceptual Spaces

Jäkel, F., Schölkopf, B., Wichmann, F.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 308, February 2007 (poster)

ei

PDF Web [BibTex]

Center-surround filters emerge from optimizing predictivity in a free-viewing task

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 207, February 2007 (poster)

ei

PDF Web [BibTex]

Nonlinear Receptive Field Analysis: Making Kernel Methods Interpretable

Kienzle, W., Macke, J., Wichmann, F., Schölkopf, B., Franz, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 16, February 2007 (poster)

ei

PDF Web [BibTex]

Estimating Population Receptive Fields in Space and Time

Macke, J., Zeck, G., Bethge, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 44, February 2007 (poster)

ei

PDF Web [BibTex]

2003


Texture and haptic cues in slant discrimination: Measuring the effect of texture type on cue combination

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

Journal of Vision, 3(12):26, 2003 Fall Vision Meeting of the Optical Society of America, December 2003 (poster)

Abstract
In a number of models of depth cue combination the depth percept is constructed via a weighted average combination of independent depth estimations. The influence of each cue in such an average depends on the reliability of the source of information (Young, Landy, & Maloney, 1993; Ernst & Banks, 2002). In particular, Ernst & Banks (2002) formulate the combination performed by the human brain as that of the minimum variance unbiased estimator that can be constructed from the available cues. Using slant discrimination and slant judgment via probe adjustment as tasks, we have observed systematic differences in performance of human observers when a number of different types of textures were used as cue to slant (Rosas, Wichmann & Wagemans, 2003). If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. We have combined these texture types with object motion, but the obtained results are difficult to reconcile with the unbiased minimum variance estimator model (Rosas & Wagemans, 2003). This apparent failure of such a model might be explained by the existence of a coupling of texture and motion, violating the assumption of independence of cues. Hillis, Ernst, Banks, & Landy (2002) have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues: disparity and texture) the single-cue information is lost, suggesting a coupling between these cues. In the present study we therefore combine the different texture types with haptic information in a slant discrimination task, to test whether in the between-modality condition the texture cue and the haptic cue to slant are combined as predicted by an unbiased, minimum variance estimator model.
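For reference, the minimum variance unbiased ("reliability-weighted") combination rule referred to above can be written as (generic notation, not the poster's):

    \hat{s} = \sum_i w_i\, \hat{s}_i, \qquad
    w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}, \qquad
    \sigma_{\hat{s}}^2 = \Big( \sum_i 1/\sigma_i^2 \Big)^{-1},

where \hat{s}_i is the slant estimate provided by cue i (texture, motion or haptics) and \sigma_i^2 its variance; under this model, the weight predicted for the texture cue follows from the measured slopes of the psychometric functions.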

ei

Web DOI [BibTex]

Phase Information and the Recognition of Natural Images

Braun, D., Wichmann, F., Gegenfurtner, K.

6, pages: 138, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Fourier phase plays an important role in determining image structure. For example, when the phase spectrum of an image showing a flower is swapped with the phase spectrum of an image showing a tank, then we will usually perceive a tank in the resulting image, even though the amplitude spectrum is still that of the flower. Also, when the phases of an image are randomly swapped across frequencies, the resulting image becomes impossible to recognize. Our goal was to evaluate the effect of phase manipulations in a more quantitative manner. On each trial subjects viewed two images of natural scenes. The subject had to indicate which one of the two images contained an animal. The spectra of the images were manipulated by adding random phase noise at each frequency. The phase noise was uniformly distributed in the interval [-φ, +φ], where φ was varied between 0 and 180 degrees. Image pairs were displayed for 100 msec. Subjects were remarkably resistant to the addition of phase noise. Even with [-120, +120] degree noise, subjects still were at a level of 75% correct. The introduction of phase noise leads to a reduction of image contrast. Subjects were slightly better than a simple prediction based on this contrast reduction. However, when contrast response functions were measured in the same experimental paradigm, we found that performance in the phase noise experiment was significantly lower than that predicted by the corresponding contrast reduction.
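A minimal sketch of the phase-noise manipulation described above (NumPy; an assumed implementation, not the authors' code):

    import numpy as np

    def add_phase_noise(image, max_deg, rng=None):
        """Add phase noise drawn uniformly from [-max_deg, +max_deg] at every frequency."""
        rng = np.random.default_rng() if rng is None else rng
        spectrum = np.fft.fft2(image)
        noise = rng.uniform(-np.deg2rad(max_deg), np.deg2rad(max_deg), size=spectrum.shape)
        # keep the amplitude spectrum, perturb only the phases
        noisy = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise))
        # independent noise per frequency breaks conjugate symmetry, so take the real part
        return np.real(np.fft.ifft2(noisy))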

ei

Web [BibTex]

Constraints measures and reproduction of style in robot imitation learning

Bakir, GH., Ilg, W., Franz, MO., Giese, M.

6, pages: 70, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Imitation learning is frequently discussed as a method for generating complex behaviors in robots by imitating human actors. The kinematic and the dynamic properties of humans and robots are typically quite different, however. For this reason observed human trajectories cannot be directly transferred to robots, even if their geometry is humanoid. Instead the human trajectory must be approximated by trajectories that can be realized by the robot. During this approximation deviations from the human trajectory may arise that change the style of the executed movement. Alternatively, the style of the movement might be well reproduced, but the imitated trajectory might be suboptimal with respect to different constraint measures from robotics control, leading to non-robust behavior. The goal of the presented work is to quantify this trade-off between "imitation quality" and constraint compatibility for the imitation of complex writing movements. In our experiment, we used trajectory data from human writing movements (see the abstract of Ilg et al. in this volume). The human trajectories were mapped onto robot trajectories by minimizing an error measure that integrates constraints that are important for the imitation of movement style and a regularizing constraint that ensures smooth joint trajectories with low velocities. In a first experiment, both the end-effector position and the shoulder angle of the robot were optimized in order to achieve good imitation together with accurate control of the end-effector position. In a second experiment only the end-effector trajectory was imitated whereas the motion of the elbow joint was determined using the optimal inverse kinematic solution for the robot. For both conditions different constraint measures (dexterity and relative joint-limit distances) and a measure for imitation quality were assessed. By controlling the weight of the regularization term we can vary continuously between robot behavior optimizing imitation quality, and behavior minimizing joint velocities.
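The trade-off described above can be summarized as a regularized objective over the robot joint trajectories q(t) (schematic form, not the poster's exact notation):

    q^{*} = \arg\min_{q} \; E_{\mathrm{imitation}}\big( x_{\mathrm{robot}}(q),\, x_{\mathrm{human}} \big)
            + \lambda\, E_{\mathrm{smooth}}(\dot{q}),

where E_{\mathrm{imitation}} measures the deviation of the imitated trajectory (end-effector and, in the first experiment, shoulder angle) from the human one, E_{\mathrm{smooth}} penalizes high joint velocities, and varying the weight \lambda traces out the continuum between style-faithful and constraint-compatible behavior.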

ei

PDF Web [BibTex]

Study of Human Classification using Psychophysics and Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

6, pages: 149, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
We attempt to reach a better understanding of classification in humans using both psychophysical and machine learning techniques. In our psychophysical paradigm the stimuli presented to the human subjects are modified using machine learning algorithms according to their responses. Frontal views of human faces taken from a processed version of the MPI face database are employed for a gender classification task. The processing assures that all heads have the same mean intensity, the same pixel-surface area and are centered. This processing stage is followed by a smoothing of the database in order to eliminate, as much as possible, scanning artifacts. Principal Component Analysis is used to obtain a low-dimensional representation of the faces in the database. A subject is asked to classify the faces and experimental parameters such as class (i.e. female/male), confidence ratings and reaction times are recorded. A mean classification error of 14.5% is measured and, on average, 0.5 males are classified as females and 21.3 females as males. The mean reaction time for the correctly classified faces is 1229 ± 252 ms whereas the incorrectly classified faces have a mean reaction time of 1769 ± 304 ms, showing that the reaction times increase with the subject's classification error. Reaction times are also shown to decrease with increasing confidence, both for the correct and incorrect classifications. Classification errors, reaction times and confidence ratings are then correlated to concepts of machine learning such as the separating hyperplane obtained when considering Support Vector Machines, Relevance Vector Machines, boosted Prototype and K-means Learners. Elements near the separating hyperplane are found to be classified with more errors than those away from it. In addition, the subject's confidence increases when moving away from the hyperplane. A preliminary analysis on the available small number of subjects indicates that K-means classification seems to reflect the subject's classification behavior best. The above learners are then used to generate "special" elements, or representations, of the low-dimensional database according to the labels given by the subject. A memory experiment follows where the representations are shown together with faces seen or unseen during the classification experiment. This experiment aims to assess the representations by investigating whether some representations, or special elements, are classified as "seen before" despite that they never appeared in the classification experiment, possibly hinting at their use during human classification.
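A hedged sketch of the analysis pipeline implied above: project the faces with PCA, fit a linear classifier, and use the signed decision value (proportional to the distance from the separating hyperplane) as the quantity correlated with errors, reaction times and confidence (scikit-learn; load_faces is a hypothetical data-loading helper, and the component count is illustrative):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    # X: (n_faces, n_pixels) preprocessed, flattened face images; y: 0 = female, 1 = male
    X, y = load_faces()  # hypothetical helper, stands in for the MPI face database

    pca = PCA(n_components=20).fit(X)
    Z = pca.transform(X)                 # low-dimensional face representation

    clf = SVC(kernel="linear").fit(Z, y)
    decision = clf.decision_function(Z)  # signed value, proportional to hyperplane distance
    # small |decision| = face close to the hyperplane; the study relates such faces to
    # higher error rates, longer reaction times and lower confidence ratings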

ei

PDF Web [BibTex]

A Representation of Complex Movement Sequences Based on Hierarchical Spatio-Temporal Correspondence for Imitation Learning in Robotics

Ilg, W., Bakir, GH., Franz, MO., Giese, M.

6, pages: 74, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Imitation learning of complex movements has become a popular topic in neuroscience, as well as in robotics. A number of conceptual as well as practical problems are still unsolved. One example is the determination of the aspects of movements which are relevant for imitation. Problems concerning the movement representation are twofold: (1) The movement characteristics of observed movements have to be transferred from the perceptual level to the level of generated actions. (2) Continuous spaces of movements with variable styles have to be approximated based on a limited number of learned example sequences. Therefore, one has to use a representation with a high generalisation capability. We present methods for the representation of complex movement sequences that address these questions in the context of the imitation learning of writing movements using a robot arm with human-like geometry. For the transfer of complex movements from perception to action we exploit a learning-based method that represents complex action sequences by linear combination of prototypical examples (Ilg and Giese, BMCV 2002). The method of hierarchical spatio-temporal morphable models (HSTMM) decomposes action sequences automatically into movement primitives. These primitives are modeled by linear combinations of a small number of learned example trajectories. The learned spatio-temporal models are suitable for the analysis and synthesis of long action sequences, which consist of movement primitives with varying style parameters. The proposed method is illustrated by imitation learning of complex writing movements. Human trajectories were recorded using a commercial motion capture system (VICON). In the first step the recorded writing sequences are decomposed into movement primitives. These movement primitives can be analyzed and changed in style by defining linear combinations of prototypes with different linear weight combinations. Our system can imitate writing movements of different actors, synthesize new writing styles and can even exaggerate the writing movements of individual actors. Words and writing movements of the robot look very natural, and closely match the natural styles. These preliminary results make the proposed method promising for further applications in learning-based robotics. In this poster we focus on the acquisition of the movement representation (identification and segmentation of movement primitives, generation of new writing styles by spatio-temporal morphing). The transfer of the generated writing movements to the robot considering the given kinematic and dynamic constraints is discussed in Bakir et al. (this volume).
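Schematically, each movement primitive generated by such a model is a linear combination of time-warped prototype trajectories (notation assumed, not taken from the poster):

    \xi_{\mathrm{new}}(t) = \sum_i w_i\, \xi_i\big( \phi_i(t) \big), \qquad \sum_i w_i = 1,

where the \xi_i are the learned example trajectories, the \phi_i are the temporal alignments established by the spatio-temporal correspondence computation, and the weights w_i act as continuous style parameters (imitation, interpolation between actors, or exaggeration for weights outside [0, 1]).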

ei

PDF Web [BibTex]

Models of contrast transfer as a function of presentation time and spatial frequency.

Wichmann, F.

2003 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Using standard 2AFC contrast discrimination experiments conducted using a carefully calibrated display we previously showed that the shape of the threshold versus (pedestal) contrast (TvC) curve changes with presentation time and the performance level defined as threshold (Wichmann, 1999; Wichmann & Henning, 1999). Additional experiments looked at the change of the TvC curve with spatial frequency (Bird, Henning & Wichmann, 2002), and at how to constrain the parameters of models of contrast processing (Wichmann, 2002). Here I report modelling results both across spatial frequency and presentation time. An extensive model-selection exploration was performed using Bayesian confidence regions for the fitted parameters as well as cross-validation methods. Bird, C.M., G.B. Henning and F.A. Wichmann (2002). Contrast discrimination with sinusoidal gratings of different spatial frequency. Journal of the Optical Society of America A, 19, 1267-1273. Wichmann, F.A. (1999). Some aspects of modelling human spatial vision: contrast discrimination. Unpublished doctoral dissertation, The University of Oxford. Wichmann, F.A. & Henning, G.B. (1999). Implications of the Pedestal Effect for Models of Contrast-Processing and Gain-Control. OSA Annual Meeting Program, 62. Wichmann, F.A. (2002). Modelling Contrast Transfer in Spatial Vision [Abstract]. Journal of Vision, 2, 7a.

ei

[BibTex]