2016

Autofocusing-based correction of B0 fluctuation-induced ghosting

Loktyushin, A., Ehses, P., Schölkopf, B., Scheffler, K.

24th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM), May 2016 (poster)

link (url) [BibTex]

Distinct adaptation to abrupt and gradual torque perturbations with a multi-joint exoskeleton robot

Oh, Y., Sutanto, G., Mistry, M., Schweighofer, N., Schaal, S.

Abstracts of Neural Control of Movement Conference (NCM 2016), Montego Bay, Jamaica, April 2016 (poster)

[BibTex]

Novel Random Forest based framework enables the segmentation of cerebral ischemic regions using multiparametric MRI

Katiyar, P., Castaneda, S., Patzwaldt, K., Russo, F., Poli, S., Ziemann, U., Disselhorst, J. A., Pichler, B. J.

European Molecular Imaging Meeting, 2016 (poster)

link (url) [BibTex]

PGO wave-triggered functional MRI: mapping the networks underlying synaptic consolidation

Logothetis, N. K., Murayama, Y., Ramirez-Villegas, J. F., Besserve, M., Evrard, H.

47th Annual Meeting of the Society for Neuroscience (Neuroscience), 2016 (poster)

[BibTex]

Multiparametric Imaging of Ischemic Stroke using [89Zr]-Desferal-EPO-PET/MRI in combination with Gaussian Mixture Modeling enables unsupervised lesion identification

Castaneda, S., Katiyar, P., Russo, F., Maurer, A., Patzwaldt, K., Poli, S., Calaminus, C., Disselhorst, J. A., Ziemann, U., Pichler, B. J.

European Molecular Imaging Meeting, 2016 (poster)

link (url) [BibTex]

Statistical source separation of rhythmic LFP patterns during sharp wave ripples in the macaque hippocampus

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

47th Annual Meeting of the Society for Neuroscience (Neuroscience), 2016 (poster)

[BibTex]

Hippocampal neural events predict ongoing brain-wide BOLD activity

Besserve, M., Logothetis, N. K.

47th Annual Meeting of the Society for Neuroscience (Neuroscience), 2016 (poster)

[BibTex]

2015

Diversity of sharp wave-ripples in the CA1 of the macaque hippocampus and their brain-wide signatures

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015), October 2015 (poster)

link (url) [BibTex]

Retrospective rigid motion correction of undersampled MRI data

Loktyushin, A., Babayeva, M., Gallichan, D., Krueger, G., Scheffler, K., Kober, T.

23rd Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine, ISMRM, June 2015 (poster)

[BibTex]

Improving Quantitative Susceptibility and R2* Mapping by Applying Retrospective Motion Correction

Feng, X., Loktyushin, A., Deistung, A., Reichenbach, J. R.

23rd Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine, ISMRM, June 2015 (poster)

[BibTex]

Increasing the sensitivity of Kepler to Earth-like exoplanets

Foreman-Mackey, D., Hogg, D., Schölkopf, B., Wang, D.

Workshop: 225th American Astronomical Society Meeting 2015, pages: 105.01D, 2015 (poster)

Web link (url) [BibTex]

Calibrating the pixel-level Kepler imaging data with a causal data-driven model

Wang, D., Foreman-Mackey, D., Hogg, D., Schölkopf, B.

Workshop: 225th American Astronomical Society Meeting 2015, pages: 258.08, 2015 (poster)

Web link (url) [BibTex]

Assessment of tumor heterogeneity using unsupervised graph based clustering of multi-modality imaging data

Katiyar, P., Divine, M. R., Pichler, B. J., Disselhorst, J. A.

European Molecular Imaging Meeting, 2015 (poster)

[BibTex]

Disparity estimation from a generative light field model

Köhler, R., Schölkopf, B., Hirsch, M.

IEEE International Conference on Computer Vision (ICCV 2015), Workshop on Inverse Rendering, 2015, Note: This work has been presented as a poster and is not included in the workshop proceedings. (poster)

[BibTex]

2007

MR-Based PET Attenuation Correction: Method and Validation

Hofmann, M., Steinke, F., Scheel, V., Charpiat, G., Brady, M., Schölkopf, B., Pichler, B.

2007 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC 2007), 2007(M16-6):1-2, November 2007 (poster)

Abstract
PET/MR combines the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET). For quantitative PET information, correction of tissue photon attenuation is mandatory. Usually in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating source, or from the CT scan in the case of combined PET/CT. In the case of a PET/MR scanner, there is insufficient space for the rotating source, and ideally one would want to calculate the attenuation map from the MR image instead. Since MR images provide information about the proton density of the different tissue types rather than about photon attenuation, it is not trivial to use this data for PET attenuation correction. We present a method for predicting the PET attenuation map from a given MR image, using a combination of atlas registration and recognition of local patterns. Using leave-one-out cross-validation we show on a database of 16 MR-CT image pairs that our method reliably allows estimating the CT image from the MR image. Subsequently, as in PET/CT, the PET attenuation map can be predicted from the CT image. On an additional dataset of MR/CT/PET triplets we quantitatively validate that our approach allows PET quantification with an error that is smaller than what would be clinically significant. We demonstrate our approach on T1-weighted human brain scans. However, the presented methods are more general, and current research focuses on applying the established methods to human whole-body PET/MRI applications.

PDF PDF [BibTex]

Estimating receptive fields without spike-triggering

Macke, J., Zeck, G., Bethge, M.

37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007), 37(768.1):1, November 2007 (poster)

Web [BibTex]

Evaluation of Deformable Registration Methods for MR-CT Atlas Alignment

Scheel, V., Hofmann, M., Rehfeld, N., Judenhofer, M., Claussen, C., Pichler, B.

2007 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS-MIC 2007), 2007(M13-121):1, November 2007 (poster)

Abstract
Deformable registration methods are essential for multimodality imaging. Many different methods exist, but due to the complexity of the deformed images a direct comparison of the methods is difficult. One particular application that requires high accuracy registration of MR-CT images is atlas-based attenuation correction for PET/MR. We compare four deformable registration algorithms for 3D image data included in the Open Source "National Library of Medicine Insight Segmentation and Registration Toolkit" (ITK). An interactive landmark-based registration using MiraView (Siemens) has been used as gold standard. The automatic algorithms provided by ITK are based on the metrics Mattes mutual information as well as on normalized mutual information. The transformations are calculated by interpolating over a uniform B-Spline grid lying over the image to be warped. The algorithms were tested on head images from 10 subjects. We implemented a measure which segments head interior bone and air based on the CT images and low intensity classes of the corresponding MRI images. The segmentation of bone is performed by individually calculating the lowest Hounsfield unit threshold for each CT image. The comparison is made by quantifying the number of overlapping voxels of the remaining structures. We show that the algorithms provided by ITK achieve similar or better accuracy than the time-consuming interactive landmark-based registration. Thus, ITK provides an ideal platform to generate accurately fused datasets from different modalities, required for example for building training datasets for atlas-based attenuation correction.

PDF [BibTex]

A time/frequency decomposition of information transmission by LFPs and spikes in the primary visual cortex

Belitski, A., Gretton, A., Magri, C., Murayama, Y., Montemurro, M., Logothetis, N., Panzeri, S.

37th Annual Meeting of the Society for Neuroscience (Neuroscience 2007), 37, pages: 1, November 2007 (poster)

Web [BibTex]

Mining expression-dependent modules in the human interaction network

Georgii, E., Dietmann, S., Uno, T., Pagel, P., Tsuda, K.

BMC Bioinformatics, 8(Suppl. 8):S4, November 2007 (poster)

PDF DOI [BibTex]

A Hilbert Space Embedding for Distributions

Smola, A., Gretton, A., Song, L., Schölkopf, B.

Proceedings of the 10th International Conference on Discovery Science (DS 2007), 10, pages: 40-41, October 2007 (poster)

Abstract
While kernel methods are the basis of many popular techniques in supervised learning, they are less commonly used in testing, estimation, and analysis of probability distributions, where information theoretic approaches rule the roost. However, it becomes difficult to estimate mutual information or entropy if the data are high-dimensional.

PDF PDF DOI [BibTex]

Studying the effects of noise correlations on population coding using a sampling method

Ecker, A., Berens, P., Bethge, M., Logothetis, N., Tolias, A.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 21, September 2007 (poster)

PDF [BibTex]

Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

Berens, P., Bethge, M.

Neural Coding, Computation and Dynamics (NCCD 07), 1, pages: 19, September 2007 (poster)

Abstract
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from their poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach using a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data: the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.

PDF [BibTex]

Learning the Influence of Spatio-Temporal Variations in Local Image Structure on Visual Saliency

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

10th Tübinger Wahrnehmungskonferenz (TWK 2007), 10, pages: 1, July 2007 (poster)

Abstract
Computational models for bottom-up visual attention traditionally consist of a bank of Gabor-like or Difference-of-Gaussians filters and a nonlinear combination scheme which combines the filter responses into a real-valued saliency measure [1]. Recently it was shown that a standard machine learning algorithm can be used to derive a saliency model from human eye movement data with a very small number of additional assumptions. The learned model is much simpler than previous models, but nevertheless has state-of-the-art prediction performance [2]. A central result from this study is that DoG-like center-surround filters emerge as the unique solution to optimizing the predictivity of the model. Here we extend the learning method to the temporal domain. While the previous model [2] predicts visual saliency based on local pixel intensities in a static image, our model also takes into account temporal intensity variations. We find that the learned model responds strongly to temporal intensity changes occurring 200-250 ms before a saccade is initiated. This delay coincides with the typical saccadic latencies, indicating that the learning algorithm has extracted a meaningful statistic from the training data. In addition, we show that the model correctly predicts a significant proportion of human eye movements on previously unseen test data.

Web [BibTex]

Better Codes for the P300 Visual Speller

Biessmann, F., Hill, N., Farquhar, J., Schölkopf, B.

Göttingen Meeting of the German Neuroscience Society, 7, pages: 123, March 2007 (poster)

PDF [BibTex]

Do We Know What the Early Visual System Computes?

Bethge, M., Kayser, C.

31st Göttingen Neurobiology Conference, 31, pages: 352, March 2007 (poster)

Abstract
Decades of research provided much data and insight into the mechanisms of the early visual system. Currently, however, there is great controversy on whether these findings can provide us with a thorough functional understanding of what the early visual system does, or formulated differently, of what it computes. At the Society for Neuroscience meeting 2005 in Washington, a symposium was held on the question "Do we know what the early visual system does?", which was accompanied by a widely regarded publication in the Journal of Neuroscience. Yet, that discussion was rather specialized, as it predominantly addressed the question of how well neural responses in retina, LGN, and cortex can be predicted from noise stimuli, but did not emphasize the question of whether we understand what the function of these early visual areas is. Here we will concentrate on this neuro-computational aspect of vision. Experts from neurobiology, psychophysics and computational neuroscience will present studies which approach this question from different viewpoints and promote a critical discussion of whether we actually understand what early areas contribute to the processing and perception of visual information.

PDF [BibTex]

Implicit Wiener Series for Estimating Nonlinear Receptive Fields

Franz, MO., Macke, JH., Saleem, A., Schultz, SR.

31st Göttingen Neurobiology Conference, 31, pages: 1199, March 2007 (poster)

PDF [BibTex]

3D Reconstruction of Neural Circuits from Serial EM Images

Maack, N., Kapfer, C., Macke, J., Schölkopf, B., Denk, W., Borst, A.

31st Göttingen Neurobiology Conference, 31, pages: 1195, March 2007 (poster)

PDF [BibTex]

Identifying temporal population codes in the retina using canonical correlation analysis

Bethge, M., Macke, J., Gerwinn, S., Zeck, G.

31st Göttingen Neurobiology Conference, 31, pages: 359, March 2007 (poster)

PDF PDF [BibTex]

Bayesian Neural System identification: error bars, receptive fields and neural couplings

Gerwinn, S., Seeger, M., Zeck, G., Bethge, M.

31st Göttingen Neurobiology Conference, 31, pages: 360, March 2007 (poster)

PDF PDF [BibTex]

About the Triangle Inequality in Perceptual Spaces

Jäkel, F., Schölkopf, B., Wichmann, F.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 308, February 2007 (poster)

PDF Web [BibTex]

Center-surround filters emerge from optimizing predictivity in a free-viewing task

Kienzle, W., Wichmann, F., Schölkopf, B., Franz, M.

Proceedings of the Computational and Systems Neuroscience Meeting 2007 (COSYNE), 4, pages: 207, February 2007 (poster)

PDF Web [BibTex]

Nonlinear Receptive Field Analysis: Making Kernel Methods Interpretable

Kienzle, W., Macke, J., Wichmann, F., Schölkopf, B., Franz, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 16, February 2007 (poster)

PDF Web [BibTex]

Estimating Population Receptive Fields in Space and Time

Macke, J., Zeck, G., Bethge, M.

Computational and Systems Neuroscience Meeting 2007 (COSYNE 2007), 4, pages: 44, February 2007 (poster)

PDF Web [BibTex]

2001

Perception of Planar Shapes in Depth

Wichmann, F., Willems, B., Rosas, P., Wagemans, J.

Journal of Vision, 1(3):176, First Annual Meeting of the Vision Sciences Society (VSS), December 2001 (poster)

Abstract
We investigated the influence of the perceived 3D-orientation of planar elliptical shapes on the perception of the shapes themselves. Ellipses were projected onto the surface of a sphere and subjects were asked to indicate if the projected shapes looked as if they were a circle on the surface of the sphere. The image of the sphere was obtained from a real, (near) perfect sphere using a highly accurate digital camera (real sphere diameter 40 cm; camera-to-sphere distance 320 cm; for details see Willems et al., Perception 29, S96, 2000; Photometrics SenSys 400 digital camera with Rodenstock lens, 12-bit linear luminance resolution). Stimuli were presented monocularly on a carefully linearized Sony GDM-F500 monitor keeping the scene geometry as in the real case (sphere diameter on screen 8.2 cm; viewing distance 66 cm). Experiments were run in a darkened room using a viewing tube to minimize, as far as possible, extraneous monocular cues to depth. Three different methods were used to obtain subjects' estimates of 3D-shape: the method of adjustment, temporal 2-alternative forced choice (2AFC) and yes/no. Several results are noteworthy. First, mismatch between perceived and objective slant tended to decrease with increasing objective slant. Second, the variability of the settings, too, decreased with increasing objective slant. Finally, we comment on the results obtained using different psychophysical methods and compare our results to those obtained using a real sphere and binocular vision (Willems et al.).

Web DOI [BibTex]

Plaid maskers revisited: asymmetric plaids

Wichmann, F.

pages: 57, 4. Tübinger Wahrnehmungskonferenz (TWK), March 2001 (poster)

Abstract
A large number of psychophysical and physiological experiments suggest that luminance patterns are independently analysed in channels responding to different bands of spatial frequency. There are, however, interactions among stimuli falling well outside the usual estimates of channels' bandwidths. Derrington & Henning (1989) first reported that, in 2-AFC sinusoidal-grating detection, plaid maskers, whose components are oriented symmetrically about the signal orientation, cause a substantially larger threshold elevation than would be predicted from their sinusoidal constituents alone. Wichmann & Tollin (1997a,b) and Wichmann & Henning (1998) confirmed and extended the original findings, measuring masking as a function of presentation time and plaid mask contrast. Here I investigate masking using plaid patterns whose components are asymmetrically positioned about the signal orientation. Standard temporal 2-AFC pattern discrimination experiments were conducted using plaid patterns and oblique sinusoidal gratings as maskers, and horizontally orientated sinusoidal gratings as signals. Signal and maskers were always interleaved on the display (refresh rate 152 Hz). As in the case of the symmetrical plaid maskers, substantial masking was observed for many of the asymmetrical plaids. Masking is neither a straightforward function of the plaid's constituent sinusoidal components nor of the periodicity of the luminance beats between components. These results cause problems for the notion that, even for simple stimuli, detection and discrimination are based on the outputs of channels tuned to limited ranges of spatial frequency and orientation, even if a limited set of nonlinear interactions between these channels is allowed.

Web [BibTex]

The pedestal effect with a pulse train and its constituent sinusoids

Henning, G., Wichmann, F., Bird, C.

Twenty-Sixth Annual Interdisciplinary Conference, 2001 (poster)

Abstract
Curves showing "threshold" contrast for detecting a signal grating as a function of the contrast of a masking grating of the same orientation, spatial frequency, and phase show a characteristic improvement in performance at masker contrasts near the contrast threshold of the unmasked signal. Depending on the percentage of correct responses used to define the threshold, the best performance can be as much as a factor of three better than the unmasked threshold obtained in the absence of any masking grating. The result is called the pedestal effect (sometimes, the dipper function). We used a 2AFC procedure to measure the effect with harmonically related sinusoids ranging from 2 to 16 c/deg - all with maskers of the same orientation, spatial frequency and phase - and with masker contrasts ranging from 0 to 50%. The curves for different spatial frequencies are identical if both the vertical axis (showing the threshold signal contrast) and the horizontal axis (showing the masker contrast) are scaled by the threshold contrast of the signal obtained with no masker. Further, a pulse train with a fundamental frequency of 2 c/deg produces a curve that is indistinguishable from that of a 2-c/deg sinusoid despite the fact that at higher masker contrasts, the pulse train contains at least 8 components all of them equally detectable. The effect of adding 1-D spatial noise is also discussed.

[BibTex]

Modeling the Dynamics of Individual Neurons of the Stomatogastric Networks with Support Vector Machines

Frontzek, T., Gutzen, C., Lal, TN., Heinzel, H-G., Eckmiller, R., Böhm, H.

Abstract Proceedings of the 6th International Congress of Neuroethology (ICN'2001) Bonn, abstract 404, 2001 (poster)

Abstract
In small rhythmically active networks, the timing of individual neurons is crucial for generating different spatio-temporal motor patterns. Switching of one neuron between different rhythms can cause transitions between behavioral modes. In order to understand the dynamics of rhythmically active neurons, we analyzed the oscillatory membrane potential of a pacemaker neuron and used different neural network models to predict the dynamics of its time series. In a first step we trained conventional RBF networks and Support Vector Machines (SVMs) using Gaussian kernels with intracellular recordings of the pyloric dilator neuron in the Australian crayfish, Cherax destructor albidus. As a rule, SVMs were able to learn the nonlinear dynamics of pyloric neurons faster (e.g. 15 s) than RBF networks (e.g. 309 s) under the same hardware conditions. After training, SVMs performed better at iterated one-step-ahead prediction of the time series of the pyloric dilator neuron with regard to test error and error sum. The test error decreased with increasing number of support vectors. The best SVM used 196 support vectors and produced a test error of 0.04622, as opposed to 0.07295 for the best RBF network using 26 RBF neurons. In the pacemaker neuron PD, the time point at which the membrane potential crosses the threshold for generating its oscillatory peak is most important for determining the test error. Interestingly, SVMs are especially better at predicting this important part of the membrane potential, which is superimposed with various synaptic inputs that drive the membrane potential to its threshold.

[BibTex]

2000

Contrast discrimination using periodic pulse trains

Wichmann, F., Henning, G.

pages: 74, 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Previous research (Wichmann et al. 1998; Wichmann, 1999; Henning and Wichmann, 1999) has demonstrated the importance of high contrasts to distinguish between alternative models of contrast discrimination. However, the modulation transfer function of the eye imposes large contrast losses on stimuli, particularly for stimuli of high spatial frequency, making high retinal contrasts difficult to obtain using sinusoidal gratings. Standard 2AFC contrast discrimination experiments were conducted using periodic pulse trains as stimuli. Given our Mitsubishi display we achieve stimuli with up to 160% contrast at the fundamental frequency. The shape of the threshold versus (pedestal) contrast (TvC) curve using pulse trains shows the characteristic dipper shape, i.e. contrast discrimination is sometimes “easier” than detection. The rising part of the TvC function has the same slope as that measured for contrast discrimination using sinusoidal gratings of the same frequency as the fundamental. Periodic pulse trains offer the possibility to explore the visual system’s properties using high retinal contrasts. Thus they might prove useful in tasks other than contrast discrimination. Second, at least for high spatial frequencies (8 c/deg) it appears that contrast discrimination using sinusoids and periodic pulse trains results in virtually identical TvC functions, indicating a lack of probability summation. Further implications of these results are discussed.

Web [BibTex]

Subliminale Darbietung verkehrsrelevanter Information in Kraftfahrzeugen

Staedtgen, M., Hahn, S., Franz, MO., Spitzer, M.

pages: 98, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot), 3. Tübinger Wahrnehmungskonferenz (TWK), February 2000 (poster)

Abstract
Modern image-processing technology makes it possible to automatically detect certain critical traffic situations in motor vehicles and to warn or inform the driver. One problem is how to present the results in a way that burdens the driver as little as possible and does not divert attention from the traffic through additional warning lights or acoustic signals. In a series of experiments we therefore investigated whether subliminally presented (i.e., not consciously perceived) traffic-relevant information affects behavior and can be used to convey information to the driver. In a semantic priming experiment using a lexical decision task, we showed that traffic-related words are processed faster when a related image of a traffic sign has been presented subliminally beforehand. A speed-up was also obtained with parafoveal presentation of the subliminal stimuli. In a visual search task, traffic signs in images of real traffic scenes were detected faster when the image of the traffic sign had been presented subliminally beforehand. In both experiments the presentation time of the cues was 17 ms, and conscious perception was additionally prevented by forward and backward masking. These laboratory studies showed that subliminally presented stimuli can also speed up information processing in the context of road traffic. In a third experiment, the effect of a subliminal cue on braking reaction time was examined in a real driving test. Participants (n=17) were instructed to brake as quickly as possible when the brake lights of a vehicle driving 12-15 m ahead lit up.
In 50 of a total of 100 trials, a subliminal stimulus (two red dots, one centimeter in diameter and ten centimeters apart) was presented 150 ms before the brake lights came on. The stimulus was shown on a TFT-LCD display integrated into the car in place of the speedometer. Compared to responses without the subliminal stimulus, reaction time was significantly shortened by 51 ms. The experiments described here showed that subliminal presentation of traffic-relevant information can also affect behavior in motor vehicles. In the future, combining online image processing in the vehicle with subliminal presentation of the results could improve traffic safety and comfort.

Web [BibTex]

1999

Unexpected and anticipated pain: identification of specific brain activations by correlation with reference functions derived from conditioning theory

Ploghaus, A., Clare, S., Wichmann, F., Tracey, I.

29, 29th Annual Meeting of the Society for Neuroscience (Neuroscience), October 1999 (poster)

[BibTex]

Single-class Support Vector Machines

Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J.

Dagstuhl-Seminar on Unsupervised Learning, pages: 19-20, (Editors: J. Buhmann, W. Maass, H. Ritter and N. Tishby), 1999 (poster)

[BibTex]

Pedestal effects with periodic pulse trains

Henning, G., Wichmann, F.

Perception, 28, pages: S137, 1999 (poster)

Abstract
It is important to know for theoretical reasons how performance varies with stimulus contrast. But, for objects on CRT displays, retinal contrast is limited by the linear range of the display and the modulation transfer function of the eye. For example, with an 8 c/deg sinusoidal grating at 90% contrast, the contrast of the retinal image is barely 45%; more retinal contrast is required, however, to discriminate among theories of contrast discrimination (Wichmann, Henning and Ploghaus, 1998). The stimulus with the greatest contrast at any spatial-frequency component is a periodic pulse train, which has 200% contrast at every harmonic. Such a waveform cannot, of course, be produced; the best we can do with our Mitsubishi display provides a contrast of 150% at an 8-c/deg fundamental, thus producing a retinal image with about 75% contrast. The penalty of using this stimulus is that the 2nd harmonic of the retinal image also has high contrast (with an emmetropic eye, more than 60% of the contrast of the 8-c/deg fundamental) and the mean luminance is not large (24.5 cd/m2 on our display). We have used standard 2-AFC experiments to measure the detectability of an 8-c/deg pulse train against the background of an identical pulse train of different contrasts. An unusually large improvement in detectability was measured, the pedestal effect or "dipper," and the dipper was unusually broad. The implications of these results will be discussed.

[BibTex]

Implications of the pedestal effect for models of contrast-processing and gain-control

Wichmann, F., Henning, G.

OSA Conference Program, pages: 62, 1999 (poster)

Abstract
Understanding contrast processing is essential for understanding spatial vision. Pedestal contrast systematically affects slopes of functions relating 2-AFC contrast discrimination performance to pedestal contrast. The slopes provide crucial information because only full sets of data allow discrimination among contrast-processing and gain-control models. Issues surrounding Weber's law will also be discussed.

[BibTex]