

2018


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration (4 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
For simple and realistic vibrotactile feedback, 3D accelerations from real contact interactions are usually rendered using a single-axis vibration actuator; this dimensional reduction can be performed in many ways. This demonstration implements a real-time conversion system that simultaneously measures 3D accelerations and renders corresponding 1D vibrations using a two-pen interface. In the demonstration, a user freely interacts with various objects using an In-Pen that contains a 3-axis accelerometer. The captured accelerations are converted to a single-axis signal, and an Out-Pen renders the reduced signal for the user to feel. We prepared seven conversion methods, ranging from simple use of a single axis to principal component analysis (PCA), so that users can compare the performance of each method in this demonstration.
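The PCA-based conversion mentioned in the abstract can be sketched minimally as projecting the 3-axis samples onto their first principal component; this is an illustrative sketch under that reading, not the authors' implementation, and the function name is an assumption.

```python
import numpy as np

def reduce_3d_to_1d_pca(acc):
    """Project N x 3 acceleration samples onto their first principal
    component, producing a single-axis vibration signal of shape (N,)."""
    centered = acc - acc.mean(axis=0)
    # Eigen-decomposition of the 3 x 3 covariance of the samples
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    principal = eigvecs[:, np.argmax(eigvals)]  # direction of largest variance
    return centered @ principal
```

In a real-time system such a projection would presumably be computed over short sliding windows of accelerometer samples rather than over a whole recording.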

Project Page [BibTex]


A Large-Scale Fabric-Based Tactile Sensor Using Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

Hands-on demonstration (3 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
Large-scale tactile sensing is important for household robots and human-robot interaction because contacts can occur all over a robot’s body surface. This paper presents a new fabric-based tactile sensor that is straightforward to manufacture and can cover a large area. The tactile sensor is made of conductive and non-conductive fabric layers, and the electrodes are stitched with conductive thread, so the resulting device is flexible and stretchable. The sensor utilizes internal array electrodes and a reconstruction method called electrical resistance tomography (ERT) to achieve high spatial resolution with a small number of electrodes. Experiments with the developed sensor show that just 16 electrodes can accurately estimate single and multiple contacts over a square that measures 20 cm by 20 cm.

Project Page [BibTex]


Statistical Modelling of Fingertip Deformations and Contact Forces during Tactile Interaction

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Extended abstract presented at the Hand, Brain and Technology conference (HBT), Ascona, Switzerland, August 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger’s deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin’s compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user’s finger from lower resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observing the surface mesh deformations, as previously shown for body tissues.

Project Page [BibTex]


A machine from machines

Fischer, P.

Nature Physics, 14, pages: 1072–1073, July 2018 (misc)

Abstract
Building spinning microrotors that self-assemble and synchronize to form a gear sounds like an impossible feat. However, it has now been achieved using only a single type of building block -- a colloid that self-propels.

link (url) DOI [BibTex]


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.

Project Page [BibTex]


Haptipedia: Exploring Haptic Device Design Through Interactive Visualizations

Seifi, H., Fazlollahi, F., Park, G., Kuchenbecker, K. J., MacLean, K. E.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
How many haptic devices have been proposed in the last 30 years? How can we leverage this rich source of design knowledge to inspire future innovations? Our goal is to make historical haptic invention accessible through interactive visualization of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. In this demonstration, participants can explore Haptipedia’s growing library of grounded force feedback devices through several prototype visualizations, interact with 3D simulations of the device mechanisms and movements, and tell us about the attributes and devices that could make Haptipedia a useful resource for the haptic design community.

Project Page [BibTex]


Delivering 6-DOF Fingertip Tactile Cues

Young, E., Kuchenbecker, K. J.

Work-in-progress paper (5 pages) presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Project Page [BibTex]


Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the Robotics: Science and Systems Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, Pittsburgh, USA, June 2018 (misc)

Abstract
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.

link (url) Project Page [BibTex]


Soft Multi-Axis Boundary-Electrode Tactile Sensors for Whole-Body Robotic Skin

Lee, H., Kim, J., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the RSS Pioneers Workshop, Pittsburgh, USA, June 2018 (misc)

Project Page [BibTex]


Poster Abstract: Toward Fast Closed-loop Control over Multi-hop Low-power Wireless Networks

Mager, F., Baumann, D., Trimpe, S., Zimmerling, M.

Proceedings of the 17th ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN), pages: 158-159, Porto, Portugal, April 2018 (poster)

DOI Project Page [BibTex]


Arm-Worn Tactile Displays

Kuchenbecker, K. J.

Cross-Cutting Challenge Interactive Discussion presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Fingertips and hands captivate the attention of most haptic interface designers, but humans can feel touch stimuli across the entire body surface. Trying to create devices that both can be worn and can deliver good haptic sensations raises challenges that rarely arise in other contexts. Most notably, tactile cues such as vibration, tapping, and squeezing are far simpler to implement in wearable systems than kinesthetic haptic feedback. This interactive discussion will present a variety of relevant projects to which I have contributed, attempting to pull out common themes and ideas for the future.

[BibTex]


Haptipedia: An Expert-Sourced Interactive Device Visualization for Haptic Designers

Seifi, H., MacLean, K. E., Kuchenbecker, K. J., Park, G.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Much of three decades of haptic device invention is effectively lost to today’s designers: dispersion across time, region, and discipline imposes an incalculable drag on innovation in this field. Our goal is to make historical haptic invention accessible through interactive navigation of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. To build this open resource, we will systematically mine the literature and engage the haptics community for expert annotation. In a multi-year broad-based initiative, we will empirically derive salient attributes of haptic devices, design an interactive visualization tool where device creators and repurposers can efficiently explore and search Haptipedia, and establish methods and tools to manually and algorithmically collect data from the haptics literature and our community of experts. This paper outlines progress in compiling an initial corpus of grounded force-feedback devices and their attributes, and it presents a concept sketch of the interface we envision.

Project Page [BibTex]


Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human-Robot Interaction

Fitter, N. T., Mohan, M., Kuchenbecker, K. J., Johnson, M. J.

Workshop paper (6 pages) presented at the HRI Workshop on Personal Robots for Exercising and Coaching, Chicago, USA, March 2018 (misc)

Abstract
The worldwide population of older adults is steadily increasing and will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active and engaged while living at home. We developed eight human-robot exercise games for the Baxter Research Robot with the guidance of experts in game design, therapy, and rehabilitation. After extensive iteration, these games were employed in a user study that tested their viability with 20 younger and 20 older adult users. All participants were willing to enter Baxter’s workspace and physically interact with the robot. User trust and confidence in Baxter increased significantly between pre- and post-experiment assessments, and one individual from the target user population supplied us with abundant positive feedback about her experience. The preliminary results presented in this paper indicate potential for the use of two-armed human-scale robots for social-physical exercise interaction.

link (url) Project Page [BibTex]


Representation of sensory uncertainty in macaque visual cortex

Goris, R., Henaff, O., Meding, K.

Computational and Systems Neuroscience (COSYNE) 2018, March 2018 (poster)

[BibTex]


Emotionally Supporting Humans Through Robot Hugs

Block, A. E., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the HRI Pioneers Workshop, Chicago, USA, March 2018 (misc)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, we want to enable robots to safely hug humans. This research strives to create and study a high fidelity robotic system that provides emotional support to people through hugs. This paper outlines our previous work evaluating human responses to a prototype’s physical and behavioral characteristics, and then it lays out our ongoing and future work.

link (url) DOI Project Page [BibTex]


Towards a Statistical Model of Fingertip Contact Deformations from 4D Data

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction even though this knowledge is essential to control wearable finger devices and deliver realistic tactile feedback. This study explores a framework for four-dimensional scanning and modeling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution. The results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion of about 0.2 cm and proximal/distal bending of about 30°, deformations that cannot be captured by imaging of the contact area alone. This project constitutes a first step towards an accurate statistical model of the finger’s behavior during haptic interaction.

link (url) Project Page [BibTex]


Can Humans Infer Haptic Surface Properties from Images?

Burka, A., Kuchenbecker, K. J.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Human children typically experience their surroundings both visually and haptically, providing ample opportunities to learn rich cross-sensory associations. To thrive in human environments and interact with the real world, robots also need to build models of these cross-sensory associations; current advances in machine learning should make it possible to infer models from large amounts of data. We previously built a visuo-haptic sensing device, the Proton Pack, and are using it to collect a large database of matched multimodal data from tool-surface interactions. As a benchmark to compare with machine learning performance, we conducted a human subject study (n = 84) on estimating haptic surface properties (here: hardness, roughness, friction, and warmness) from images. Using a 100-surface subset of our database, we showed images to study participants and collected 5635 ratings of the four haptic properties, which we compared with ratings made by the Proton Pack operator and with physical data recorded using motion, force, and vibration sensors. Preliminary results indicate weak correlation between participant and operator ratings, but potential for matching up certain human ratings (particularly hardness and roughness) with features from the literature.

Project Page [BibTex]


Co-Registration – Simultaneous Alignment and Modeling of Articulated 3D Shapes

Black, M., Hirshberg, D., Loper, M., Rachlin, E., Weiss, A.

February 2018, U.S. Patent 9,898,848 (misc)

Abstract
The present application refers to a method, a model generation unit, and a computer program (product) for generating trained models (M) of moving persons, based on physically measured person scan data (S). The approach is based on a common template (T) for the respective person and on the measured person scan data (S) in different shapes and different poses. Scan data are measured with a 3D laser scanner. A generic person model is used for co-registering a set of person scan data (S), aligning the template (T) to the set of person scans (S) while simultaneously training the generic person model to become a trained person model (M) by constraining the generic person model to be scan-specific, person-specific, and pose-specific, and providing the trained model (M) based on the co-registration of the measured scan data (S).


text [BibTex]


Die kybernetische Revolution

Schölkopf, B.

Süddeutsche Zeitung, 15 March 2018 (misc)

link (url) [BibTex]


Generalized phase locking analysis of electrophysiology data

Safavi, S., Panagiotaropoulos, T., Kapoor, V., Logothetis, N. K., Besserve, M.

7th AREADNE Conference on Research in Encoding and Decoding of Neural Ensembles, 2018 (poster)

link (url) Project Page [BibTex]


Photorealistic Video Super Resolution

Pérez-Pellitero, E., Sajjadi, M. S. M., Hirsch, M., Schölkopf, B.

Workshop and Challenge on Perceptual Image Restoration and Manipulation (PIRM) at the 15th European Conference on Computer Vision (ECCV), 2018 (poster)

[BibTex]


Retinal image quality of the human eye across the visual field

Meding, K., Hirsch, M., Wichmann, F. A.

14th Biannual Conference of the German Society for Cognitive Science (KOGWIS 2018), 2018 (poster)

[BibTex]


Emission and propagation of multi-dimensional spin waves in anisotropic spin textures

Sluka, V., Schneider, T., Gallardo, R. A., Kakay, A., Weigand, M., Warnatz, T., Mattheis, R., Roldan-Molina, A., Landeros, P., Tiberkevich, V., Slavin, A., Schütz, G., Erbe, A., Deac, A., Lindner, J., Raabe, J., Fassbender, J., Wintz, S.

2018 (misc)

link (url) [BibTex]


Thermal skyrmion diffusion applied in probabilistic computing

Zázvorka, J., Jakobs, F., Heinze, D., Keil, N., Kromin, S., Jaiswal, S., Litzius, K., Jakob, G., Virnau, P., Pinna, D., Everschor-Sitte, K., Donges, A., Nowak, U., Kläui, M.

2018 (misc)

link (url) [BibTex]

2014


Local Gaussian Regression

Meier, F., Hennig, P., Schaal, S.

arXiv preprint, March 2014, clmc (misc)

Abstract
Locally weighted regression was created as a nonparametric learning method that is computationally efficient, can learn from very large amounts of data, and can add data incrementally. An interesting feature of locally weighted regression is that it can work with ...

Web link (url) [BibTex]


Fibrillar structures to reduce viscous drag on aerodynamic and hydrodynamic wall surfaces

Castillo, L., Aksak, B., Sitti, M.

March 2014, US Patent App. 14/774,767 (misc)

[BibTex]


The design of microfibers with mushroom-shaped tips for optimal adhesion

Sitti, M., Aksak, B.

February 2014, US Patent App. 14/766,561 (misc)

[BibTex]


Dynamical source analysis of hippocampal sharp-wave ripple episodes

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

Bernstein Conference, 2014 (poster)

DOI [BibTex]


FID-guided retrospective motion correction based on autofocusing

Babayeva, M., Loktyushin, A., Kober, T., Granziera, C., Nickisch, H., Gruetter, R., Krueger, G.

Joint Annual Meeting ISMRM-ESMRMB, Milano, Italy, 2014 (poster)

[BibTex]


Cluster analysis of sharp-wave ripple field potential signatures in the macaque hippocampus

Ramirez-Villegas, J. F., Logothetis, N. K., Besserve, M.

Computational and Systems Neuroscience Meeting (COSYNE), 2014 (poster)

[BibTex]

2011


Spatiotemporal mapping of rhythmic activity in the inferior convexity of the macaque prefrontal cortex

Panagiotaropoulos, T., Besserve, M., Crocker, B., Kapoor, V., Tolias, A., Panzeri, S., Logothetis, N.

41(239.15), 41st Annual Meeting of the Society for Neuroscience (Neuroscience), November 2011 (poster)

Abstract
The inferior convexity of the macaque prefrontal cortex (icPFC) is known to be involved in higher-order processing of sensory information, mediating stimulus selection, attention, and working memory. Until now, the vast majority of electrophysiological investigations of the icPFC employed single-electrode recordings. As a result, relatively little is known about the spatiotemporal structure of neuronal activity in this cortical area. Here we study in detail the spatiotemporal properties of local field potentials (LFPs) in the icPFC using multi-electrode recordings during anesthesia. We computed the LFP-LFP coherence as a function of frequency for thousands of pairs of simultaneously recorded sites anterior to the arcuate and inferior to the principal sulcus. We observed two distinct peaks of coherent oscillatory activity between approximately 4-10 and 15-25 Hz. We then quantified the instantaneous phase of these frequency bands using the Hilbert transform and found robust phase gradients across recording sites. The dependency of the phase on the spatial location reflects the existence of traveling waves of electrical activity in the icPFC. The dominant axis of these traveling waves roughly followed the ventral-dorsal plane. Preliminary results show that repeated visual stimulation with a 10 s movie had no dramatic effect on the spatial structure of the traveling waves. Traveling waves of electrical activity in the icPFC could reflect highly organized cortical processing in this area of prefrontal cortex.

Web [BibTex]


Evaluation and Optimization of MR-Based Attenuation Correction Methods in Combined Brain PET/MR

Mantlik, F., Hofmann, M., Bezrukov, I., Schmidt, H., Kolb, A., Beyer, T., Reimold, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-96), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
Combined PET/MR provides simultaneous molecular and functional information in an anatomical context with unique soft tissue contrast. However, PET/MR does not support direct derivation of attenuation maps of objects and tissues within the measured PET field-of-view. Valid attenuation maps are required for quantitative PET imaging, specifically for scientific brain studies. Therefore, several methods have been proposed for MR-based attenuation correction (MR-AC). Last year, we performed an evaluation of different MR-AC methods, including simple MR thresholding, atlas- and machine learning-based MR-AC. CT-based AC served as gold standard reference. RoIs from 2 anatomic brain atlases with different levels of detail were used for evaluation of correction accuracy. We now extend our evaluation of different MR-AC methods by using an enlarged dataset of 23 patients from the integrated BrainPET/MR (Siemens Healthcare). Further, we analyze options for improving the MR-AC performance in terms of speed and accuracy. Finally, we assess the impact of ignoring BrainPET positioning aids during the course of MR-AC. This extended study confirms the overall prediction accuracy evaluation results of the first evaluation in a larger patient population. Removing datasets affected by metal artifacts from the Atlas-Patch database helped to improve prediction accuracy, although the size of the database was reduced by one half. Significant improvement in prediction speed can be gained at a cost of only slightly reduced accuracy, while further optimizations are still possible.

Web [BibTex]


Atlas- and Pattern Recognition Based Attenuation Correction on Simultaneous Whole-Body PET/MR

Bezrukov, I., Schmidt, H., Mantlik, F., Schwenzer, N., Hofmann, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-116), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
With the recent availability of clinical whole-body PET/MRI, it is possible to evaluate and further develop MR-based attenuation correction methods using simultaneously acquired PET/MR data. We present first results for MRAC on patient data acquired on a fully integrated whole-body PET/MRI (Biograph mMR, Siemens) using our method that applies atlas registration and pattern recognition (ATPR) and compare them to the segmentation-based (SEG) method provided by the manufacturer. The ATPR method makes use of a database of previously aligned pairs of MR-CT volumes to predict attenuation values on a continuous scale. The robustness of the method in the presence of MR artifacts was improved by location- and size-based detection. Lesion-to-liver and lesion-to-blood ratios (LLR and LBR) were compared for both methods on 29 iso-contour ROIs in 4 patients. ATPR showed >20% higher LBR and LLR for ROIs in and >7% near osseous tissue. For ROIs in soft tissue, both methods yielded similar ratios with maximum differences <6%. For ROIs located within metal artifacts in the MR image, ATPR showed >190% higher LLR and LBR than SEG, where ratios <0.1 occurred. For lesions in the neighborhood of artifacts, both ratios were >15% higher for ATPR. If artifacts in MR volumes caused by metal implants are not accounted for in the computation of attenuation maps, they can lead to a strong decrease of lesion-to-background ratios, even to the disappearance of hot spots. Metal implants are likely to occur in the patient collective receiving combined PET/MR scans; of our first 10 patients, 3 had metal implants. Our method is currently able to account for artifacts in the pelvis caused by prostheses. The ability of the ATPR method to account for bone leads to a significant increase of LLR and LBR in osseous tissue, which supports our previous evaluations with combined PET/CT and PET/MR data. For lesions within soft tissue, lesion-to-background ratios of ATPR and SEG were comparable.

Web [BibTex]


Retrospective blind motion correction of MR images

Loktyushin, A., Nickisch, H., Pohmann, R.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):498, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
We present a retrospective method that significantly reduces ghosting and blurring artifacts due to subject motion. No modifications to the sequence (as in [2, 3]) or the use of additional equipment (as in [1]) are required. Our method iteratively searches for the transformation that, when applied to the lines in k-space, yields the sparsest Laplacian filter output in the spatial domain.

PDF Web DOI [BibTex]


Model based reconstruction for GRE EPI

Blecher, W., Pohmann, R., Schölkopf, B., Seeger, M.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):493-494, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
Model-based nonlinear image reconstruction methods for MRI [3] are at the heart of modern reconstruction techniques (e.g., compressed sensing [6]). In general, models are expressed as a matrix equation y = Xu + e, where y and u are column vectors of k-space and image data, X is the model matrix, and e is independent noise. However, solving the corresponding linear system directly is not tractable. Therefore, fast nonlinear algorithms that minimize a function with respect to the unknown image are the method of choice. In this work, a model for gradient-echo EPI is proposed that incorporates N/2 ghost correction and correction for field inhomogeneities. In addition to reconstruction from full data, the model allows for sparse reconstruction; joint estimation of the image, field map, and relaxation map (like [5,8] for spiral imaging); and improved N/2 ghost correction.

PDF Web DOI [BibTex]


Simultaneous multimodal imaging of patients with bronchial carcinoma in a whole body MR/PET system

Brendle, C., Sauter, A., Schmidt, H., Schraml, C., Bezrukov, I., Martirosian, P., Hetzel, J., Müller, M., Claussen, C., Schwenzer, N., Pfannenberg, C.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):141, 28th annual scientific meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRB), October 2011 (poster)

Abstract
Purpose/Introduction: Lung cancer is among the most frequent cancers (1). Exact determination of tumour extent and viability is crucial for adequate therapy guidance. [18F]-FDG-PET allows accurate staging and the evaluation of therapy response based on glucose metabolism. Diffusion weighted MRI (DWI) is another promising tool for the evaluation of tumour viability (2,3). The aim of the study was the simultaneous PET-MR acquisition in lung cancer patients and correlation of PET and MR data. Subjects and Methods: Seven patients (age 38-73 years, mean 61 years) with highly suspected or known bronchial carcinoma were examined. First, a [18F]-FDG-PET/CT was performed (injected dose: 332-380 MBq). Subsequently, patients were examined at the whole-body MR/PET (Siemens Biograph mMR). The MRI is a modified 3T Verio whole body system with a magnet bore of 60 cm (max. amplitude gradients 45 mT/m, max. slew rate 200 T/m/s). Concerning the PET, the whole-body MR/PET system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The following parameters for PET acquisition were applied: 2 bed positions, 6 min/bed with an average uptake time of 124 min after injection (range: 110-143 min). The attenuation correction of PET data was conducted with a segmentation-based method provided by the manufacturer. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. DWI MR images were recorded simultaneously for each bed using two b-values (0/800 s/mm2). SUVmax and ADCmin were assessed in a ROI analysis. The following ratios were calculated: SUVmax(tumor)/SUVmean(liver) and ADCmin(tumor)/ADCmean(muscle). Correlation between SUV and ADC was analyzed (Pearson’s correlation). Results: Diagnostic scans could be obtained in all patients with good tumour delineation. The spatial matching of PET and DWI data was very exact. 
Most tumours showed a pronounced FDG-uptake in combination with decreased ADC values. Significant correlation was found between SUV and ADC ratios (r = -0.87, p = 0.0118). Discussion/Conclusion: Simultaneous MR/PET imaging of lung cancer is feasible. The whole-body MR/PET system can provide complementary information regarding tumour viability and cellularity, which could facilitate a more profound tumour characterization. Further studies have to be done to evaluate the importance of these parameters for therapy decisions and monitoring.

Web DOI [BibTex]


Support Vector Machines for finding deletions and short insertions using paired-end short reads

Grimm, D., Hagmann, J., König, D., Weigel, D., Borgwardt, KM.

International Conference on Intelligent Systems for Molecular Biology (ISMB), 2011 (poster)

Web [BibTex]


Statistical estimation for optimization problems on graphs

Langovoy, M., Sra, S.

Empirical Inference Symposium, 2011 (poster)

[BibTex]


Transfer Learning with Copulas

Lopez-Paz, D., Hernandez-Lobato, J.

Neural Information Processing Systems (NIPS), 2011 (poster)

PDF [BibTex]

2003


Texture and haptic cues in slant discrimination: Measuring the effect of texture type on cue combination

Rosas, P., Wichmann, F., Ernst, M., Wagemans, J.

Journal of Vision, 3(12):26, 2003 Fall Vision Meeting of the Optical Society of America, December 2003 (poster)

Abstract
In a number of models of depth cue combination the depth percept is constructed via a weighted-average combination of independent depth estimations. The influence of each cue in such an average depends on the reliability of the source of information (Young, Landy, & Maloney, 1993; Ernst & Banks, 2002). In particular, Ernst & Banks (2002) formulate the combination performed by the human brain as that of the minimum variance unbiased estimator that can be constructed from the available cues. Using slant discrimination and slant judgment via probe adjustment as tasks, we have observed systematic differences in the performance of human observers when a number of different types of textures were used as the cue to slant (Rosas, Wichmann & Wagemans, 2003). If the depth percept behaves as described above, our measurements of the slopes of the psychometric functions provide the predicted weights for the texture cue for the ranked texture types. We have combined these texture types with object motion, but the obtained results are difficult to reconcile with the unbiased minimum variance estimator model (Rosas & Wagemans, 2003). This apparent failure of the model might be explained by a coupling of texture and motion, violating the assumption of independence of cues. Hillis, Ernst, Banks, & Landy (2002) have shown that while for between-modality combination the human visual system has access to the single-cue information, for within-modality combination (visual cues: disparity and texture) the single-cue information is lost, suggesting a coupling between these cues. In the present study we therefore combine the different texture types with haptic information in a slant discrimination task, to test whether in the between-modality condition the texture cue and the haptic cue to slant are combined as predicted by an unbiased, minimum variance estimator model.
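The minimum-variance unbiased estimator referenced above combines independent cue estimates with inverse-variance weights (Ernst & Banks, 2002). A minimal numpy sketch, with hypothetical slant estimates and variances for a texture cue and a haptic cue:

```python
import numpy as np

def mvue_combination(estimates, variances):
    """Minimum-variance unbiased combination of independent cue estimates:
    each weight is proportional to the inverse variance of its cue."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    w = (1.0 / var) / np.sum(1.0 / var)      # w_i proportional to 1/sigma_i^2
    combined = np.sum(w * est)               # weighted-average percept
    combined_var = 1.0 / np.sum(1.0 / var)   # never exceeds the best single cue
    return combined, combined_var, w

# Hypothetical slant estimates (degrees) and variances for two cues:
slant, slant_var, w = mvue_combination([30.0, 36.0], [4.0, 12.0])
# the more reliable cue (variance 4) gets weight 0.75, so slant = 31.5
```

The combined variance (3.0 here) is lower than that of either cue alone, which is the model's central prediction.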


Web DOI [BibTex]



Phase Information and the Recognition of Natural Images

Braun, D., Wichmann, F., Gegenfurtner, K.

6, pages: 138, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Fourier phase plays an important role in determining image structure. For example, when the phase spectrum of an image showing a flower is swapped with the phase spectrum of an image showing a tank, then we will usually perceive a tank in the resulting image, even though the amplitude spectrum is still that of the flower. Also, when the phases of an image are randomly swapped across frequencies, the resulting image becomes impossible to recognize. Our goal was to evaluate the effect of phase manipulations in a more quantitative manner. On each trial subjects viewed two images of natural scenes. The subject had to indicate which one of the two images contained an animal. The spectra of the images were manipulated by adding random phase noise at each frequency. The phase noise was uniformly distributed in a symmetric interval whose half-width was varied between 0 degrees and 180 degrees. Image pairs were displayed for 100 msec. Subjects were remarkably resistant to the addition of phase noise. Even with 120-degree noise, subjects still were at a level of 75% correct. The introduction of phase noise leads to a reduction of image contrast. Subjects were slightly better than a simple prediction based on this contrast reduction. However, when contrast response functions were measured in the same experimental paradigm, we found that performance in the phase noise experiment was significantly lower than that predicted by the corresponding contrast reduction.
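The phase-noise manipulation described above can be sketched as follows. This is a plausible reconstruction, not the authors' stimulus code; a fully faithful version would impose conjugate symmetry on the noise so the inverse transform is exactly real, whereas here we simply take the real part:

```python
import numpy as np

def add_phase_noise(image, max_phase_deg, seed=None):
    """Add uniform random phase noise in [-max_phase, +max_phase] degrees to
    every Fourier component of a grayscale image while keeping the amplitude
    spectrum intact (illustrative reconstruction of the manipulation)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    noise = rng.uniform(-np.deg2rad(max_phase_deg),
                        np.deg2rad(max_phase_deg), size=image.shape)
    noisy = np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise))
    return np.real(np.fft.ifft2(noisy))  # real part; symmetry is not enforced

img = np.random.default_rng(0).random((64, 64))   # stand-in "natural" image
out = add_phase_noise(img, 120, seed=1)           # the 120-degree condition
```

Larger `max_phase_deg` scrambles more of the image structure while leaving the amplitude spectrum (and hence the rough contrast content) unchanged.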


Web [BibTex]

Constraints measures and reproduction of style in robot imitation learning

Bakir, GH., Ilg, W., Franz, MO., Giese, M.

6, pages: 70, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Imitation learning is frequently discussed as a method for generating complex behaviors in robots by imitating human actors. The kinematic and the dynamic properties of humans and robots are typically quite different, however. For this reason observed human trajectories cannot be directly transferred to robots, even if their geometry is humanoid. Instead the human trajectory must be approximated by trajectories that can be realized by the robot. During this approximation deviations from the human trajectory may arise that change the style of the executed movement. Alternatively, the style of the movement might be well reproduced, but the imitated trajectory might be suboptimal with respect to different constraint measures from robotics control, leading to non-robust behavior. The goal of the presented work is to quantify this trade-off between "imitation quality" and constraint compatibility for the imitation of complex writing movements. In our experiment, we used trajectory data from human writing movements (see the abstract of Ilg et al. in this volume). The human trajectories were mapped onto robot trajectories by minimizing an error measure that integrates constraints that are important for the imitation of movement style and a regularizing constraint that ensures smooth joint trajectories with low velocities. In a first experiment, both the end-effector position and the shoulder angle of the robot were optimized in order to achieve good imitation together with accurate control of the end-effector position. In a second experiment only the end-effector trajectory was imitated, whereas the motion of the elbow joint was determined using the optimal inverse kinematic solution for the robot. For both conditions different constraint measures (dexterity and relative joint-limit distances) and a measure for imitation quality were assessed. By controlling the weight of the regularization term we can vary continuously between robot behavior optimizing imitation quality and behavior minimizing joint velocities.
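The trade-off described above, an imitation-error term plus a weighted regularizer favoring smooth, low-velocity joint trajectories, can be written as a single objective. The quadratic form below is a hypothetical toy version, not the authors' actual error measure:

```python
import numpy as np

def imitation_objective(robot_traj, human_traj, lam):
    """Imitation error plus a velocity-smoothness regularizer weighted by lam.
    A hypothetical quadratic stand-in for the error measure described above."""
    imitation = np.sum((robot_traj - human_traj) ** 2)
    velocities = np.diff(robot_traj, axis=0)        # finite-difference velocities
    return imitation + lam * np.sum(velocities ** 2)

t = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
human = np.hstack([t, t])                 # straight-line "human" stroke
frozen = np.zeros_like(human)             # robot that does not move at all
base = imitation_objective(frozen, human, 0.0)        # pure imitation error
smooth = imitation_objective(human, human, 2.0)       # perfect imitation, velocity cost only
```

Sweeping `lam` from 0 upward moves the optimum continuously from faithful style reproduction toward slow, smooth joint trajectories, mirroring the continuum described in the abstract.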


PDF Web [BibTex]

Study of Human Classification using Psychophysics and Machine Learning

Graf, A., Wichmann, F., Bülthoff, H., Schölkopf, B.

6, pages: 149, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
We attempt to reach a better understanding of classification in humans using both psychophysical and machine learning techniques. In our psychophysical paradigm the stimuli presented to the human subjects are modified using machine learning algorithms according to their responses. Frontal views of human faces taken from a processed version of the MPI face database are employed for a gender classification task. The processing assures that all heads have the same mean intensity and the same pixel-surface area and are centered. This processing stage is followed by a smoothing of the database in order to eliminate, as much as possible, scanning artifacts. Principal Component Analysis is used to obtain a low-dimensional representation of the faces in the database. A subject is asked to classify the faces, and experimental parameters such as class (i.e. female/male), confidence ratings and reaction times are recorded. A mean classification error of 14.5% is measured and, on average, 0.5 males are classified as females and 21.3 females as males. The mean reaction time for the correctly classified faces is 1229 ± 252 ms, whereas the incorrectly classified faces have a mean reaction time of 1769 ± 304 ms, showing that reaction times increase with the subject's classification error. Reaction times are also shown to decrease with increasing confidence, both for correct and incorrect classifications. Classification errors, reaction times and confidence ratings are then correlated to concepts of machine learning such as the separating hyperplane obtained when considering Support Vector Machines, Relevance Vector Machines, boosted Prototype and K-means learners. Elements near the separating hyperplane are found to be classified with more errors than those away from it. In addition, the subject's confidence increases when moving away from the hyperplane. A preliminary analysis on the available small number of subjects indicates that K-means classification seems to reflect the subject's classification behavior best. The above learners are then used to generate "special" elements, or representations, of the low-dimensional database according to the labels given by the subject. A memory experiment follows in which the representations are shown together with faces seen or unseen during the classification experiment. This experiment aims to assess the representations by investigating whether some representations, or special elements, are classified as "seen before" despite never appearing in the classification experiment, possibly hinting at their use during human classification.
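The low-dimensional PCA representation and the distance-to-hyperplane analysis described above can be sketched as follows. The data, hyperplane, and helper names are hypothetical; the study used the MPI face database and trained learners, not random vectors:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)                          # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # low-dimensional coordinates

def hyperplane_distance(x, w, b):
    """Unsigned distance of point x from the hyperplane w.x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

rng = np.random.default_rng(0)
faces = rng.random((20, 100))     # stand-in "face" vectors, 100 pixels each
Z = pca_project(faces, 5)         # low-dimensional face representation
d = hyperplane_distance(Z[0], np.ones(5), 0.0)   # distance to a toy hyperplane
```

Sorting stimuli by this distance is one way to test the abstract's observation that elements near the separating hyperplane attract more classification errors and lower confidence.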


PDF Web [BibTex]

A Representation of Complex Movement Sequences Based on Hierarchical Spatio-Temporal Correspondence for Imitation Learning in Robotics

Ilg, W., Bakir, GH., Franz, MO., Giese, M.

6, pages: 74, (Editors: H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich, F.A. Wichmann), 6. Tübinger Wahrnehmungskonferenz (TWK), February 2003 (poster)

Abstract
Imitation learning of complex movements has become a popular topic in neuroscience, as well as in robotics. A number of conceptual as well as practical problems are still unsolved. One example is the determination of the aspects of movements which are relevant for imitation. Problems concerning the movement representation are twofold: (1) the movement characteristics of observed movements have to be transferred from the perceptual level to the level of generated actions; (2) continuous spaces of movements with variable styles have to be approximated based on a limited number of learned example sequences, so one has to use a representation with a high generalisation capability. We present methods for the representation of complex movement sequences that address these questions in the context of the imitation learning of writing movements using a robot arm with human-like geometry. For the transfer of complex movements from perception to action we exploit a learning-based method that represents complex action sequences by linear combination of prototypical examples (Ilg and Giese, BMCV 2002). The method of hierarchical spatio-temporal morphable models (HSTMM) decomposes action sequences automatically into movement primitives. These primitives are modeled by linear combinations of a small number of learned example trajectories. The learned spatio-temporal models are suitable for the analysis and synthesis of long action sequences, which consist of movement primitives with varying style parameters. The proposed method is illustrated by imitation learning of complex writing movements. Human trajectories were recorded using a commercial motion capture system (VICON). In the first step the recorded writing sequences are decomposed into movement primitives. These movement primitives can be analyzed and changed in style by defining linear combinations of prototypes with different linear weight combinations. Our system can imitate writing movements of different actors, synthesize new writing styles, and can even exaggerate the writing movements of individual actors. Words and writing movements of the robot look very natural and closely match the natural styles. These preliminary results make the proposed method promising for further applications in learning-based robotics. In this poster we focus on the acquisition of the movement representation (identification and segmentation of movement primitives, generation of new writing styles by spatio-temporal morphing). The transfer of the generated writing movements to the robot considering the given kinematic and dynamic constraints is discussed in Bakir et al. (this volume).
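The core of the morphable-model representation, a trajectory synthesized as a linear combination of prototype trajectories, can be sketched as follows. The time-alignment (correspondence) step of HSTMMs is omitted, and the stroke data are hypothetical:

```python
import numpy as np

def morph_trajectories(prototypes, weights):
    """Blend prototypical trajectories as x(t) = sum_i w_i * x_i(t).
    Spatio-temporal morphable models additionally align the prototypes in
    time before blending; that alignment is omitted in this sketch."""
    P = np.asarray(prototypes, float)   # shape: (n_prototypes, n_samples, n_dims)
    w = np.asarray(weights, float)
    return np.tensordot(w, P, axes=1)   # weighted sum over the prototype axis

t = np.linspace(0.0, 1.0, 50)
proto_a = np.stack([t, np.sin(2 * np.pi * t)], axis=1)   # one 2D writing stroke
proto_b = np.stack([t, np.cos(2 * np.pi * t)], axis=1)   # a second style
blend = morph_trajectories([proto_a, proto_b], [0.7, 0.3])
```

Varying the weights moves the synthesized stroke continuously between the two styles; weights above 1 would exaggerate a style, as the abstract describes.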


PDF Web [BibTex]

Models of contrast transfer as a function of presentation time and spatial frequency.

Wichmann, F.

2003 (poster)

Abstract
Understanding contrast transduction is essential for understanding spatial vision. Using standard 2AFC contrast discrimination experiments conducted using a carefully calibrated display we previously showed that the shape of the threshold versus (pedestal) contrast (TvC) curve changes with presentation time and the performance level defined as threshold (Wichmann, 1999; Wichmann & Henning, 1999). Additional experiments looked at the change of the TvC curve with spatial frequency (Bird, Henning & Wichmann, 2002), and at how to constrain the parameters of models of contrast processing (Wichmann, 2002). Here I report modelling results both across spatial frequency and presentation time. An extensive model-selection exploration was performed using Bayesian confidence regions for the fitted parameters as well as cross-validation methods. Bird, C.M., G.B. Henning and F.A. Wichmann (2002). Contrast discrimination with sinusoidal gratings of different spatial frequency. Journal of the Optical Society of America A, 19, 1267-1273. Wichmann, F.A. (1999). Some aspects of modelling human spatial vision: contrast discrimination. Unpublished doctoral dissertation, The University of Oxford. Wichmann, F.A. & Henning, G.B. (1999). Implications of the Pedestal Effect for Models of Contrast-Processing and Gain-Control. OSA Annual Meeting Program, 62. Wichmann, F.A. (2002). Modelling Contrast Transfer in Spatial Vision [Abstract]. Journal of Vision, 2, 7a.


[BibTex]