

2018


Motion Segmentation & Multiple Object Tracking by Correlation Co-Clustering

Keuper, M., Tang, S., Andres, B., Brox, T., Schiele, B.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018 (article)

ps

pdf DOI Project Page [BibTex]



3D nanoprinted plastic kinoform x-ray optics

Sanli, U. T., Ceylan, H., Bykova, I., Weigand, M., Sitti, M., Schütz, G., Keskinbora, K.

Advanced Materials, 30(36), Wiley-VCH, Weinheim, 2018 (article)

mms pi

DOI [BibTex]



Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images

Zuffi, S., Kanazawa, A., Black, M. J.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Animals are widespread in nature and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. Consequently, we propose a method to capture the detailed 3D shape of animals from images alone. The articulated and deformable nature of animals makes this problem extremely challenging, particularly in unconstrained environments with moving and uncalibrated cameras. To make this possible, we use a strong prior model of articulated animal shape that we fit to the image data. We then deform the animal shape in a canonical reference pose such that it matches image evidence when articulated and projected into multiple images. Our method extracts significantly more 3D shape detail than previous methods and is able to model new species, including the shape of an extinct animal, using only a few video frames. Additionally, the projected 3D shapes are accurate enough to facilitate the extraction of a realistic texture map from multiple frames.

ps

pdf code/data 3D models Project Page [BibTex]



PoTion: Pose MoTion Representation for Action Recognition

Choutas, V., Weinzaepfel, P., Revaud, J., Schmid, C.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Most state-of-the-art methods for action recognition rely on a two-stream architecture that processes appearance and motion independently. In this paper, we claim that considering them jointly offers rich information for action recognition. We introduce a novel representation that gracefully encodes the movement of some semantic keypoints. We use the human joints as these keypoints and term our Pose moTion representation PoTion. Specifically, we first run a state-of-the-art human pose estimator [4] and extract heatmaps for the human joints in each frame. We obtain our PoTion representation by temporally aggregating these probability maps. This is achieved by ‘colorizing’ each of them depending on the relative time of the frames in the video clip and summing them. This fixed-size representation for an entire video clip is suitable to classify actions using a shallow convolutional neural network. Our experimental evaluation shows that PoTion outperforms other state-of-the-art pose representations [6, 48]. Furthermore, it is complementary to standard appearance and motion streams. When combining PoTion with the recent two-stream I3D approach [5], we obtain state-of-the-art performance on the JHMDB, HMDB and UCF101 datasets.

ps

PDF [BibTex]

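The temporal "colorization" described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the two-channel color scheme, array shapes, and normalization here are assumptions (the paper also uses finer color schemes and a specific pose estimator to produce the heatmaps).

```python
import numpy as np

def potion(heatmaps):
    """Aggregate per-frame joint heatmaps into a PoTion-style image.

    heatmaps: array of shape (T, J, H, W) -- T frames, J joints.
    Returns (J, H, W, 2): each joint map is 'colorized' by relative
    time (channel 0 ramps up, channel 1 ramps down), summed over
    frames, then normalized per joint and channel.
    """
    T = heatmaps.shape[0]
    # relative time of each frame in [0, 1]
    t = np.linspace(0.0, 1.0, T) if T > 1 else np.array([0.0])
    # per-frame color weights, shape (T, 2)
    colors = np.stack([t, 1.0 - t], axis=1)
    # weighted sum over time -> (J, H, W, 2)
    agg = np.einsum('tjhw,tc->jhwc', heatmaps, colors)
    # normalize each joint/channel map to [0, 1]
    maxval = agg.max(axis=(1, 2), keepdims=True)
    return agg / np.maximum(maxval, 1e-8)
```

A heatmap peak that appears only early in the clip ends up only in the "late-ramp-down" channel, and a late peak only in the other, so the single fixed-size image retains when, not just where, each joint was active.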


Controllable switching between planar and helical flagellar swimming of a soft robotic sperm

Khalil, I. S. M., Tabak, A. F., Seif, M. A., Klingner, A., Sitti, M.

PLoS ONE, 13(11):e0206456, 2018 (article)

pi

[BibTex]



Kinetics of orbitally shaken particles constrained to two dimensions

Ipparthi, D., Hageman, T. A. G., Cambier, N., Sitti, M., Dorigo, M., Abelmann, L., Mastrangeli, M.

Physical Review E, 98(4):042137, 2018 (article)

pi

[BibTex]



Seed-mediated synthesis of plasmonic gold nanoribbons using cancer cells for hyperthermia applications

Singh, A. V., Alapan, Y., Jahnke, T., Laux, P., Luch, A., Aghakhani, A., Kharratian, S., Onbasli, M. C., Bill, J., Sitti, M.

Journal of Materials Chemistry B, 6(46):7573-7581, 2018 (article)

pi

[BibTex]


1998


Summarization of video-taped presentations: Automatic analysis of motion and gesture

Ju, S. X., Black, M. J., Minneman, S., Kimber, D.

IEEE Trans. on Circuits and Systems for Video Technology, 8(5):686-696, September 1998 (article)

Abstract
This paper presents an automatic system for analyzing and annotating video sequences of technical talks. Our method uses a robust motion estimation technique to detect key frames and segment the video sequence into subsequences containing a single overhead slide. The subsequences are stabilized to remove motion that occurs when the speaker adjusts their slides. Any changes remaining between frames in the stabilized sequences may be due to speaker gestures such as pointing or writing, and we use active contours to automatically track these potential gestures. Given the constrained domain, we define a simple set of actions that can be recognized based on the active contour shape and motion. The recognized actions provide an annotation of the sequence that can be used to access a condensed version of the talk from a Web page.

ps

pdf pdf from publisher DOI [BibTex]



Robust anisotropic diffusion

Black, M. J., Sapiro, G., Marimont, D., Heeger, D.

IEEE Transactions on Image Processing, 7(3):421-432, March 1998 (article)

Abstract
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The edge-stopping function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new edge-stopping function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.

ps

pdf pdf from publisher [BibTex]

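The Tukey-biweight edge-stopping idea from the abstract can be sketched as a Perona-Malik-style iteration. This is a minimal sketch under assumptions: the scale sigma, step size, iteration count, and border handling are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function: smooth inside the scale
    sigma, exactly zero beyond it, so large gradients stop diffusion."""
    g = 0.5 * (1.0 - (x / sigma) ** 2) ** 2
    return np.where(np.abs(x) <= sigma, g, 0.0)

def anisotropic_diffusion(img, n_iter=20, sigma=0.1, lam=0.25):
    """Diffuse img with the Tukey stopping function.

    Nearest-neighbor differences in the four compass directions are
    weighted by g and accumulated; lam <= 0.25 keeps the explicit
    update stable. Wrapped border differences are zeroed (no flux).
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        n = np.roll(u, -1, axis=0) - u; n[-1, :] = 0
        s = np.roll(u, 1, axis=0) - u;  s[0, :] = 0
        e = np.roll(u, -1, axis=1) - u; e[:, -1] = 0
        w = np.roll(u, 1, axis=1) - u;  w[:, 0] = 0
        u += lam * (tukey_g(n, sigma) * n + tukey_g(s, sigma) * s
                    + tukey_g(e, sigma) * e + tukey_g(w, sigma) * w)
    return u
```

Because the Tukey weight is exactly zero for differences above sigma, a step edge taller than sigma is left untouched while low-amplitude noise in flat regions is smoothed away; this hard cut-off is what distinguishes the biweight from the Lorentzian-style stopping functions of earlier formulations.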


The Digital Office: Overview

Black, M., Berard, F., Jepson, A., Newman, W., Saund, E., Socher, G., Taylor, M.

In AAAI Spring Symposium on Intelligent Environments, pages: 1-6, Stanford, March 1998 (inproceedings)

ps

pdf [BibTex]



A framework for modeling appearance change in image sequences

Black, M. J., Fleet, D. J., Yacoob, Y.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 660-667, Mumbai, India, January 1998 (inproceedings)

Abstract
Image "appearance" may change over time due to a variety of causes such as 1) object or camera motion; 2) generic photometric events including variations in illumination (e.g. shadows) and specular reflections; and 3) "iconic changes" which are specific to the objects being viewed and include complex occlusion events and changes in the material properties of the objects. We propose a general framework for representing and recovering these "appearance changes" in an image sequence as a "mixture" of different causes. The approach generalizes previous work on optical flow to provide a richer description of image events and more reliable estimates of image motion.

ps

pdf video [BibTex]



Parameterized modeling and recognition of activities

Yacoob, Y., Black, M. J.

In Sixth International Conf. on Computer Vision, ICCV’98, pages: 120-127, Mumbai, India, January 1998 (inproceedings)

Abstract
A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented.

ps

pdf [BibTex]



Tele-nanorobotics using an atomic force microscope as a nanorobot and sensor

Sitti, M., Hashimoto, H.

Advanced Robotics, 13(4):417-436, Taylor & Francis, 1998 (article)

pi

[BibTex]



Nano tele-manipulation using virtual reality interface

Sitti, M., Horiguchi, S., Hashimoto, H.

In Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE'98), 1, pages: 171-176, 1998 (inproceedings)

pi

[BibTex]



Motion feature detection using steerable flow fields

Fleet, D. J., Black, M. J., Jepson, A. D.

In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR-98, pages: 274-281, IEEE, Santa Barbara, CA, 1998 (inproceedings)

Abstract
The estimation and detection of occlusion boundaries and moving bars are important and challenging problems in image sequence analysis. Here, we model such motion features as linear combinations of steerable basis flow fields. These models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models. We estimate the subspace coefficients of the motion feature models directly from spatiotemporal image derivatives using a robust regression method. From the subspace coefficients we detect the presence of a motion feature and solve for the orientation of the feature and the relative velocities of the surfaces. Our method does not require the prior computation of optical flow and recovers accurate estimates of orientation and velocity.

ps

pdf [BibTex]



Tele-nanorobotics using atomic force microscope

Sitti, M., Hashimoto, H.

In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3, pages: 1739-1746, 1998 (inproceedings)

pi

[BibTex]



PLAYBOT: A visually-guided robot for physically disabled children

Tsotsos, J. K., Verghese, G., Dickinson, S., Jenkin, M., Jepson, A., Milios, E., Nuflo, F., Stevenson, S., Black, M., Metaxas, D., Culhane, S., Ye, Y., Mann, R.

Image & Vision Computing, Special Issue on Vision for the Disabled, 16(4):275-292, 1998 (article)

Abstract
This paper overviews the PLAYBOT project, a long-term, large-scale research program whose goal is to provide a directable robot which may enable physically disabled children to access and manipulate toys. This domain is the first test domain, but there is nothing inherent in the design of PLAYBOT that prohibits its extension to other tasks. The research is guided by several important goals: vision is the primary sensor; vision is task directed; the robot must be able to visually search its environment; object and event recognition are basic capabilities; environments must be natural and dynamic; users and environments are assumed to be unpredictable; task direction and reactivity must be smoothly integrated; and safety is of high importance. The emphasis of the research has been on vision for the robot, as this is the most challenging research aspect and the major bottleneck to the development of intelligent robots. Since the control framework is behavior-based, the visual capabilities of PLAYBOT are described in terms of visual behaviors. Many of the components of PLAYBOT are briefly described and several examples of implemented sub-systems are shown. The paper concludes with a description of the current overall system implementation, and a complete example of PLAYBOT performing a simple task.

ps

pdf pdf from publisher DOI [BibTex]



Visual surveillance of human activity

Davis, L. S., Fejes, S., Harwood, D., Yacoob, Y., Haritaoglu, I., Black, M.

In Asian Conference on Computer Vision, ACCV, 1998 (inproceedings)

ps

pdf [BibTex]



A Probabilistic framework for matching temporal trajectories: Condensation-based recognition of gestures and expressions

Black, M. J., Jepson, A. D.

In European Conf. on Computer Vision, ECCV-98, pages: 909-924, Freiburg, Germany, 1998 (inproceedings)

ps

pdf [BibTex]



2D micro particle assembly using atomic force microscope

Sitti, M., Hirahara, K., Hashimoto, H.

In Proceedings of the 1998 International Symposium on Micromechatronics and Human Science (MHS'98), pages: 143-148, 1998 (inproceedings)

pi

[BibTex]



Macro to nano tele-manipulation through nanoelectromechanical systems

Sitti, M., Hashimoto, H.

In Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON'98), 1, pages: 98-103, 1998 (inproceedings)

pi

[BibTex]



EigenTracking: Robust matching and tracking of articulated objects using a view-based representation

Black, M. J., Jepson, A.

International Journal of Computer Vision, 26(1):63-84, 1998 (article)

Abstract
This paper describes an approach for tracking rigid and articulated objects using a view-based representation. The approach builds on and extends work on eigenspace representations, robust estimation techniques, and parameterized optical flow estimation. First, we note that the least-squares image reconstruction of standard eigenspace techniques has a number of problems and we reformulate the reconstruction problem as one of robust estimation. Second we define a “subspace constancy assumption” that allows us to exploit techniques for parameterized optical flow estimation to simultaneously solve for the view of an object and the affine transformation between the eigenspace and the image. To account for large affine transformations between the eigenspace and the image we define a multi-scale eigenspace representation and a coarse-to-fine matching strategy. Finally, we use these techniques to track objects over long image sequences in which the objects simultaneously undergo both affine image motions and changes of view. In particular we use this “EigenTracking” technique to track and recognize the gestures of a moving hand.

ps

pdf pdf from publisher video [BibTex]
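The robust reconstruction step described in the abstract, replacing the least-squares projection of standard eigenspace methods with a robust estimate, can be sketched with iteratively reweighted least squares. This is a minimal sketch under assumptions: the Geman-McClure weights, sigma, and iteration count are illustrative choices, and the affine-warp search and multi-scale matching of the full EigenTracking method are omitted.

```python
import numpy as np

def robust_subspace_coeffs(B, y, sigma=0.1, n_iter=30):
    """Robustly estimate subspace coefficients c so that B @ c fits y.

    B: (d, k) orthonormal eigenbasis; y: (d,) observed image vector.
    Instead of the least-squares projection c = B.T @ y, coefficients
    are refined by iteratively reweighted least squares under a
    Geman-McClure error norm, so outlier pixels are down-weighted.
    """
    c = B.T @ y                      # least-squares initialization
    for _ in range(n_iter):
        r = y - B @ c                # per-pixel residuals
        # Geman-McClure influence/residual weights: 2*s^2 / (s^2 + r^2)^2
        w = 2 * sigma**2 / (sigma**2 + r**2) ** 2
        Bw = B * w[:, None]
        # weighted normal equations: (B^T W B) c = B^T W y
        c = np.linalg.solve(B.T @ Bw, Bw.T @ y)
    return c
```

With a single grossly corrupted pixel, the least-squares projection is biased by the full outlier magnitude, while the reweighted estimate drives that pixel's weight toward zero and recovers the inlier-consistent coefficients.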


Recognizing temporal trajectories using the Condensation algorithm

Black, M. J., Jepson, A. D.

In Int. Conf. on Automatic Face and Gesture Recognition, pages: 16-21, Nara, Japan, 1998 (inproceedings)

ps

pdf [BibTex]



Looking at people in action - An overview

Yacoob, Y., Davis, L. S., Black, M., Gavrila, D., Horprasert, T., Morimoto, C.

In Computer Vision for Human–Machine Interaction, (Editors: R. Cipolla and A. Pentland), Cambridge University Press, 1998 (incollection)

ps

publisher site google books [BibTex]



In vivo diabetic wound healing with nanofibrous scaffolds modified with gentamicin and recombinant human epidermal growth factor

Dwivedi, C., Pandey, I., Pandey, H., Patil, S., Mishra, S. B., Pandey, A. C., Zamboni, P., Ramteke, P. W., Singh, A. V.

Journal of Biomedical Materials Research Part A, 106(3):641-651, March 2018 (article)

Abstract
Diabetic wounds are susceptible to microbial infection. The treatment of these wounds requires a higher payload of growth factors. With this in mind, the strategy for this study was to utilize a novel payload comprising Eudragit RL/RS 100 nanofibers carrying the bacterial inhibitor gentamicin sulfate (GS) in concert with recombinant human epidermal growth factor (rhEGF), an accelerator of wound healing. GS containing Eudragit was electrospun to yield nanofiber scaffolds, which were further modified by covalent immobilization of rhEGF to their surface. This novel fabricated nanoscaffold was characterized using scanning electron microscopy, Fourier transform infrared spectroscopy, and X-ray diffraction. The thermal behavior of the nanoscaffold was determined using thermogravimetric analysis and differential scanning calorimetry. In the in vitro antibacterial assays, the nanoscaffolds exhibited comparable antibacterial activity to pure gentamicin powder. In in vivo work using female C57/BL6 mice, the nanoscaffolds induced faster wound healing activity in dorsal wounds compared to the control. The paradigm in this study presents a robust in vivo model to enhance the applicability of drug delivery systems in wound healing applications.

pi

link (url) DOI [BibTex]




Robotics Research

Tong, Chi Hay, Furgale, Paul, Barfoot, Timothy D, Guizilini, Vitor, Ramos, Fabio, Chen, Yushan, Tůmová, Jana, Ulusoy, Alphan, Belta, Calin, Tenorth, Moritz, others

(article)

pi

[BibTex]
