Embodied Vision

Visual-Inertial State Estimation with Online Adaptation of Mobile Robot Dynamics Models

ST-VIO tightly integrates visual, inertial, and dynamic motion constraints and calibrates the motion model online. © IEEE. Reprinted, with permission, from the ICRA 2024 paper listed below.

Members

  • Max Planck Research Group Leader
  • Master Student
  • Research Associate

Publications

Embodied Vision Conference Paper Online Calibration of a Single-Track Ground Vehicle Dynamics Model by Tight Fusion with Visual-Inertial Odometry Li, H., Stueckler, J. In IEEE International Conference on Robotics and Automation (ICRA 2024), 1631-1637, Piscataway, NJ, August 2024 (Published)
Wheeled mobile robots need the ability to estimate their motion and the effect of their control actions for navigation planning. In this paper, we present ST-VIO, a novel approach which tightly fuses a single-track dynamics model for wheeled ground vehicles with visual-inertial odometry (VIO). Our method calibrates and adapts the dynamics model online to improve the accuracy of forward prediction conditioned on future control inputs. The single-track dynamics model approximates wheeled vehicle motion under specific control inputs on flat ground using ordinary differential equations. We use a singularity-free and differentiable variant of the single-track model to enable seamless integration as dynamics factor into VIO and to optimize the model parameters online together with the VIO state variables. We validate our method with real-world data in both indoor and outdoor environments with different terrain types and wheels. In experiments, we demonstrate that ST-VIO can not only adapt to wheel or ground changes and improve the accuracy of prediction under new control inputs, but can even improve tracking accuracy.
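As a rough illustration of the idea described above (and not the paper's actual formulation), the following Python sketch shows how a velocity-controlled single-track model could be propagated through recorded control inputs and used as a prediction residual between two keyframe states. The model structure, the parameter names k_v, k_s, L, and the explicit Euler integration are illustrative assumptions.

```python
import numpy as np

def single_track_step(state, control, params, dt):
    """One Euler step of a simple kinematic single-track (bicycle) model.

    state   : [x, y, yaw, v]      planar pose and forward speed
    control : [v_cmd, steer_cmd]  commanded speed and steering angle
    params  : dict with wheelbase "L" and gains "k_v", "k_s" (illustrative,
              standing in for the online-calibrated model parameters)
    """
    x, y, yaw, v = state
    v_cmd, steer = control
    v_dot = params["k_v"] * (v_cmd - v)            # first-order lag to commanded speed
    x_dot = v * np.cos(yaw)
    y_dot = v * np.sin(yaw)
    yaw_dot = v * np.tan(params["k_s"] * steer) / params["L"]
    return np.array([x + dt * x_dot,
                     y + dt * y_dot,
                     yaw + dt * yaw_dot,
                     v + dt * v_dot])

def dynamics_residual(state_i, state_j, controls, params, dt):
    """Residual of a dynamics factor between two keyframe states: propagate
    state_i through the recorded controls and compare with state_j."""
    pred = np.asarray(state_i, dtype=float).copy()
    for u in controls:
        pred = single_track_step(pred, u, params, dt)
    err = pred - np.asarray(state_j, dtype=float)
    err[2] = (err[2] + np.pi) % (2.0 * np.pi) - np.pi   # wrap yaw difference
    return err
```

In a tightly coupled system, a residual of this form would be added to the optimization alongside the visual and inertial terms, and the model parameters would be estimated jointly with the VIO state variables.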

Embodied Vision Autonomous Motion Movement Generation and Control Conference Paper Visual-Inertial and Leg Odometry Fusion for Dynamic Locomotion Dhédin, V., Li, H., Khorshidi, S., Mack, L., Ravi, A. K. C., Meduri, A., Shah, P., Grimminger, F., Righetti, L., Khadiv, M., Stueckler, J. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2023 (Published)
Implementing dynamic locomotion behaviors on legged robots requires a high-quality state estimation module. Especially when the motion includes flight phases, state-of-the-art approaches fail to produce reliable estimates of the robot posture, in particular the base height. In this paper, we propose a novel approach for combining visual-inertial odometry (VIO) with leg odometry in an extended Kalman filter (EKF) based state estimator. The VIO module uses a stereo camera and IMU to yield low-drift 3D position and yaw orientation and drift-free pitch and roll orientation of the robot base link in the inertial frame. However, these estimates arrive with considerable latency due to image processing and optimization, and their update rate is too low for low-level control. To reduce the latency, we predict the VIO state estimate at the rate of the IMU measurements of the VIO sensor. The EKF module uses the base pose and linear velocity predicted by VIO, fuses them further with a second high-rate IMU and leg odometry measurements, and produces robot state estimates at high frequency and with low latency, suitable for control. We integrate this lightweight estimation framework with a nonlinear model predictive controller and show successful implementation of a set of agile locomotion behaviors, including trotting and jumping at varying horizontal speeds, on a torque-controlled quadruped robot.
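To make the fusion structure described above concrete, here is a minimal EKF sketch over base position and linear velocity only: high-rate prediction from IMU acceleration, low-rate corrections from VIO positions and leg-odometry velocities. It deliberately omits orientation estimation, IMU biases, and the latency compensation discussed in the abstract; all names and noise values are assumptions.

```python
import numpy as np

class BaseStateEKF:
    """Minimal EKF over the base state x = [p (3), v (3)] in the world frame."""

    def __init__(self):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 1e-2

    def predict(self, acc_world, dt, q=1e-2):
        """High-rate prediction with gravity-compensated IMU acceleration."""
        F = np.eye(6)
        F[0:3, 3:6] = np.eye(3) * dt
        self.x = F @ self.x
        self.x[3:6] += acc_world * dt
        self.P = F @ self.P @ F.T + np.eye(6) * q * dt

    def _update(self, z, H, R):
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

    def update_vio_position(self, p_vio, r=1e-3):
        """Low-rate position correction from the (predicted) VIO estimate."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self._update(p_vio, H, np.eye(3) * r)

    def update_leg_velocity(self, v_leg, r=1e-2):
        """Velocity correction from leg odometry (stance-leg kinematics)."""
        H = np.hstack([np.zeros((3, 3)), np.eye(3)])
        self._update(v_leg, H, np.eye(3) * r)
```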

Embodied Vision Article Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models Li, H., Stueckler, J. IEEE Robotics and Automation Letters, 7(3):6415-6422, July 2022, Accepted for oral presentation at IEEE ICRA 2023 (Published)
Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints. In this paper, we propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control based kinematic motion model of wheeled mobile robots online. Including such a motion model can help to improve the accuracy of VIO. Compared to several previous approaches that integrate wheel odometer measurements for this purpose, our method does not require wheel encoders and can be applied whenever the robot motion can be modeled with a velocity-control based kinematic motion model. We use radial basis function (RBF) kernels to compensate for the time delay and deviations between control commands and actual robot motion. The motion model is calibrated online by the VIO system and can be used as a forward model for motion control and planning. We evaluate our approach with data obtained in variously sized indoor environments, demonstrate improvements over a pure VIO method, and evaluate the prediction accuracy of the online calibrated model.
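As a sketch of how RBF kernels could weight recent velocity commands to account for the delay and deviation between commanded and actual motion, consider the following; the kernel placement, the calibration parameters theta, and the normalization are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def rbf_weights(t_query, t_controls, centers, length_scale):
    """Gaussian RBF activations over the time offsets between past control
    commands and the query time (centers/length_scale are assumed values)."""
    offsets = t_query - np.asarray(t_controls)      # seconds since each command
    centers = np.asarray(centers)
    return np.exp(-0.5 * ((offsets[:, None] - centers[None, :]) / length_scale) ** 2)

def effective_velocity(t_query, t_controls, v_commands, centers, length_scale, theta):
    """Effective [v, omega] as an RBF-weighted combination of recent commands;
    theta plays the role of the online-calibrated parameters."""
    W = rbf_weights(t_query, t_controls, centers, length_scale)  # (n_cmd, n_centers)
    w = W @ np.asarray(theta)                                    # per-command weight
    w = w / (np.sum(w) + 1e-9)
    return w @ np.asarray(v_commands)

def unicycle_step(pose, v_eff, dt):
    """Integrate planar unicycle kinematics with the effective velocity."""
    x, y, yaw = pose
    v, omega = v_eff
    return np.array([x + dt * v * np.cos(yaw),
                     y + dt * v * np.sin(yaw),
                     yaw + dt * omega])
```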

Embodied Vision Article Visual-Inertial Mapping with Non-Linear Factor Recovery Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D. IEEE Robotics and Automation Letters (RA-L), 5(2):422-429, 2020, presented at IEEE International Conference on Robotics and Automation (ICRA) 2020, preprint arXiv:1904.06504 (Published)
Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. Estimating motion and geometry from a set of images requires large baselines, so most systems operate on keyframes with large time intervals between each other. Inertial data, on the other hand, degrades quickly with the duration of these intervals and typically contains little useful information after several seconds of integration. In this paper, we propose to extract the relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that optimally approximate the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable and improve the robustness and accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
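To spell out one way the "optimal approximation" could be posed: if both the VIO marginal over the keyframe states, p_v = N(mu_v, Sigma_v), and the distribution induced by the recovered factors, p_f = N(mu_f, Sigma_f), are treated as Gaussians, the recovery objective can be written as a Kullback-Leibler divergence. The expression below is simply the standard Gaussian KL formula, shown as a plausible formalization rather than the paper's exact objective.

```latex
% Gaussian KL divergence between the VIO marginal p_v and the
% factor-induced distribution p_f over a d-dimensional state:
D_{\mathrm{KL}}\!\left(p_v \,\Vert\, p_f\right)
  = \tfrac{1}{2}\Bigl[\operatorname{tr}\!\bigl(\Sigma_f^{-1}\Sigma_v\bigr)
    + (\mu_f - \mu_v)^{\top}\Sigma_f^{-1}(\mu_f - \mu_v)
    - d
    + \ln\tfrac{\det\Sigma_f}{\det\Sigma_v}\Bigr]
```

Minimizing such an objective over the parameters of a chosen sparse factor structure (for example, relative-pose and gravity-direction factors between keyframes) would yield factors that preserve as much of the VIO information as that structure allows.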

Embodied Vision Conference Paper Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry Yang, N., Wang, R., Stueckler, J., Cremers, D. In European Conference on Computer Vision (ECCV), September 2018, oral presentation, preprint https://arxiv.org/abs/1807.02570 (Published)

Embodied Vision Conference Paper Direct Sparse Odometry With Rolling Shutter Schubert, D., Usenko, V., Demmel, N., Stueckler, J., Cremers, D. In European Conference on Computer Vision (ECCV), September 2018, oral presentation (Published)

Embodied Vision Article Omnidirectional DSO: Direct Sparse Odometry with Fisheye Cameras Matsuki, H., von Stumberg, L., Usenko, V., Stueckler, J., Cremers, D. IEEE Robotics and Automation Letters (RA-L) & Int. Conference on Intelligent Robots and Systems (IROS), IEEE, 2018

Embodied Vision Conference Paper The TUM VI Benchmark for Evaluating Visual-Inertial Odometry Schubert, D., Goll, T., Demmel, N., Usenko, V., Stueckler, J., Cremers, D. In IEEE International Conference on Intelligent Robots and Systems (IROS), 2018, arXiv:1804.06120

Embodied Vision Conference Paper Keyframe-Based Visual-Inertial Online SLAM with Relocalization Kasyanov, A., Engelmann, F., Stueckler, J., Leibe, B. In IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS), 2017

Embodied Vision Conference Paper CPA-SLAM: Consistent Plane-Model Alignment for Direct RGB-D SLAM Ma, L., Kerl, C., Stueckler, J., Cremers, D. In IEEE International Conference on Robotics and Automation (ICRA), 2016

Embodied Vision Conference Paper Direct Visual-Inertial Odometry with Stereo Cameras Usenko, V., Engel, J., Stueckler, J., Cremers, D. In IEEE International Conference on Robotics and Automation (ICRA), 2016

Embodied Vision Conference Paper Reconstructing Street-Scenes in Real-Time From a Driving Car Usenko, V., Engel, J., Stueckler, J., Cremers, D. In Proc. of the Int. Conference on 3D Vision (3DV), October 2015

Embodied Vision Conference Paper Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras Kerl, C., Stueckler, J., Cremers, D. In IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015

Embodied Vision Conference Paper Large-Scale Direct SLAM with Stereo Cameras Engel, J., Stueckler, J., Cremers, D. In IEEE International Conference on Intelligent Robots and Systems (IROS), 2015 (Published)

Embodied Vision Conference Paper Super-Resolution Keyframe Fusion for 3D Modeling with High-Quality Textures Maier, R., Stueckler, J., Cremers, D. In International Conference on 3D Vision (3DV), 2015

Embodied Vision Conference Paper Combining the Strengths of Sparse Interest Point and Dense Image Registration for RGB-D Odometry Stueckler, J., Gutt, A., Behnke, S. In Proc. of the Joint 45th International Symposium on Robotics (ISR) and 8th German Conference on Robotics (ROBOTIK), 2014

Embodied Vision Article Multi-Resolution Surfel Maps for Efficient Dense 3D Modeling and Tracking Stueckler, J., Behnke, S. Journal of Visual Communication and Image Representation (JVCI), 25(1):137-147, 2014