In our works [], we investigate visual-inertial odometry approaches that enable wheeled robots to estimate their 3D motion in real time and to adapt the kinematic or dynamic model of the robot online to variations in robot and terrain properties. Kin-VIO [] adapts the parameters of the mapping from velocity controls to an effective control command for a kinematic velocity-based motion model. In ST-VIO [], a single-track dynamics model formulated as an ordinary differential equation (ODE) is used to predict robot motion; several of its parameters, which map steering angle and thrust commands to robot motion, are calibrated online.
Wheeled mobile robots need the ability to estimate their motion and the effect of their control actions for navigation planning. In this paper, we present ST-VIO, a novel approach which tightly fuses a single-track dynamics model for wheeled ground vehicles with visual-inertial odometry (VIO). Our method calibrates and adapts the dynamics model online to improve the accuracy of forward prediction conditioned on future control inputs. The single-track dynamics model approximates wheeled vehicle motion under specific control inputs on flat ground using ordinary differential equations. We use a singularity-free and differentiable variant of the single-track model to enable seamless integration as a dynamics factor into VIO and to optimize the model parameters online together with the VIO state variables. We validate our method with real-world data in both indoor and outdoor environments with different terrain types and wheels. In experiments, we demonstrate that ST-VIO can not only adapt to wheel or ground changes and improve the accuracy of prediction under new control inputs, but can even improve tracking accuracy.
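To illustrate the kind of forward model involved, the following is a minimal sketch of a single-track (bicycle-type) ODE integrated with Euler steps. The parameter names (steer_gain, thrust_gain, drag, wheelbase), the mapping from commands to model inputs, and the integration scheme are assumptions made for illustration; this is not the singularity-free dynamic formulation used in ST-VIO.

```python
import numpy as np

def single_track_step(state, control, params, dt):
    """One Euler step of a simplified single-track (bicycle-type) model.

    state   : [x, y, yaw, v]           planar pose and forward speed
    control : [steer_cmd, thrust_cmd]  raw control inputs
    params  : hypothetical calibration parameters (not ST-VIO's actual set)
    """
    x, y, yaw, v = state
    steer_cmd, thrust_cmd = control

    # Map raw commands to an effective steering angle and acceleration;
    # these gains stand in for parameters that would be calibrated online.
    delta = params["steer_gain"] * steer_cmd
    accel = params["thrust_gain"] * thrust_cmd - params["drag"] * v

    # Planar motion on flat ground (kinematic bicycle equations).
    dstate = np.array([
        v * np.cos(yaw),
        v * np.sin(yaw),
        v * np.tan(delta) / params["wheelbase"],
        accel,
    ])
    return state + dt * dstate

# Forward prediction conditioned on a sequence of future control inputs.
params = {"steer_gain": 0.35, "thrust_gain": 2.0, "drag": 0.1, "wheelbase": 0.3}
state = np.array([0.0, 0.0, 0.0, 0.5])
for control in [(0.2, 0.5)] * 100:          # 1 s of controls at 10 ms steps
    state = single_track_step(state, control, params, dt=0.01)
print(state)                                # predicted pose and speed after 1 s
```

In ST-VIO, the model is differentiable with respect to its parameters, so the parameters can be optimized jointly with the VIO state variables rather than tuned by hand.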
Implementing dynamic locomotion behaviors on legged robots requires a high-quality state estimation module. Especially when the motion includes flight phases, state-of-the-art approaches fail to produce reliable estimates of the robot posture, in particular the base height. In this paper, we propose a novel approach for combining visual-inertial odometry (VIO) with leg odometry in an extended Kalman filter (EKF) based state estimator. The VIO module uses a stereo camera and IMU to yield low-drift 3D position and yaw orientation and drift-free pitch and roll orientation of the robot base link in the inertial frame. However, these values have a considerable amount of latency due to image processing and optimization, while the update rate is quite low, which is not suitable for low-level control. To reduce the latency, we predict the VIO state estimate at the rate of the IMU measurements of the VIO sensor. The EKF module uses the base pose and linear velocity predicted by VIO, fuses them further with a second high-rate IMU and leg odometry measurements, and produces robot state estimates with a high frequency and small latency suitable for control. We integrate this lightweight estimation framework with a nonlinear model predictive controller and show successful implementation of a set of agile locomotion behaviors, including trotting and jumping at varying horizontal speeds, on a torque-controlled quadruped robot.
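The latency-compensation step can be pictured as plain IMU dead reckoning from the last (delayed) VIO estimate, as in the sketch below. It assumes bias-corrected IMU samples and uses SciPy's Rotation type for orientation; the function and variable names are hypothetical, and this is not the paper's EKF, only the idea of forward-predicting the VIO state at IMU rate.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])         # world-frame gravity (z up)

def predict_at_imu_rate(p, v, q, imu_samples, dt):
    """Forward-predict a delayed VIO estimate using IMU dead reckoning.

    p, v        : position and linear velocity in the world frame
    q           : base orientation as a scipy Rotation (world <- body)
    imu_samples : iterable of (gyro [rad/s], accel [m/s^2]) in the body frame,
                  assumed bias-corrected (an assumption of this sketch)
    """
    for gyro, accel in imu_samples:
        a_world = q.apply(accel) + GRAVITY     # specific force in world frame + gravity
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
        q = q * R.from_rotvec(gyro * dt)       # integrate angular velocity
    return p, v, q

# Roll a 50 ms old VIO estimate forward over 50 IMU samples at 1 kHz.
p0, v0, q0 = np.zeros(3), np.array([0.3, 0.0, 0.0]), R.identity()
samples = [(np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 9.81]))] * 50
p, v, q = predict_at_imu_rate(p0, v0, q0, samples, dt=0.001)
```

The predicted base pose and velocity then serve as measurements for the EKF, which fuses them with the second high-rate IMU and the leg odometry.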
Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints. In this paper, we propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control-based kinematic motion model of wheeled mobile robots online. Including such a motion model can help to improve the accuracy of VIO. Compared to several previous approaches proposed to integrate wheel odometer measurements for this purpose, our method does not require wheel encoders and can be applied whenever the robot motion can be modeled with a velocity-control-based kinematic motion model. We use radial basis function (RBF) kernels to compensate for the time delay and deviations between control commands and actual robot motion. The motion model is calibrated online by the VIO system and can be used as a forward model for motion control and planning. We evaluate our approach with data obtained in variously sized indoor environments, demonstrate improvements over a pure VIO method, and evaluate the prediction accuracy of the online-calibrated model.
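To give an idea of how RBF kernels could weight the recent command history against a delayed, smoothed robot response, here is a minimal sketch. The kernel centers, weights, and width are hypothetical parameters that the VIO system would calibrate online in this illustration; the actual formulation in the paper may differ.

```python
import numpy as np

def rbf_effective_command(t, cmd_times, cmd_values, centers, weights, sigma):
    """Map the recent command history to an effective velocity command at time t.

    The effective command is a convex combination of past commands, weighted by
    RBF kernels evaluated on the time offset (t - t_cmd). This accounts for the
    delay and smoothing between what was commanded and how the robot moved.
    centers, weights, sigma: hypothetical parameters calibrated online.
    """
    offsets = t - np.asarray(cmd_times)                    # age of each past command
    centers = np.asarray(centers)
    phi = np.exp(-((offsets[None, :] - centers[:, None]) ** 2) / (2.0 * sigma ** 2))
    kernel = np.asarray(weights) @ phi                     # combined weight per command
    kernel = kernel / (kernel.sum() + 1e-9)                # normalize the weighting
    return kernel @ np.asarray(cmd_values)                 # effective (delayed) command

# Commands issued every 50 ms; query the effective command at t = 0.3 s.
cmd_times = np.arange(0.0, 0.3, 0.05)
cmd_values = np.array([0.0, 0.2, 0.4, 0.4, 0.4, 0.4])      # commanded forward velocity
effective_v = rbf_effective_command(
    0.3, cmd_times, cmd_values,
    centers=[0.05, 0.15, 0.25], weights=[1.0, 0.5, 0.1], sigma=0.05,
)
```

The effective command then drives the kinematic motion model used as a factor in VIO and as a forward model for planning.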
Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. To estimate the motion and geometry with a set of images, large baselines are required. Because of that, most systems operate on keyframes that have large time intervals between each other. Inertial data, on the other hand, quickly degrades with the duration of the intervals; after several seconds of integration, it typically contains only little useful information. In this paper, we propose to extract relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that make an optimal approximation of the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable and improve the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
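For intuition, the linear-Gaussian core of recovering a factor over a subset of keyframe variables is simply marginalization of the VIO estimate, as in the toy sketch below. The full method recovers non-linear factors whose product optimally approximates the VIO information and handles overlapping factor supports, which this linear example does not attempt; all names and the setup are illustrative only.

```python
import numpy as np

def recover_marginal_factor(mu, info, idx):
    """Recover a Gaussian factor on a subset of variables (linear toy case).

    mu, info : mean and information matrix of the VIO estimate over all
               keyframe variables
    idx      : indices of the variables the recovered factor should connect
    The factor is the marginal: mean mu[idx], information = inverse of the
    corresponding covariance block.
    """
    cov = np.linalg.inv(info)                      # dense covariance of the estimate
    cov_sub = cov[np.ix_(idx, idx)]                # marginal covariance of the subset
    return mu[idx], np.linalg.inv(cov_sub)         # factor mean and information

# Toy example: a 3-variable chain; recover a factor linking variables 0 and 2.
info = np.array([[ 2.0, -1.0,  0.0],
                 [-1.0,  2.0, -1.0],
                 [ 0.0, -1.0,  2.0]])
mu = np.array([0.0, 1.0, 2.0])
mean_02, info_02 = recover_marginal_factor(mu, info, [0, 2])
```

In the actual system, the recovered factors are combined with loop-closing constraints in a global bundle adjustment, where they contribute the gravity-direction (roll and pitch) information that purely visual constraints cannot provide.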