For autonomous agents to operate reliably in the real world, they must perceive their environment, estimate their own and the environment's state, and derive appropriate actions based on their beliefs and the given task. While extensive research has been conducted on robot state estimation and geometric mapping, many existing methods remain tailored to specific hardware configurations or problem setups, limiting their scalability and applicability to future, more diverse scenarios. This thesis explores new directions for state and motion estimation, mapping, and localization, rendering them more robust, more generalizable, and less dependent on handcrafted tuning or problem-specific engineering.
The work is structured into two main parts:
(i) The first part focuses on learning-based LiDAR odometry and mapping, a core capability in field robotics, as demonstrated during the DARPA Subterranean Challenge. First, an end-to-end self-supervised odometry framework is proposed that jointly optimizes neural network parameters and objective functions without requiring ground-truth labels (a minimal sketch of such a self-supervised objective follows after this overview). A follow-up work then introduces a supervised learning approach, built on simulated data and auto-labeling techniques, to automate degeneracy detection in LiDAR odometry, traditionally a manual and sensitive step.
(ii) The second part addresses modular sensor fusion using factor graph optimization (illustrated in the second sketch below). First, a dual-graph framework is developed for outdoor robotics, capable of handling the intermittent GNSS availability typical of construction environments. This is extended in Holistic Fusion, a generalized framework that supports multiple absolute measurements in different, drifting reference frames and aligns them dynamically during optimization. The resulting frameworks serve as the primary sensor fusion solution on multiple real-world robotic systems.
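To make the self-supervised idea from part (i) concrete, the following minimal sketch trains a toy network on a point-to-plane alignment residual between consecutive scans, so that no ground-truth poses are required anywhere. The network architecture, data, and normals here are placeholder assumptions chosen for brevity, not the actual method of this thesis.

```python
# Minimal sketch of self-supervised LiDAR odometry training.
# All shapes, the toy MLP, and the random "scans" are assumptions
# for illustration; they do not reflect the thesis' implementation.
import torch
import torch.nn.functional as F

class OdometryNet(torch.nn.Module):
    """Toy regressor: two flattened scans -> 6-DoF twist [t, w]."""
    def __init__(self, n_points):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * 3 * n_points, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 6))

    def forward(self, src, tgt):
        return self.mlp(torch.cat([src.flatten(), tgt.flatten()]))

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ p == w x p.
    zero = torch.zeros((), dtype=w.dtype)
    return torch.stack([
        torch.stack([zero, -w[2], w[1]]),
        torch.stack([w[2], zero, -w[0]]),
        torch.stack([-w[1], w[0], zero])])

n = 256
net = OdometryNet(n)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-ins for two consecutive scans and the target's surface normals.
src = torch.randn(n, 3)
tgt = src + torch.tensor([0.1, 0.0, 0.0])
normals = F.normalize(torch.randn(n, 3), dim=1)

for _ in range(100):
    opt.zero_grad()
    xi = net(src, tgt)
    R = torch.eye(3) + skew(xi[3:])   # first-order rotation update
    warped = src @ R.T + xi[:3]
    # Point-to-plane residual acts as the self-supervised objective:
    # no ground-truth poses enter the loss.
    loss = (((warped - tgt) * normals).sum(dim=1) ** 2).mean()
    loss.backward()
    opt.step()
```

The key point is that the alignment residual itself supervises the network, which is what allows training without labeled poses.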
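Similarly, the factor-graph fusion of part (ii) can be illustrated with GTSAM's Python bindings: odometry enters as relative between-factors, while GNSS positions are added only intermittently as unary factors. All keys, noise values, and measurements below are assumed for illustration and do not reproduce the dual-graph or Holistic Fusion designs.

```python
# Minimal factor-graph fusion sketch: odometry between-factors plus
# intermittent GNSS unary factors. Illustrative values only.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))  # rotation, translation
gnss_noise = gtsam.noiseModel.Isotropic.Sigma(3, 0.5)

# Anchor the first pose with a tight prior.
graph.add(gtsam.PriorFactorPose3(
    X(0), gtsam.Pose3(), gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
initial.insert(X(0), gtsam.Pose3())

step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(1, 10):
    # Relative odometry constraint between consecutive poses.
    graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), step, odom_noise))
    initial.insert(X(i), gtsam.Pose3(
        gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0)))
    if i % 3 == 0:  # GNSS arrives only intermittently
        graph.add(gtsam.GPSFactor(
            X(i), gtsam.Point3(float(i), 0.1, 0.0), gnss_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(9)))
```

Because absolute measurements are simply additional factors, the graph degrades gracefully when GNSS drops out, which is the property the dual-graph and Holistic Fusion frameworks build upon.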
These two parts are presented across four chapters, each consisting of content that has been published in or submitted to top robotics conferences and journals. Together, they constitute the main approaches of this thesis, responding to the problems and motivation given in the introduction chapter.
The introduction first highlights the state and motion estimation requirements for achieving robot autonomy. It then gives a high-level overview of state and motion estimation in robotics and the research conducted in the past, followed by concrete objectives for the approach chapters. The main contributions are summarized at the end of the introduction, with each work placed in the context of the overarching problem formulation of this thesis.
After the approach chapters, an overview of other works conducted as part of this thesis is given, before this multi-year research is concluded by summarizing the achievements and limitations of the presented techniques.
Lessons learned are then presented, drawn both directly from the research conducted and more generally from research and engineering practice in robot motion estimation. Building on these lessons and limitations, we propose directions for future research, hoping to help other robotics researchers pick new, exciting directions that make systems even more robust and versatile for future requirements on robot state and motion estimation.
All methods presented in this thesis are validated on real-world datasets, demonstrating increased robustness, flexibility, and generality of perception and state estimation systems in unstructured and challenging environments.