I am interested in developing algorithms for 3D visual perception. This includes 3D reconstruction of objects in a scene and visual perception in a dynamic, moving world. While such processing is effortless for the human visual system, even sophisticated computer vision algorithms come nowhere close in performance and are often unable to run online.
I am working on the AirCap project, whose goal is a 3D shape and motion capture system for outdoor scenarios using multiple UAVs. Each UAV is equipped with an RGB camera and onboard computation to process the camera input. I am interested in developing shape and pose estimation algorithms that can run on each UAV's computation unit with minimal inter-UAV communication. The algorithms should take advantage of the multi-view RGB input and provide feedback to each UAV's flight controller for optimal formation planning. I am also working in the software workshop, where I am porting the derivative calculation of the SMPL model to OpenCL to harness parallel GPU computing.
I completed my Master's in Neural Information Processing at the International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tuebingen. Before that, I worked at Samsung R&D Institute, Bangalore, India, for two years. I received my Bachelor's in Electronics and Communication Engineering from IIT(BHU) Varanasi, India.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.