I am interested in understanding and modeling aspects of human behavior using the modalities of vision and language.
My current work is aimed at understanding embodied human behavior, e.g., semantic understanding of human movement and learning from natural interactions.
In the past, I worked on computational models of phenomena typical of human-to-human interaction, e.g., humor and narrative. I also worked on understanding human interactions with AI, with the goal of creating better human-AI teams.
My work spans the areas of AI, machine learning, computer vision, natural language processing, crowdsourcing, and cognitive science.
Our goal is to understand the principles of perception, action, and learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.