Gyanig Kumar
I'm a Master's student in Computer Science at the University of Colorado Boulder, focusing on Human-Robot Interaction and Computer Vision research. I am advised by Prof. Bradley Hayes and Prof. Alessandro Roncone, working on in-context learning, VLM frameworks, and multimodal interaction for human intent recognition.
Currently, I am developing a Vision-Language framework that uses human gaze as an explicit cue to enable adaptive and efficient robotic task planning in shared-autonomy settings. In the past year, I worked on Active Preference Learning (APReL) to improve reward functions through human-in-the-loop feedback. I have worked with a range of robotic platforms, including the 7-DoF Sawyer manipulator and the Unitree Go1 quadruped.
I began my research journey as a Research Assistant at the Indian Institute of Science (IISc) Bangalore under the supervision of Prof. Pradipta Biswas. There, I developed gaze-tracking systems for automotive heads-up displays and applied Inverse Reinforcement Learning (IRL) to improve user intent prediction in robotic tasks such as pick-and-place and human-robot collaboration.
Over the years, I have developed proficiency in gaze estimation, object detection and tracking, collaborative robotics, self-supervised learning, XR-device development, and reinforcement learning. I have published papers in top-tier venues including ACM THRI, ACM IUI, and IEEE ICRA.
Email /
CV /
Scholar /
Github /
LinkedIn
I am applying for full-time PhD positions in the field of Human-Robot Interaction. Please reach out if our research interests align.