Bridging the gap between
Mind & Senses
for Robotics.

I'm a Master's student in CS at CU Boulder, advised by Prof. Bradley Hayes and Prof. Alessandro Roncone. I build Vision-Language frameworks that leverage human gaze to enable adaptive robotic task planning in shared autonomy settings.

Previously a Research Assistant at IISc Bangalore (with Prof. Pradipta Biswas), where I developed gaze-tracking systems for automotive HUDs and applied Inverse RL to human intent prediction. Published at ACM THRI · ACM IUI · IEEE ICRA.

Gyanig Kumar

From the Lab.

News & Publications

Investigating IRL during Rapid Aiming Movement in XR and HRI

ACM THRI, 2025
Proprio · Multimodal · HRI

Explores how Inverse Reinforcement Learning can improve a system's understanding of human intent during rapid task execution in both virtual and collaborative robotic settings.

Comparing CV models for low-resource datasets to develop an MR assembly assistant

Discover Robotics, 2025
Vision · Learning

Evaluates the efficiency of object-detection computer vision pipelines on limited datasets for deploying mixed-reality manual assembly assistants on head-mounted displays.

Multimodal Target Prediction for Rapid Human-Robot Interaction

29th ACM IUI, 2024
Gaze · Multimodal

Combines explicit gaze cues with implicit behavioral signals to improve prediction accuracy and enable intuitive collaboration in rapid pick-and-place tasks.

Enhanced HRC with Intent Prediction using Deep-IRL

IEEE ICRA, 2024
Vision · Learning

Demonstrates how learning from human demonstrations via Deep Inverse Reinforcement Learning enhances a robot's understanding of human preferences.

Image Translation GAN Models to Improve Object Detection in Low-Resource Domains

ICVTTS, 2024
Vision · Learning

Explores generative adversarial networks that synthesize artificial data to bridge the reality gap and address data scarcity in domains where certain scenarios are underrepresented.

Augmented reality and deep-learning-based system for assisting assembly

JMUI, 2024
Vision · Mixed Reality

Fuses contextual augmentations with computer-vision inferences to provide interactive, spatially anchored guidance during complex industrial manual assembly workflows.

Efficient Interaction with Automotive HUDs using Appearance-based Gaze

14th AutomotiveUI, 2022
Gaze · Vision

Evaluates how appearance-based gaze tracking can enable safer and more intuitive control of HUD elements while minimizing driver distraction.

Beyond the Lab.

Music & Photography

When I'm not writing papers or debugging ROS nodes, I spend my time exploring soundscapes, live music, and the Rocky Mountains.
