The Challenge
This project replaced a reinforcement learning (RL) black box with classical control, building a fully transparent autonomous driving stack. The goal was to navigate a scaled track using only front-facing camera data, with all processing done in real time on the vehicle's onboard compute.
System Design
The Software Pipeline
- Perception Node: Raw camera frames are pre-processed (Gaussian blur, color filtering) and then warped via Inverse Perspective Mapping (IPM) into a top-down view of the lanes.
- Path Planning: A sliding-window algorithm extracts the lane center line and computes the goal waypoint relative to the vehicle's current pose.
- Control Node: A cascaded Proportional-Integral-Derivative (PID) controller handles steering corrections. I focused on tuning the derivative gain (Kd) to suppress oscillations during high-speed corner exits.
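The IPM warp in the perception node amounts to a planar homography between the camera image and the road plane. A minimal sketch using a direct linear transform is below; the four corner correspondences are illustrative placeholders, not the project's actual calibration (which would come from the camera's mounting geometry).

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The smallest singular vector of A gives H up to scale.
    _, _, vt = np.linalg.svd(np.array(A))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply homography H to one pixel coordinate (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative calibration: a road-plane trapezoid in the image maps
# to a rectangle in the top-down view.
src = [(200, 480), (440, 480), (380, 300), (260, 300)]   # image corners
dst = [(100, 400), (300, 400), (300, 0), (100, 0)]       # top-down corners
H = homography_from_points(src, dst)
```

In practice the same matrix would be handed to an image-warping routine (e.g. OpenCV's `warpPerspective`) once per frame, so the DLT solve happens only at startup.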
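The sliding-window center-line extraction can be sketched as follows, operating on a binary top-down lane mask from the IPM step. The window count and search margin here are illustrative defaults, not the project's tuned values.

```python
import numpy as np

def sliding_window_centerline(mask, n_windows=8, margin=30):
    """Trace the lane center upward through a binary top-down mask.

    Seeds at the column with the most activated pixels in the bottom
    half, then re-centers each window on the mean x of the pixels it
    contains. Returns one (x, y) center per window, bottom to top.
    """
    h, w = mask.shape
    histogram = mask[h // 2:].sum(axis=0)       # column activation counts
    x = int(np.argmax(histogram))               # starting column
    win_h = h // n_windows
    centers = []
    for i in range(n_windows):
        y_hi = h - i * win_h
        y_lo = y_hi - win_h
        x_lo = max(x - margin, 0)
        ys, xs = np.nonzero(mask[y_lo:y_hi, x_lo:x + margin])
        if len(xs) > 0:
            x = int(xs.mean()) + x_lo           # re-center on pixel mass
        centers.append((x, (y_lo + y_hi) // 2))
    return centers
```

The goal waypoint would then be taken from the far end of the traced centers and converted from pixel coordinates into the vehicle frame.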
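The inner steering loop can be sketched as a discrete PID update; the gains below are placeholders, and the full stack cascades two such loops rather than running this one in isolation.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # No history on the first call, so the derivative term starts at zero.
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The Kd term opposes the rate of change of the cross-track error, which is what damps the oscillations mentioned above: as the vehicle swings back toward the center line, the derivative contribution brakes the correction before it overshoots.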
Key Performance Metrics
The system completed 98% of laps in simulation and transferred to the physical hardware with a minimal sim-to-real gap. Replacing the default RL inference with a deterministic ROS 2 pipeline cut steering latency by 40%.