Autonomous Mobile Robot for Human Detection and Red Object Tracking (aka “The Bull”)
Nov. 2025 - Dec. 2025
An autonomous mobile robot that detects humans, recognizes red objects, and follows a target in real time using stereo vision and onboard AI.
Demo Video
Project Overview
An autonomous mobile robot that detects humans using onboard neural networks, verifies the presence of red color, and follows the target using 3D spatial perception and proportional motion control.
System Architecture
- Perception – OAK-D Pro (DepthAI)
  - Runs a neural network onboard to detect humans, generate 2D bounding boxes, and compute full 3D (X, Y, Z) positions using stereo depth.
- Color Validation – OpenCV
  - Converts RGB frames to HSV and applies two red hue ranges (red wraps around hue 0) to compute a red pixel ratio; triggers tracking only when the red ratio exceeds a 10% threshold.
- ROS2 Integration – Topic-Based Pipeline
  - Publishes spatial detections to /oak/nn/spatial_detections, filters for confident human detections (≥65%), and outputs validated targets to a custom /matador topic.
- Motion Control – Proportional Controller
  - Transforms the detected pose into the base_link frame using tf2, computes distance and heading error, and generates clamped linear/angular velocity commands.
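The filter-then-follow logic above can be sketched in plain Python. The 65% confidence threshold comes from the pipeline description; the gains, velocity clamps, deadzones, and standoff distance below are illustrative placeholders (the tuned values are not in this write-up), and `follow_command` is a hypothetical helper name.

```python
import math

# Illustrative values only -- the project's tuned numbers are not listed here.
KP_LIN, KP_ANG = 0.5, 1.2           # proportional gains
MAX_LIN, MAX_ANG = 0.4, 1.0         # velocity clamps (m/s, rad/s)
STOP_DIST = 0.6                     # desired standoff distance (m)
DEADZONE_DIST, DEADZONE_ANG = 0.05, 0.05  # ignore tiny errors for stability

def clamp(v, limit):
    return max(-limit, min(limit, v))

def follow_command(x, y, confidence, min_conf=0.65):
    """Given a target position already transformed into base_link
    (x forward, y left per ROS convention), return (linear, angular)
    velocity commands. Stops when the detection is not confident enough."""
    if confidence < min_conf:
        return 0.0, 0.0
    distance = math.hypot(x, y)
    heading_error = math.atan2(y, x)        # angle off the forward axis
    dist_error = distance - STOP_DIST       # positive -> target too far
    linear = 0.0 if abs(dist_error) < DEADZONE_DIST else clamp(KP_LIN * dist_error, MAX_LIN)
    angular = 0.0 if abs(heading_error) < DEADZONE_ANG else clamp(KP_ANG * heading_error, MAX_ANG)
    return linear, angular
```

In the real node these commands would be published as a `geometry_msgs/Twist`; the deadzones keep the robot from oscillating around small residual errors.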
What I Did
- Integrated the OAK-D camera with ROS2 and implemented detection logic for tracking humans
- Implemented coordinate frame transformations (tf2) and converted 3D pose data into velocity commands
- Developed red color recognition with OpenCV by filtering in HSV space and thresholding the red pixel ratio
- Tuned the proportional controller, including velocity clamping and deadzones for stability
Possible Improvements
- Train or fine-tune a model capable of recognizing partial human features (e.g., legs) to improve detection at low angles
- Implement obstacle avoidance using LiDAR and path planning (e.g., A*) instead of assuming a clear path
- Improve color detection with spatial filtering (connected components or bounding-box-based color checks) instead of global pixel ratio