Hand Sign Detection Model
This project represents a complete ML engineering journey, evolving from experimental hand detection to a production-ready system. Built on YOLOv8 architecture and trained on a custom dataset of 1,740 manually curated images, the model achieves 96% accuracy in distinguishing between hands, arms, and non-hand objects. The system features real-time inference at 30+ FPS, seamless web integration via Vercel AI SDK, and deployment on HuggingFace Spaces for universal access.
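The three-class design (hand, arm, non-hand) matters at inference time, because arm detections exist mainly to be filtered out. A minimal post-processing sketch, assuming class ids 0/1/2 and a list-of-dicts detection format (both are illustrative assumptions, not the project's actual output schema):

```python
# Hypothetical detection post-processing for the three-class model.
# The class-id mapping and dict layout are assumptions for illustration.
CLASS_NAMES = {0: "hand", 1: "arm", 2: "not_hand"}

def keep_hands(detections, conf_threshold=0.5):
    """Return only confident 'hand' boxes; arms and non-hands are dropped."""
    return [d for d in detections
            if d["conf"] >= conf_threshold and CLASS_NAMES[d["cls"]] == "hand"]

raw = [
    {"cls": 0, "conf": 0.91, "box": (12, 30, 120, 160)},   # confident hand
    {"cls": 1, "conf": 0.88, "box": (10, 150, 140, 400)},  # arm -> dropped
    {"cls": 0, "conf": 0.42, "box": (300, 40, 360, 110)},  # low-confidence hand
]
hands = keep_hands(raw)
print(len(hands))  # → 1
```

Separating arms into their own class lets the threshold stay low for hands without arm regions leaking through as false positives.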
Project journey
ML-Visions Foundation
Started with the ML-Visions FinetuneWorkshop: Halloween Hand Detection project, a 15-minute workshop on fine-tuning YOLOv8 into a binary hand detector with automated dataset creation
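The workshop's fine-tuning step can be sketched with the ultralytics API. The checkpoint and data.yaml paths below are placeholders, and the call is wrapped in a function so nothing heavy runs until a dataset is actually in place:

```python
def finetune_hand_detector(data_yaml="data.yaml", epochs=50, imgsz=640):
    """Fine-tune a pretrained YOLOv8 checkpoint on a YOLO-format dataset.

    Requires the `ultralytics` package; all paths here are placeholders.
    """
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # pretrained nano weights as a starting point
    model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
    return model

# finetune_hand_detector("hand_sign/data.yaml")  # run once the dataset exists
```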
Pipeline Framework Development
Abstracted those learnings into the ML Training Pipeline project, a flexible machine learning training pipeline supporting multiple frameworks (YOLO, TensorFlow, PyTorch) with deployment targets including Hugging Face and RunPod
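One common way to structure a pipeline that supports multiple frameworks is a trainer registry keyed by framework name. This is a generic sketch of that pattern, not the ML Training Pipeline's actual code:

```python
# Generic trainer-registry sketch (illustrative; not the project's real API).
TRAINERS = {}

def register(name):
    """Decorator mapping a framework name to its trainer function."""
    def wrap(fn):
        TRAINERS[name] = fn
        return fn
    return wrap

@register("yolo")
def train_yolo(config):
    return f"trained YOLO on {config['data']}"

@register("pytorch")
def train_pytorch(config):
    return f"trained PyTorch model on {config['data']}"

def run(framework, config):
    if framework not in TRAINERS:
        raise ValueError(f"unsupported framework: {framework}")
    return TRAINERS[framework](config)

print(run("yolo", {"data": "hands.yaml"}))  # → trained YOLO on hands.yaml
```

The registry keeps framework-specific code behind a single entry point, so adding a new backend means registering one function rather than touching the dispatch logic.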
Project Initialization
Started hand-sign-detection from a cleaned ML pipeline template
Dataset Integration
Integrated 867 images from ml-visions, then expanded the dataset to 1,344 images
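Merging an existing image set into a new project, as in this step, usually amounts to copying image/label pairs while prefixing filenames to avoid collisions. A stdlib sketch under that assumption (the directory layout is hypothetical):

```python
import shutil
from pathlib import Path

def merge_yolo_images(src_images, dst_images, prefix):
    """Copy .jpg files from src into dst, prefixing names to avoid collisions.

    A matching labels/ directory of .txt files would be copied the same way.
    Returns the number of files copied.
    """
    dst = Path(dst_images)
    dst.mkdir(parents=True, exist_ok=True)
    copied = 0
    for img in sorted(Path(src_images).glob("*.jpg")):
        shutil.copy2(img, dst / f"{prefix}_{img.name}")
        copied += 1
    return copied

# merge_yolo_images("ml-visions/images", "datasets/hand-sign/images", "mlv")
```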
Three-Class Evolution
Added an arm class so the model distinguishes hands from arms more reliably
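In YOLO's dataset config, adding the arm class means extending the names mapping. A hypothetical data.yaml for the three-class setup (paths and exact class names are assumptions):

```yaml
# Hypothetical YOLO dataset config for the three-class model
path: datasets/hand-sign
train: images/train
val: images/val
names:
  0: hand
  1: arm
  2: not_hand
```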
Production Deployment
Deployed to HuggingFace Spaces with Vercel AI SDK integration for web clients
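Once on HuggingFace Spaces, the model can typically be queried from Python with gradio_client. The Space id and endpoint name below are placeholders, and the call is wrapped in a function since it needs network access:

```python
def detect_hands(image_path, space_id="user/hand-sign-detection"):
    """Query a deployed Gradio Space; `space_id` and `api_name` are placeholders.

    Requires the `gradio_client` package and network access.
    """
    from gradio_client import Client

    client = Client(space_id)
    return client.predict(image_path, api_name="/predict")

# detect_hands("frame.jpg")  # returns whatever the Space's endpoint emits
```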