This project demonstrates the use of YOLOv8 for detecting American Sign Language (ASL) gestures. It includes a Flask application that allows you to test the ASL detection model using image uploads or live webcam feeds. You can also retrain the YOLOv8 model with a custom dataset.
The `application` directory contains all the components needed to run the Flask application for ASL detection.
```
application/
├── app.py               # The main Flask application file.
├── requirements.txt     # List of dependencies required for the project.
├── templates/           # Directory containing HTML templates.
│   └── index.html       # Main template for the web interface.
├── static/              # Directory for static files like uploaded images.
│   ├── styles.css       # CSS file for styling the web interface.
│   └── uploads/         # Directory where uploaded images are stored.
├── models/              # Directory containing the trained YOLOv8 model.
│   └── yolov8n.pt       # The YOLOv8 model file used for ASL detection.
├── train/               # Directory containing the training script.
│   └── train.py         # Script to train the YOLOv8 model on a new dataset.
└── README.md            # This documentation file.
```
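The `static/uploads/` directory above is where the app stores images before running detection. A minimal sketch of the validation-and-save step such an app needs — `save_upload` and `ALLOWED_EXTENSIONS` are illustrative names, not the actual `app.py` API:

```python
import os
import uuid

# Illustrative whitelist -- the real app may accept other formats.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def save_upload(filename: str, data: bytes,
                upload_dir: str = os.path.join("static", "uploads")) -> str:
    """Validate the file extension and store the image under a collision-free name."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unsupported file type: {ext!r}")
    os.makedirs(upload_dir, exist_ok=True)
    dest = os.path.join(upload_dir, f"{uuid.uuid4().hex}{ext}")
    with open(dest, "wb") as fh:
        fh.write(data)
    return dest
```

Saving under a generated name avoids collisions when two users upload files with the same name.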
The training directory is used for training the YOLOv8 model with your own dataset.
```
training/
├── dataset/        # All dataset files: train, valid, test splits, and data.yaml.
│   ├── train/      # Training images.
│   ├── valid/      # Validation images.
│   ├── test/       # Test images.
│   └── data.yaml   # Dataset configuration file.
├── runs/           # Directory to store training runs and results.
│   └── exp/        # Example of a training run folder with results.
├── test.jpg        # An example image used for testing.
├── train.ipynb     # Jupyter notebook for training and experimenting with the YOLOv8 model.
└── yolov8n.pt      # Pre-trained YOLOv8 model used for initializing training.
```
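`train.py` and `train.ipynb` wrap the Ultralytics training API. A minimal sketch, assuming the `ultralytics` package is installed — the paths and epoch count below are illustrative examples, not the project's actual settings:

```python
# Sketch of a train.py -- values are examples, tune them for your dataset.
DATA_CONFIG = "dataset/data.yaml"   # dataset configuration file
BASE_WEIGHTS = "yolov8n.pt"         # pre-trained weights used to initialize training
EPOCHS = 50                         # illustrative; more epochs for larger datasets

def main():
    # Imported inside main() so the sketch can be read without ultralytics installed.
    from ultralytics import YOLO
    model = YOLO(BASE_WEIGHTS)                          # load pre-trained weights
    model.train(data=DATA_CONFIG, epochs=EPOCHS, imgsz=640)  # fine-tune on ASL data

if __name__ == "__main__":
    main()
```

Results land under `runs/`, as in the `runs/exp/` example folder above.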
Before you begin, ensure you have the following installed on your system:
- Python 3.8+
- Flask
- PyTorch
- YOLOv8 dependencies
You can install all necessary packages by navigating to the `application` directory and running:

```
cd application
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

(The activation command above is for Windows; on macOS/Linux, use `source venv/bin/activate` instead.)
The dataset used for training the YOLOv8 model in this project can be downloaded from the following link:
If you want to use your own dataset for training:

1. Organize Your Dataset:
   - Create subdirectories for `train`, `valid`, and `test` inside the `training/dataset` directory.
   - Place your training images in the `train` directory, validation images in the `valid` directory, and test images in the `test` directory.
2. Update `data.yaml`:
   - Modify the `data.yaml` file in the `training/dataset` directory to reflect the paths to your dataset.
   - Ensure the `train`, `val`, and `test` fields in `data.yaml` correctly point to the respective directories.
3. Proceed with Training:
   - Once your dataset is set up and the `data.yaml` file is configured, you can proceed with training the YOLOv8 model as described in the Training the Model section.
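For reference, a YOLOv8 `data.yaml` generally looks like the sketch below; the paths and class names are placeholders to replace with your own labels:

```yaml
# Paths are resolved relative to this file's location; adjust to your layout.
train: train/images
val: valid/images
test: test/images

# Number of classes and their names -- placeholders, not this project's labels.
nc: 3
names: ["A", "B", "C"]
```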
To test ASL detection using the pre-trained YOLOv8 model:
- Start the Flask Application: Navigate to the `application` directory and run the Flask server:

  ```
  python app.py
  ```
- Access the Web Interface: Open your web browser and go to http://127.0.0.1:5000/. From here, you can upload an image or use the live webcam feature to detect ASL gestures.
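If you prefer to bypass the web interface, the same model can be called directly through the Ultralytics API. A sketch — `detect` is an illustrative helper, not part of `app.py`, and the weights path mirrors the layout above:

```python
def detect(image_path: str, weights: str = "models/yolov8n.pt"):
    """Run YOLOv8 inference on one image and return its detections."""
    # Imported here so the sketch can be read without ultralytics installed.
    from ultralytics import YOLO
    model = YOLO(weights)
    results = model(image_path)   # list with one Results object per input image
    return results[0].boxes      # bounding boxes, class ids, and confidences

# Example (requires ultralytics and the model file):
# boxes = detect("training/test.jpg")
# print(len(boxes), "detections")
```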
- Roboflow, for providing a platform for dataset annotation.
- Ultralytics, for their pre-trained YOLOv8 weights.
This project is licensed under the MIT License. See the LICENSE file for details.
Feel free to contribute to this project by opening issues or submitting pull requests. Happy coding!