
uwrov/noaa_ai_25


Download the bass video here

To prepare the "frames" directory on a Linux machine:

./split_mp4.sh {path to video} frames

If you are not on a Linux machine, create the "frames" directory manually and run:

ffmpeg -i "{path to video}" -q:v 1 "frames/frame_%06d.jpg"
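The `frame_%06d.jpg` pattern zero-pads frame numbers to six digits, which keeps frames in chronological order under a plain lexicographic sort. A small Python sketch of the same naming scheme, useful when iterating over the extracted frames later (the `frame_path` helper is illustrative, not part of the repo):

```python
from pathlib import Path

def frame_path(index: int, frames_dir: str = "frames") -> Path:
    """Build the path that ffmpeg's frame_%06d.jpg pattern produces
    for a 1-based frame index."""
    return Path(frames_dir) / f"frame_{index:06d}.jpg"

# Zero-padding means lexicographic order equals frame order.
paths = [frame_path(i) for i in (1, 2, 10, 100)]
assert sorted(p.as_posix() for p in paths) == [p.as_posix() for p in paths]
print(paths[0].as_posix())  # frames/frame_000001.jpg
```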

We recommend using a virtual environment to manage dependencies; we use Conda. The core dependencies are ultralytics for the YOLO model and optuna for hyperparameter optimization.

conda create -n noaa_ai python=3.10
conda activate noaa_ai
pip install ultralytics optuna

To start collecting data:

python label_yolo.py

We provide our 100 annotated frames in the bass_dataset directory, already split into training and validation sets.

To launch hyperparameter optimization:

python optimize_yolo.py
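The internals of `optimize_yolo.py` are not reproduced in this README, but the loop Optuna runs is: sample hyperparameters, train, score, keep the best. Since actually training YOLO is expensive, the sketch below swaps in a plain-stdlib random search with a toy objective standing in for training; in the real script the objective would train a YOLO model and return a validation metric such as mAP50. All names and ranges here are illustrative assumptions:

```python
import random

def objective(lr0: float, momentum: float) -> float:
    """Stand-in for train-then-validate. A real objective would fit a
    YOLO model with these values and return its validation score."""
    # Toy surface peaking at lr0=0.01, momentum=0.94 (purely illustrative).
    return -((lr0 - 0.01) ** 2) - ((momentum - 0.94) ** 2)

def random_search(n_trials: int = 50, seed: int = 0) -> dict:
    """Minimal random search over a log-uniform lr0 and a uniform momentum,
    mirroring the shape of an Optuna study's suggest/evaluate loop."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), {}
    for _ in range(n_trials):
        lr0 = 10 ** rng.uniform(-4, -1)      # log-uniform in [1e-4, 1e-1]
        momentum = rng.uniform(0.8, 0.99)
        score = objective(lr0, momentum)
        if score > best_score:
            best_score, best_params = score, {"lr0": lr0, "momentum": momentum}
    return best_params

print(random_search())
```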

To begin the fine-tuning process:

python finetune_yolo.py

If you ran your own hyperparameter optimization, make sure you have entered the best values in the hyperparams dictionary. We provide our best configuration by default.
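The hyperparams dictionary itself is not shown in this README; a plausible shape might look like the sketch below. The keys `lr0`, `momentum`, and `weight_decay` are real ultralytics `train()` arguments, but the values, the base checkpoint, and the dataset YAML path are illustrative assumptions:

```python
# Illustrative values only -- substitute the results of your own optimization run.
hyperparams = {
    "lr0": 0.01,            # initial learning rate
    "momentum": 0.937,      # SGD momentum
    "weight_decay": 0.0005, # optimizer weight decay
}

def finetune(data_yaml: str = "bass_dataset/data.yaml") -> None:
    """Sketch of the fine-tuning call; requires ultralytics to be installed."""
    from ultralytics import YOLO  # imported lazily so the dict above loads anywhere
    model = YOLO("yolov8n.pt")    # base checkpoint -- the repo's actual choice may differ
    model.train(data=data_yaml, epochs=100, **hyperparams)
```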

To evaluate the model and produce the output video with labels:

python evaluate_yolo.py

If you trained your own model, ensure MODEL_PATH points to that model. We provide our best-performing model weights as best.pt.
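The internals of `evaluate_yolo.py` are not shown here, but a standard ingredient of detection evaluation is intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch, assuming boxes in `(x1, y1, x2, y2)` pixel coordinates:

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # intersection 25 / union 175
```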

About

The code for our winning MATE ROV / NOAA Ocean Exploration AI and Video Challenge
