Train a cone detector for Formula Student Driverless.
I am building this at the FS Driverless team at Linköping University. I maintain it because useful tooling should not stay trapped inside one team.
This is a small training pipeline around Ultralytics YOLO (for now). It downloads and preprocesses FSOCO out of the box. If your team already has a YOLO dataset, use that instead.
No notebooks. No clickops. Run the command and train the model.
- trains a YOLO cone detector
- logs metrics and prediction images during training
- exports ONNX after training
- ships with an FSOCO pipeline so you can get a baseline fast
- uses Hydra configs, so most changes are one command-line override
- Python 3.12
- uv
Install dependencies:

```bash
uv sync
```

Run a small sanity-check training job:

```bash
uv run -m core.train
```

That uses FSOCO in debug mode. It is meant to prove the pipeline works.
Artifacts go to `outputs/<run_name>/`.
Weights end up in `outputs/<run_name>/ultralytics_files/weights/`.
If ONNX export is enabled, exported models are written after training finishes.
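To smoke-test an exported model, here is a minimal sketch using onnxruntime. The ONNX filename and location depend on your run and export settings, so treat the path below as a placeholder:

```python
# Minimal ONNX smoke test. Assumes onnxruntime is installed; adjust the
# path to wherever your export actually landed.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("outputs/<run_name>/ultralytics_files/weights/best.onnx")
input_name = session.get_inputs()[0].name

# YOLO exports typically take a (1, 3, H, W) float32 tensor; 640 is the default imgsz.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```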
The default dataset config is in `configs/dataset/fsoco.yaml`.
For a full FSOCO run, set:

```yaml
debug_mode: false
```

You will probably also want to change the training knobs in `configs/trainer/ultralytics.yaml`:

```yaml
args:
  epochs: 100
  imgsz: 640
  batch: 16
```

You can also override them from the command line:
```bash
uv run -m core.train trainer.args.epochs=100 trainer.args.batch=32 model.weights=yolo11s.pt
```

This is Hydra. The command line is the UI.
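Overrides like this work because the entrypoint is a plain Hydra app. The real one is `core/train.py`; a minimal sketch of the pattern (the body here is illustrative):

```python
# Sketch of a Hydra entrypoint; the real one is core/train.py.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base=None, config_path="configs", config_name="config")
def main(cfg: DictConfig) -> None:
    # Every key in the composed config is overridable from the CLI,
    # e.g. trainer.args.epochs=100 rewrites cfg.trainer.args.epochs.
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```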
If your dataset is already in YOLO format, you do not need to touch the code.
Put your data in this layout:

```
data/myteam/preprocessed/
  dataset.yaml
  images/train/
  images/val/
  labels/train/
  labels/val/
```
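If your labels and images currently live in one flat folder, a one-off split script is usually all it takes to produce this layout. A sketch, assuming raw images in `data/myteam/raw/images/` with matching YOLO `.txt` labels in `data/myteam/raw/labels/` (both paths are assumptions):

```python
# One-off train/val split into the layout above. Assumed source layout:
# all images in raw/images/, matching YOLO .txt labels in raw/labels/.
import random
import shutil
from pathlib import Path

raw = Path("data/myteam/raw")
out = Path("data/myteam/preprocessed")
images = sorted((raw / "images").glob("*.jpg"))
random.seed(0)
random.shuffle(images)

split = int(0.9 * len(images))  # 90/10 split, adjust to taste
for subset, subset_images in [("train", images[:split]), ("val", images[split:])]:
    (out / "images" / subset).mkdir(parents=True, exist_ok=True)
    (out / "labels" / subset).mkdir(parents=True, exist_ok=True)
    for img in subset_images:
        shutil.copy(img, out / "images" / subset / img.name)
        label = raw / "labels" / (img.stem + ".txt")
        if label.exists():
            shutil.copy(label, out / "labels" / subset / label.name)
```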
A minimal `dataset.yaml` looks like this:

```yaml
path: data/myteam/preprocessed
train: images/train
val: images/val
names:
  0: blue_cone
  1: yellow_cone
  2: orange_cone
  3: large_orange_cone
  4: unknown_cone
```

Then point the pipeline at it:
```bash
uv run -m core.train dataset.preprocessed_dir=data/myteam/preprocessed
```

That works because the pipeline skips download and preprocessing when `preprocessed_dir` already contains:

- `dataset.yaml`
- `images/train/`
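Before kicking off a long run, you can confirm the skip condition will trigger with a quick check (hypothetical helper, not part of the repo):

```python
# Hypothetical pre-flight check: verifies the files the pipeline looks for.
from pathlib import Path

def looks_preprocessed(preprocessed_dir: str) -> bool:
    root = Path(preprocessed_dir)
    return (root / "dataset.yaml").is_file() and (root / "images" / "train").is_dir()

print(looks_preprocessed("data/myteam/preprocessed"))
```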
If your classes are different, change `class_map` and `class_colors` in `configs/dataset/fsoco.yaml` so they match your `dataset.yaml`.
Keep the mapping consistent. The repo is not magic.
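A few lines of Python catch a mismatch early. This sketch assumes `class_map` maps class ids to names the same way `names` does in `dataset.yaml`; check `configs/dataset/fsoco.yaml` for the exact shape before trusting it:

```python
# Hedged sketch: compare class names in dataset.yaml against class_map in the
# dataset config. Assumes class_map maps ids to names; adjust if the repo
# stores it the other way around.
import yaml

with open("data/myteam/preprocessed/dataset.yaml") as f:
    names = yaml.safe_load(f)["names"]          # {0: "blue_cone", ...}
with open("configs/dataset/fsoco.yaml") as f:
    class_map = yaml.safe_load(f)["class_map"]  # assumed shape

mismatches = {k: v for k, v in names.items() if class_map.get(k) != v}
print("mismatches:", mismatches or "none")
```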
If you want to keep your team config separate, copy `configs/dataset/fsoco.yaml` to a new file and change:

- `preprocessed_dir`
- `class_map`
- `class_colors`
- `debug_mode`
The default config uses five classes:
- blue_cone
- yellow_cone
- orange_cone
- large_orange_cone
- unknown_cone
If your team uses three classes, use three classes.
Just keep `dataset.yaml`, `class_map`, and labels aligned.
WandB is enabled by default in `configs/config.yaml`.
If you want local-only logs, set `WANDB_MODE=offline` before training (PowerShell shown):

```powershell
$env:WANDB_MODE='offline'
uv run -m core.train
```

MLflow is optional.
Connection details live in `.env`.
Start from `.env.example`:

```
MLFLOW_TRACKING_URI=
MLFLOW_TRACKING_TOKEN=
MLFLOW_TRACKING_USERNAME=
MLFLOW_TRACKING_PASSWORD=
```

If you enable the GitLab MLflow logger, the tracking URI is read from `MLFLOW_TRACKING_URI`.
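To verify the values actually resolve, `python-dotenv` reads the same file. The repo may wire this up differently internally; this is just a quick manual check:

```python
# Quick check that .env values resolve; uses python-dotenv.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory
print(os.environ.get("MLFLOW_TRACKING_URI"))
```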
After a run, look here:

- `outputs/<run_name>/train.log`
- `outputs/<run_name>/ultralytics_files/weights/best.pt`
- `outputs/<run_name>/ultralytics_files/weights/last.pt`
The WandB logger also logs side-by-side ground truth vs prediction images from validation samples.
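The real implementation lives in `core/loggers/`; for orientation, the core of the idea in the plain wandb API looks roughly like this (both images are placeholders for rendered validation samples):

```python
# Minimal sketch of side-by-side image logging with the wandb API.
# gt_image and pred_image stand in for rendered validation samples.
import numpy as np
import wandb

run = wandb.init(project="cone-detector", mode="offline")
gt_image = np.zeros((640, 640, 3), dtype=np.uint8)    # placeholder ground truth render
pred_image = np.zeros((640, 640, 3), dtype=np.uint8)  # placeholder prediction render
wandb.log({
    "val/examples": [
        wandb.Image(gt_image, caption="ground truth"),
        wandb.Image(pred_image, caption="prediction"),
    ]
})
run.finish()
```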
```
configs/          Hydra configs
configs/dataset/  dataset configs
configs/trainer/  training configs
configs/logger/   logging configs
core/data/        dataset logic
core/trainers/    trainer backends
core/loggers/     logger integrations
core/metrics/     metric extraction
core/train.py     training entrypoint
```
If your data is not already in YOLO format, copy `core/data/fsoco.py` and make your own dataset adapter.
That is the place to handle download, conversion, cropping, relabeling, whatever your data needs.
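The adapter is done when `data/myteam/preprocessed/` matches the YOLO layout shown earlier. As a starting point, here is a hedged sketch of the conversion step, pixel-space boxes in, normalized YOLO label lines out; mirror `core/data/fsoco.py` for the real interface, the names here are illustrative:

```python
# Hedged sketch of a dataset adapter's conversion step: pixel-space boxes in,
# normalized YOLO label lines out. Function names are illustrative, not the
# repo's actual interface.
from pathlib import Path

def to_yolo_line(class_id: int, x1: float, y1: float, x2: float, y2: float,
                 img_w: int, img_h: int) -> str:
    # YOLO label format: "class x_center y_center width height", normalized to [0, 1].
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

def write_labels(boxes: list[tuple[int, float, float, float, float]],
                 img_w: int, img_h: int, out_path: Path) -> None:
    # One label file per image, one line per box.
    lines = [to_yolo_line(c, x1, y1, x2, y2, img_w, img_h)
             for c, x1, y1, x2, y2 in boxes]
    out_path.write_text("\n".join(lines) + "\n")

# Example: one blue cone (class 0) in a 1280x960 image.
write_labels([(0, 600.0, 400.0, 680.0, 520.0)], 1280, 960, Path("000001.txt"))
```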
- install with `uv sync`
- run `uv run -m core.train` to make sure the pipeline works
- point `dataset.preprocessed_dir` at your YOLO dataset
- align `class_map` with your labels
- train

That is it.
