An end-to-end system for:
- Person detection (YOLOv8 / YOLOv9 / YOLOX alternative) trained or fine‑tuned on CrowdHuman + custom retail/bakery frames
- Multi-object tracking (ByteTrack) for persistent IDs
- Queue timing via a polygon ROI
- Real-time dashboard (Streamlit) with metrics & charts
- Optional heatmap & congestion alerts
```powershell
python -m venv .venv
# Windows PowerShell
.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt
```

CPU-only (optional):

```powershell
$env:CUDA_VISIBLE_DEVICES=""
```

```
project/
  dataset/
    detection/
      CrowdHuman/
        images/train  images/val
        labels/train  labels/val
      custom_bakery/
        images/train  images/val
        annotations/           (optional raw txt labels)
    tracking/
      MOT17/
      custom_bakery_vids/
        videos/
        frames/
  models/
  outputs/
  src/
    detection/
    tracking/
    analytics/
    dashboard/
  data/
  scripts/
```
- Download from: https://www.crowdhuman.org/download.html
- Convert annotations to YOLO format using `scripts/convert_crowdhuman_to_yolo.py`.
- Place images in:
  - `project/dataset/detection/CrowdHuman/images/train`
  - `project/dataset/detection/CrowdHuman/images/val`
- Place labels in:
  - `project/dataset/detection/CrowdHuman/labels/train`
  - `project/dataset/detection/CrowdHuman/labels/val`
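The core of the conversion can be sketched as below. This is a minimal illustration, not the actual `convert_crowdhuman_to_yolo.py`: it assumes the standard CrowdHuman `.odgt` layout (one JSON object per line with `gtboxes`, each carrying a full-body `fbox` of `[left, top, width, height]` in pixels) and that image dimensions are known.

```python
import json

def odgt_record_to_yolo(line: str, img_w: int, img_h: int) -> list:
    """Convert one CrowdHuman .odgt JSON line to YOLO label lines (class 0 = person)."""
    rec = json.loads(line)
    out = []
    for box in rec.get("gtboxes", []):
        if box.get("tag") != "person":
            continue  # skip ignore/mask regions
        x, y, w, h = box["fbox"]  # full-body box: left, top, width, height (pixels)
        cx = (x + w / 2) / img_w  # normalized box center x
        cy = (y + h / 2) / img_h  # normalized box center y
        out.append(f"0 {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}")
    return out
```

Each returned string is one line of the image's `.txt` label file in the YOLO `class cx cy w h` normalized format.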
- Put raw queue/counter videos in `project/dataset/tracking/custom_bakery_vids/videos`
- Extract frames:

```bash
python scripts/extract_frames.py --videos_dir project/dataset/tracking/custom_bakery_vids/videos \
  --out_dir project/dataset/tracking/custom_bakery_vids/frames --every_n 12
```

- Annotate a subset (person only) and place labeled images + .txt files into:
  - `project/dataset/detection/custom_bakery/images/train`
  - `project/dataset/detection/custom_bakery/images/val`
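The frame-extraction step boils down to decoding the video and keeping every n-th frame. A rough sketch (not the shipped `extract_frames.py`; file naming and flags are illustrative, and OpenCV is imported lazily so the pure helper has no dependencies):

```python
from pathlib import Path

def frames_to_keep(total_frames: int, every_n: int) -> list:
    """0-based indices of frames saved when keeping every n-th frame."""
    return list(range(0, total_frames, every_n))

def extract_frames(video_path: str, out_dir: str, every_n: int = 12) -> int:
    """Decode a video and save every n-th frame as JPEG; returns frames written."""
    import cv2  # imported lazily so frames_to_keep stays dependency-free
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of stream or decode error
        if idx % every_n == 0:
            name = f"{Path(video_path).stem}_{idx:06d}.jpg"
            cv2.imwrite(str(Path(out_dir) / name), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

With `--every_n 12` on a 30 fps video this keeps roughly 2.5 frames per second, which is usually plenty for annotation.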
Edit `data/custom.yaml` so paths match actual folders. Single class: `person`.
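For reference, a single-class data file in the Ultralytics format might look like the following; the paths are illustrative and should be adjusted to your actual folders:

```yaml
# Ultralytics-style dataset file: single "person" class
path: project/dataset/detection   # dataset root
train:
  - CrowdHuman/images/train
  - custom_bakery/images/train
val:
  - CrowdHuman/images/val
  - custom_bakery/images/val
names:
  0: person
```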
```bash
python project/dataset/detection/train_yolo.py \
  --data data/custom.yaml \
  --model yolov8s.pt \
  --epochs 50 \
  --imgsz 640 \
  --batch 16 \
  --project runs/detect \
  --name crowdhuman_bakery \
  --device cpu
```

Copy the result:

```shell
copy runs\detect\crowdhuman_bakery\weights\best.pt project\models\detector.pt
```

Draw the queue ROI:

```shell
python project/src/tracking/draw_roi.py --video project/dataset/tracking/custom_bakery_vids/videos/your_video.mp4 --out project/src/tracking/roi.json
```

Run tracking and timing:

```shell
python project/src/tracking/track_and_time.py ^
  --model project/models/detector.pt ^
  --source project/dataset/tracking/custom_bakery_vids/videos/your_video.mp4 ^
  --roi project/src/tracking/roi.json ^
  --out_dir project/outputs/session1 ^
  --conf 0.3 ^
  --device cpu ^
  --heatmap False
```

Outputs: `analytics.csv`, `timers.jsonl`, `annotated.mp4`.
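Once a session has run, the per-track timers can be summarized directly. The sketch below assumes a hypothetical `timers.jsonl` schema of one JSON object per line with `enter_ts`/`exit_ts` timestamps in seconds; the real field names produced by `track_and_time.py` may differ:

```python
import json

def wait_stats(jsonl_text: str) -> dict:
    """Average and max wait computed from timers.jsonl text.

    Assumes (hypothetically) one JSON object per line with
    "enter_ts" and "exit_ts" in seconds; open tracks have exit_ts null.
    """
    waits = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("exit_ts") is not None:  # skip people still in the queue
            waits.append(rec["exit_ts"] - rec["enter_ts"])
    if not waits:
        return {"count": 0, "avg_wait": 0.0, "max_wait": 0.0}
    return {"count": len(waits), "avg_wait": sum(waits) / len(waits), "max_wait": max(waits)}
```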
```shell
streamlit run project/src/dashboard/app.py
```

Enter model path, video source, ROI file, click Start.
```shell
python project/src/analytics/aggregate_metrics.py ^
  --timers project/outputs/session1/timers.jsonl ^
  --analytics_csv project/outputs/session1/analytics.csv ^
  --out project/outputs/session1/summary.csv
```

- Use `yolov8n.pt` for faster inference.
- Reduce `--imgsz` to 512 or 416.
- Use `--skip_frames 1` to process every other frame.
- Consider exporting to ONNX/OpenVINO for further speed:

```shell
yolo export model=project/models/detector.pt format=onnx
```
Capture:
- FPS & latency
- Queue length trend chart
- Average & max waiting times
- Annotated video sample
- Business value slide (staff allocation, reduced wait)
| Issue | Fix |
|---|---|
| Low FPS | Smaller model (yolov8n), smaller image size |
| Missed detections | Lower --conf to 0.25 or fine-tune more epochs |
| ID flicker | Increase tracker buffer or lower track_thresh in ByteTrack args |
| No labels found | Verify .txt filename matches image filename |
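The ID-flicker row above amounts to loosening the tracker's thresholds. A sketch of what that tuning might look like, using hypothetical field names (the real argument names depend on the ByteTrack implementation you vendored):

```python
from dataclasses import dataclass

@dataclass
class ByteTrackArgs:
    """Hypothetical tracker-arguments container; field names are illustrative."""
    track_thresh: float = 0.5   # min detection score to start/keep a track
    track_buffer: int = 30      # frames a lost track is kept before deletion
    match_thresh: float = 0.8   # IoU threshold for box/track association

def anti_flicker(args: ByteTrackArgs) -> ByteTrackArgs:
    """Return a copy tuned against ID flicker: keep lost tracks around
    longer and accept slightly weaker detections."""
    return ByteTrackArgs(
        track_thresh=max(0.1, args.track_thresh - 0.1),
        track_buffer=args.track_buffer * 2,
        match_thresh=args.match_thresh,
    )
```

A longer `track_buffer` trades a small risk of ID switches after long occlusions for far fewer spurious new IDs.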
- Multi-camera fusion (merge IDs by appearance embedding)
- Predict future wait times using queue length + service rate
- Employee performance zone detection (add a `counter_area` class)
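The wait-time prediction idea above is essentially Little's law, W = L / λ: expected wait equals current queue length divided by the service rate. A minimal sketch (the service rate would come from recent `timers.jsonl` history; that wiring is left out here):

```python
def predicted_wait_seconds(queue_length: int, service_rate_per_min: float) -> float:
    """Estimate wait via Little's law: W = L / lambda.

    queue_length: people currently inside the queue ROI
    service_rate_per_min: customers served per minute (measured from history)
    """
    if service_rate_per_min <= 0:
        raise ValueError("service rate must be positive")
    # convert per-minute rate to per-second before dividing
    return queue_length / (service_rate_per_min / 60.0)
```

For example, 6 people in the ROI with 2 customers served per minute predicts a 3-minute wait.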
Happy building!
The ROI file produced by `draw_roi.py` uses:

```json
{
  "label": "queue_area",
  "polygon": [[x1, y1], [x2, y2], ...],
  "frame_size": [width, height]
}
```

Polygon points are in image pixel coordinates (origin top-left). Ensure at least 3 points.
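Deciding whether a tracked person is inside this polygon is a standard point-in-polygon test. A dependency-free ray-casting sketch (the project may instead use something like OpenCV's `pointPolygonTest`; this is just the underlying idea):

```python
def point_in_polygon(x: float, y: float, polygon: list) -> bool:
    """Ray-casting test: is pixel (x, y) inside the ROI polygon?

    polygon: [[x1, y1], [x2, y2], ...] in image pixel coordinates.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does a horizontal ray cast rightward from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside  # each crossing toggles inside/outside
    return inside
```

In practice you would test the bottom-center of each person's bounding box (their feet), not the box center, so that people leaning over the counter are not counted as queued.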
This section wires the pipeline together so you can run end-to-end quickly and verify that the project is correctly set up.
From the repository root:
```powershell
# 1) Verify structure
python scripts/verify_structure.py

# 2) (Optional) Convert CrowdHuman annotations to YOLO
python scripts/convert_crowdhuman_to_yolo.py --help

# 3) Train detector
python project/dataset/detection/train_yolo.py `
  --data data/custom.yaml `
  --model yolov8s.pt `
  --epochs 50 `
  --imgsz 640 `
  --batch 16 `
  --project runs/detect `
  --name crowdhuman_bakery `
  --device cpu

# 4) Evaluate detector
python project/src/detection/eval_yolo.py `
  --model project/models/detector.pt `
  --data data/custom.yaml `
  --device cpu `
  --save_json True `
  --out_dir project/outputs/eval

# 5) Batch tracking over all custom bakery videos
python project/src/tracking/offline_batch_process.py `
  --model project/models/detector.pt `
  --videos_dir project/dataset/tracking/custom_bakery_vids/videos `
  --roi project/src/tracking/roi.json `
  --device cpu `
  --conf 0.3 `
  --skip_frames 1

# 6) Launch dashboard
streamlit run project/src/dashboard/app.py
```

Adjust `--device` to `0` (or similar) to use a GPU.
- CPU: pass `--device cpu` to `train_yolo.py`, `eval_yolo.py`, `track_and_time.py`, and `offline_batch_process.py`.
- GPU: pass `--device 0` (or `1`, etc.) if you have CUDA; leave `--device ""` to let Ultralytics auto-select.
- For quick checks on low-end machines, combine `--device cpu` with:
  - A smaller model (e.g., `yolov8n.pt`)
  - Lower `--imgsz` (512 or 416)
  - `--skip_frames 1` in tracking scripts.
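If you want scripts to pick the device automatically instead of hard-coding the flag, a small helper along these lines works (it degrades gracefully when PyTorch is not installed):

```python
def pick_device() -> str:
    """Return "0" (first CUDA GPU) when available, else "cpu".

    Mirrors the --device string convention used by the scripts above.
    """
    try:
        import torch  # optional dependency; absent on minimal installs
        if torch.cuda.is_available():
            return "0"
    except ImportError:
        pass
    return "cpu"
```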
If you do not yet have real videos or labels, you can still verify the analytics stack:
```powershell
python project/src/tracking/synthetic_feed.py `
  --out_dir project/outputs/synthetic `
  --frames 200 `
  --width 640 `
  --height 360 `
  --num_people 5
```

This generates:

- `project/outputs/synthetic/analytics.csv`
- `project/outputs/synthetic/timers.jsonl`
- `project/outputs/synthetic/annotated.mp4`
You can point the dashboard to these files to inspect charts and the annotated synthetic video.
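For intuition, generating a plausible `analytics.csv` is only a few lines. The column names below are illustrative (the real `synthetic_feed.py` may use a different schema); the sketch writes a slow sinusoidal queue-length trend plus noise:

```python
import csv
import io
import math
import random

def synthetic_analytics_csv(frames: int = 200, fps: float = 30.0, seed: int = 0) -> str:
    """Return toy analytics rows (frame, timestamp_s, queue_len) as CSV text.

    Column names are illustrative, not the real synthetic_feed.py schema.
    """
    rng = random.Random(seed)  # seeded so output is reproducible
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["frame", "timestamp_s", "queue_len"])
    for f in range(frames):
        base = 3 + 2 * math.sin(f / 40.0)  # slow trend around 3 people
        queue_len = max(0, round(base + rng.uniform(-1, 1)))  # noisy, clamped at 0
        writer.writerow([f, round(f / fps, 3), queue_len])
    return buf.getvalue()
```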
To export your trained detector to ONNX:
```powershell
python scripts/export_to_onnx.py `
  --model project/models/detector.pt `
  --format onnx
```

The script verifies that the `.onnx` file exists and prints a success message. From there you can integrate with ONNX runtimes or other deployment targets.
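The existence check the script performs amounts to something like the following sketch. It assumes the exporter writes `<model>.onnx` beside the input weights, which is the Ultralytics exporter's default behavior:

```python
from pathlib import Path

def verify_export(model_path: str) -> Path:
    """Check that the ONNX file produced beside the .pt weights exists.

    Raises FileNotFoundError when the export did not produce a file.
    """
    onnx_path = Path(model_path).with_suffix(".onnx")
    if not onnx_path.is_file():
        raise FileNotFoundError(f"export failed: {onnx_path} not found")
    print(f"export OK: {onnx_path} ({onnx_path.stat().st_size} bytes)")
    return onnx_path
```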
Minimal tests are included to verify imports and structure:
```shell
pip install pytest
pytest
```

These tests:

- Import key project modules (`tests/test_imports.py`).
- Assert that core directories exist (`tests/test_structure.py`).
If these pass, the baseline project structure and module wiring are correct.
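A structure test of this kind can be reduced to one reusable helper; the directory list below is a partial, illustrative subset of the tree shown earlier, not the exact contents of `tests/test_structure.py`:

```python
from pathlib import Path

# illustrative subset of the expected project tree
EXPECTED_DIRS = [
    "project/models",
    "project/outputs",
    "project/src/tracking",
]

def missing_dirs(root: str, expected=EXPECTED_DIRS) -> list:
    """Return the expected directories that do not exist under root."""
    base = Path(root)
    return [d for d in expected if not (base / d).is_dir()]
```

A pytest wrapper then becomes a one-liner: `assert missing_dirs(".") == []`.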