ayamekni/ai_camera_challenge

Bakery Queue Analytics — End-to-End Project

An end-to-end system for:

  • Person detection (YOLOv8, YOLOv9, or a YOLOX alternative) trained or fine-tuned on CrowdHuman + custom retail/bakery frames
  • Multi-object tracking (ByteTrack) for persistent IDs
  • Queue timing via a polygon ROI
  • Real-time dashboard (Streamlit) with metrics & charts
  • Optional heatmap & congestion alerts

1. Environment Setup

python -m venv .venv
# Windows PowerShell
.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt

CPU-only (optional):

$env:CUDA_VISIBLE_DEVICES=""

2. Folder Tree (Expected)

project/
  dataset/
    detection/
      CrowdHuman/
        images/train  images/val
        labels/train  labels/val
      custom_bakery/
        images/train  images/val
        annotations/ (optional raw txt labels)
    tracking/
      MOT17/
      custom_bakery_vids/
        videos/
        frames/
  models/
  outputs/
  src/
    detection/
    tracking/
    analytics/
    dashboard/
data/
scripts/

3. Data Preparation

CrowdHuman

  1. Download from: https://www.crowdhuman.org/download.html
  2. Convert annotations to YOLO (use scripts/convert_crowdhuman_to_yolo.py).
  3. Place images in:
    • project/dataset/detection/CrowdHuman/images/train
    • project/dataset/detection/CrowdHuman/images/val
  4. Place labels in:
    • project/dataset/detection/CrowdHuman/labels/train
    • project/dataset/detection/CrowdHuman/labels/val
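The conversion script itself isn't reproduced here, but the core transform can be sketched as follows, assuming the standard CrowdHuman .odgt format (one JSON record per line, with full-body `fbox` boxes tagged "person"); field names other than the documented odgt keys are assumptions:

```python
import json

def odgt_line_to_yolo(line: str, img_w: int, img_h: int) -> list:
    """Convert one CrowdHuman .odgt record to YOLO label lines (class 0 = person).

    Boxes are clipped to the image and emitted as normalized
    `class cx cy w h`, the format Ultralytics YOLO expects.
    """
    rec = json.loads(line)
    out = []
    for box in rec.get("gtboxes", []):
        if box.get("tag") != "person":
            continue
        x, y, w, h = box["fbox"]  # full-body box: top-left x, y, width, height
        # Clip to image bounds before normalizing
        x0, y0 = max(x, 0), max(y, 0)
        x1, y1 = min(x + w, img_w), min(y + h, img_h)
        if x1 <= x0 or y1 <= y0:
            continue  # box entirely outside the image
        cx = (x0 + x1) / 2 / img_w
        cy = (y0 + y1) / 2 / img_h
        bw = (x1 - x0) / img_w
        bh = (y1 - y0) / img_h
        out.append(f"0 {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return out
```

Each image's label lines are then written to a .txt file of the same stem under labels/train or labels/val.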

Custom Bakery / Retail Substitute

  • Put raw queue/counter videos in project/dataset/tracking/custom_bakery_vids/videos
  • Extract frames:
    python scripts/extract_frames.py --videos_dir project/dataset/tracking/custom_bakery_vids/videos \
      --out_dir project/dataset/tracking/custom_bakery_vids/frames --every_n 12
  • Annotate a subset (person only) and place labeled images + .txt files into:
    • project/dataset/detection/custom_bakery/images/train
    • project/dataset/detection/custom_bakery/images/val
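The frame-extraction step above amounts to the following OpenCV loop (a sketch, not the actual scripts/extract_frames.py; the output filename pattern is illustrative):

```python
from pathlib import Path

def keep_frame(index: int, every_n: int) -> bool:
    """Keep frames 0, every_n, 2*every_n, ... (matches --every_n 12 above)."""
    return index % every_n == 0

def extract_frames(video_path: str, out_dir: str, every_n: int = 12) -> int:
    """Dump every Nth frame of a video as JPEGs; returns the number saved."""
    import cv2  # lazy import so the pure helper above works without OpenCV
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of stream
        if keep_frame(idx, every_n):
            cv2.imwrite(str(out / f"{Path(video_path).stem}_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```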

4. Data YAML

Edit data/custom.yaml so paths match actual folders. Single class: person.
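A plausible shape for data/custom.yaml, assuming the folder tree in section 2 (paths are illustrative — match them to your actual layout):

```yaml
# data/custom.yaml — single-class person detection
path: project/dataset/detection        # dataset root
train:
  - CrowdHuman/images/train
  - custom_bakery/images/train
val:
  - CrowdHuman/images/val
  - custom_bakery/images/val
nc: 1
names: ["person"]
```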

5. Train YOLO

python project/dataset/detection/train_yolo.py \
  --data data/custom.yaml \
  --model yolov8s.pt \
  --epochs 50 \
  --imgsz 640 \
  --batch 16 \
  --project runs/detect \
  --name crowdhuman_bakery \
  --device cpu

Copy result:

copy runs\detect\crowdhuman_bakery\weights\best.pt project\models\detector.pt

6. Define ROI

python project/src/tracking/draw_roi.py --video project/dataset/tracking/custom_bakery_vids/videos/your_video.mp4 --out project/src/tracking/roi.json

7. Tracking & Timing

python project/src/tracking/track_and_time.py ^
  --model project/models/detector.pt ^
  --source project/dataset/tracking/custom_bakery_vids/videos/your_video.mp4 ^
  --roi project/src/tracking/roi.json ^
  --out_dir project/outputs/session1 ^
  --conf 0.3 ^
  --device cpu ^
  --heatmap False

Outputs: analytics.csv, timers.jsonl, annotated.mp4.
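The exact timers.jsonl schema depends on track_and_time.py; assuming one JSON object per completed track with hypothetical `enter_ts`/`exit_ts` fields in seconds, waiting-time stats can be derived like this:

```python
import json
from statistics import mean

def wait_times(jsonl_text: str) -> dict:
    """Summarize per-track waits from timers.jsonl content.

    Assumes (hypothetically) one JSON object per line with `enter_ts`
    and `exit_ts` in seconds; adjust the keys to the real schema.
    """
    waits = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("exit_ts") is not None:  # skip tracks still in the queue
            waits.append(rec["exit_ts"] - rec["enter_ts"])
    if not waits:
        return {"count": 0, "avg_wait_s": 0.0, "max_wait_s": 0.0}
    return {"count": len(waits), "avg_wait_s": mean(waits), "max_wait_s": max(waits)}
```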

8. Dashboard

streamlit run project/src/dashboard/app.py

Enter the model path, video source, and ROI file, then click Start.

9. Aggregate Analytics

python project/src/analytics/aggregate_metrics.py ^
  --timers project/outputs/session1/timers.jsonl ^
  --analytics_csv project/outputs/session1/analytics.csv ^
  --out project/outputs/session1/summary.csv

10. Performance Tips (CPU)

  • Use yolov8n.pt for faster inference.
  • Reduce --imgsz to 512 or 416.
  • Use --skip_frames 1 to process every other frame.
  • Consider exporting to ONNX/OpenVINO for further speed:
    yolo export model=project/models/detector.pt format=onnx
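Frame skipping trades temporal resolution for throughput; the arithmetic behind the tip is simply:

```python
def effective_fps(processing_fps: float, skip_frames: int) -> float:
    """Approximate video-time throughput when skipping frames.

    If the model sustains `processing_fps` inferences per second and the
    pipeline skips `skip_frames` frames between each processed one, the
    video advances (skip_frames + 1) frames per inference.
    """
    return processing_fps * (skip_frames + 1)
```

So a CPU that manages 8 FPS of inference keeps up with a 16 FPS video at --skip_frames 1.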

11. Pitch / Demo Checklist

Capture:

  • FPS & latency
  • Queue length trend chart
  • Average & max waiting times
  • Annotated video sample
  • Business value slide (staff allocation, reduced wait)

12. Troubleshooting

  • Low FPS: use a smaller model (yolov8n) and/or a smaller image size.
  • Missed detections: lower --conf to 0.25 or fine-tune for more epochs.
  • ID flicker: increase the tracker buffer or lower track_thresh in the ByteTrack args.
  • No labels found: verify that each .txt filename matches its image filename.

13. Optional Enhancement Ideas

  • Multi-camera fusion (merge IDs by appearance embedding)
  • Predict future wait times using queue length + service rate
  • Employee performance zone detection (add counter_area class)
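For the wait-time prediction idea, a Little's-law style estimate (queue length divided by service rate) is a minimal starting point; the function name and units here are assumptions:

```python
def predicted_wait_s(queue_length: int, service_rate_per_min: float) -> float:
    """Estimate expected wait in seconds as W = L / mu (Little's law).

    queue_length: people currently inside the ROI.
    service_rate_per_min: customers served per minute (derivable from
    analytics.csv exit events). Returns inf for a stalled queue.
    """
    if service_rate_per_min <= 0:
        return float("inf")
    return queue_length / service_rate_per_min * 60.0
```

With 5 people in line and 2.5 customers served per minute, the estimate is a 2-minute wait.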

Happy building!

14. ROI JSON Schema (Reference)

The ROI file produced by draw_roi.py uses:

{
  "label": "queue_area",
  "polygon": [[x1, y1], [x2, y2], ...],
  "frame_size": [width, height]
}

Polygon points are in image pixel coordinates (origin top-left). Ensure at least 3 points.
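The tracking code presumably tests detection centroids against this polygon (e.g., via OpenCV's pointPolygonTest); a dependency-free ray-casting equivalent illustrates the geometry:

```python
def point_in_polygon(x: float, y: float, polygon: list) -> bool:
    """Ray-casting test: is pixel (x, y) inside the ROI polygon?

    `polygon` is the `polygon` list from roi.json: [[x1, y1], [x2, y2], ...].
    Casts a ray to the right and counts edge crossings; an odd count
    means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```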

15. Automation & Verification

This section wires the pipeline together so you can run end-to-end quickly and verify that the project is correctly set up.

15.1 PowerShell One-Liner (Full Flow)

From the repository root:

# 1) Verify structure
python scripts/verify_structure.py;

# 2) (Optional) Convert CrowdHuman annotations to YOLO
python scripts/convert_crowdhuman_to_yolo.py --help;

# 3) Train detector
python project/dataset/detection/train_yolo.py `
  --data data/custom.yaml `
  --model yolov8s.pt `
  --epochs 50 `
  --imgsz 640 `
  --batch 16 `
  --project runs/detect `
  --name crowdhuman_bakery `
  --device cpu;

# 4) Evaluate detector
python project/src/detection/eval_yolo.py `
  --model project/models/detector.pt `
  --data data/custom.yaml `
  --device cpu `
  --save_json True `
  --out_dir project/outputs/eval;

# 5) Batch tracking over all custom bakery videos
python project/src/tracking/offline_batch_process.py `
  --model project/models/detector.pt `
  --videos_dir project/dataset/tracking/custom_bakery_vids/videos `
  --roi project/src/tracking/roi.json `
  --device cpu `
  --conf 0.3 `
  --skip_frames 1;

# 6) Launch dashboard
streamlit run project/src/dashboard/app.py

Adjust --device to 0 (or similar) to use GPU.

15.2 CPU vs GPU Selection

  • CPU: pass --device cpu to train_yolo.py, eval_yolo.py, track_and_time.py, and offline_batch_process.py.
  • GPU: pass --device 0 (or 1, etc.) if you have CUDA; leave --device "" to let Ultralytics auto-select.
  • For quick checks on low-end machines, combine --device cpu with:
    • A smaller model (e.g., yolov8n.pt)
    • Lower --imgsz (512 or 416)
    • --skip_frames 1 in tracking scripts.

15.3 Synthetic Smoke Test (No Real Data)

If you do not yet have real videos or labels, you can still verify the analytics stack:

python project/src/tracking/synthetic_feed.py `
  --out_dir project/outputs/synthetic `
  --frames 200 `
  --width 640 `
  --height 360 `
  --num_people 5;

This generates:

  • project/outputs/synthetic/analytics.csv
  • project/outputs/synthetic/timers.jsonl
  • project/outputs/synthetic/annotated.mp4

You can point the dashboard to these files to inspect charts and the annotated synthetic video.

15.4 ONNX Export Helper

To export your trained detector to ONNX:

python scripts/export_to_onnx.py `
  --model project/models/detector.pt `
  --format onnx

The script verifies that the .onnx file exists and prints a success message. From there you can integrate with ONNX runtimes or other deployment targets.
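The verification step can be as small as checking for the sibling .onnx file; a sketch, assuming Ultralytics' convention of writing the export next to the weights (helper names are hypothetical):

```python
from pathlib import Path

def expected_onnx_path(model_path: str) -> Path:
    """Ultralytics typically writes the export beside the weights:
    .../detector.pt -> .../detector.onnx."""
    return Path(model_path).with_suffix(".onnx")

def verify_export(model_path: str) -> bool:
    """Return True iff the .onnx sibling of the given .pt file exists."""
    return expected_onnx_path(model_path).exists()
```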

15.5 Pytest Sanity Checks

Minimal tests are included to verify imports and structure:

pip install pytest
pytest

These tests:

  • Import key project modules (tests/test_imports.py).
  • Assert that core directories exist (tests/test_structure.py).

If these pass, the baseline project structure and module wiring are correct.
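tests/test_structure.py isn't reproduced here; a helper in this spirit (the directory list is illustrative, taken from the tree in section 2) could look like:

```python
from pathlib import Path

# Directories the pipeline expects, relative to the repository root
REQUIRED_DIRS = [
    "project/models",
    "project/outputs",
    "project/src/detection",
    "project/src/tracking",
    "project/src/analytics",
    "project/src/dashboard",
    "data",
    "scripts",
]

def missing_dirs(root: str, required=REQUIRED_DIRS) -> list:
    """Return the subset of required directories absent under root."""
    base = Path(root)
    return [d for d in required if not (base / d).is_dir()]
```

A pytest test then simply asserts `missing_dirs(".") == []`.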

About

End-to-end queue analytics system for bakeries using computer vision. YOLO-based person detection, ByteTrack multi-object tracking, polygon-based queue timing, and a real-time Streamlit dashboard with metrics, heatmaps, and congestion alerts.
