Catch PyTorch training slowdowns early, while the job is still running.
Quickstart • Compare Runs • How to Read Output • FAQ • Use with W&B / MLflow • Issues
TraceML is an open-source tool for catching PyTorch training slowdowns early, so bad runs do not quietly waste costly compute.
It gives you lightweight step-level signals while the job is still running, so you can quickly tell whether the slowdown looks input-bound, compute-bound, wait-heavy, imbalanced across ranks, or memory-related.
Use TraceML when you want a fast answer before reaching for a heavyweight profiler.
⭐ If TraceML helps you, please consider starring the repo.
Upcoming rename: TraceML will transition to TraceOpt in a future release. For now, the active package remains `traceml-ai` and Python imports remain `traceml`. The future PyPI package name `traceopt-ai` is now in place as we prepare the migration.
Install:

```bash
pip install traceml-ai
```

Initialize TraceML and wrap your training step:

```python
import traceml

traceml.init()

for batch in dataloader:
    with traceml.trace_step(model):
        optimizer.zero_grad(set_to_none=True)
        outputs = model(batch["x"])
        loss = criterion(outputs, batch["y"])
        loss.backward()
        optimizer.step()
```

Run:

```bash
traceml run train.py
```

During training, TraceML opens a live terminal view alongside your logs.
At the end of the run, it prints a compact summary you can review or share.
Start with `traceml run train.py`. Most users do not need `watch` or `deep` first.
For custom training loops, manual and selective instrumentation are available in the Quickstart.
Use the default workflow when you want live step-aware diagnosis during training plus the end-of-run summary.
```bash
traceml run train.py
```

Use summary mode when you mainly want the structured final summary for logging into W&B or MLflow:

```bash
traceml run train.py --mode=summary
```

Then call `traceml.final_summary()` near the end of your script.
TraceML also writes canonical summary artifacts for the run, including final_summary.json, which is the intended machine-readable output for downstream logging and later run comparison.
If you have final_summary.json from two runs, compare them directly:
```bash
traceml compare run_a.json run_b.json
```

TraceML writes both a structured compare JSON and a compact text report.
See docs/user_guide/compare.md.
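If you want to post-process the compare output yourself, a plain JSON diff is enough to start with. The sketch below uses only the standard library; the per-phase field names are hypothetical placeholders, not TraceML's actual summary schema.

```python
import json

# Hypothetical per-phase mean step times (seconds) from two runs.
# The "phases" structure here is illustrative, not TraceML's real schema.
run_a = {"phases": {"dataloader": 0.120, "forward": 0.050,
                    "backward": 0.080, "optimizer": 0.015}}
run_b = {"phases": {"dataloader": 0.030, "forward": 0.052,
                    "backward": 0.081, "optimizer": 0.014}}

def compare_phases(a, b):
    """Return per-phase absolute delta (b - a) and relative change."""
    report = {}
    for phase, t_a in a["phases"].items():
        t_b = b["phases"][phase]
        report[phase] = {
            "delta_s": round(t_b - t_a, 4),
            "rel_change": round((t_b - t_a) / t_a, 3),
        }
    return report

print(json.dumps(compare_phases(run_a, run_b), indent=2))
```

With real runs you would `json.load` two `final_summary.json` files instead of the inline dicts; the diff logic stays the same.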
TraceML helps answer questions like:
- Is the run input-bound, compute-bound, wait-heavy, or memory-constrained?
- Are some distributed ranks slower than others?
- Is memory usage drifting upward over time?
- Where is time showing up across dataloader, forward, backward, and optimizer phases?
It is designed to help you decide quickly whether a run looks healthy or whether it is worth digging deeper.
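To make the "input-bound vs compute-bound" question concrete, here is a toy heuristic over per-phase timings. This is an illustration of the kind of reasoning involved, not TraceML's actual classification logic, and the thresholds and numbers are invented.

```python
# Toy heuristic (not TraceML's internal logic): classify one training
# step from hypothetical per-phase timings, all in seconds.
def classify_step(dataloader, forward, backward, optimizer):
    total = dataloader + forward + backward + optimizer
    if dataloader / total > 0.5:
        return "input-bound"    # step dominated by waiting on data
    if (forward + backward) / total > 0.7:
        return "compute-bound"  # step dominated by model math
    return "mixed"

# Data loading dominates this step, so it reads as input-bound.
print(classify_step(0.300, 0.050, 0.080, 0.015))  # -> input-bound
```

A healthy GPU-bound step usually spends most of its time in forward and backward; a large dataloader share is the classic sign that the input pipeline, not the model, is the bottleneck.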
TraceML adds fixed per-step instrumentation overhead, so the relative cost is highest when training steps are very short. In larger or distributed workloads, that fixed cost is amortized over a longer end-to-end step. In our early DDP benchmarks, TraceML did not produce a measurable slowdown beyond normal run-to-run variation.
Use TraceML when training feels:
- slower than expected
- unstable from step to step
- imbalanced across distributed ranks
- fine in dashboards but still underperforming
Start with TraceML when you need a fast answer in the terminal.
Reach for torch.profiler once you know where to dig deeper.
TraceML is designed to work alongside tools like W&B, MLflow, and TensorBoard, not replace them.
Use experiment trackers for dashboards, artifacts, and team reporting. Use TraceML for live bottleneck diagnosis, structured final summaries, and simple run-to-run comparison from saved TraceML summary JSON files.
See Use TraceML with W&B / MLflow.
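One simple way to feed a saved summary into a tracker is to flatten the nested JSON into scalar metrics. The sketch below is tracker-agnostic and uses a hypothetical summary shape (the key names are placeholders, not TraceML's real schema); the resulting flat dict is the form that metric-logging APIs such as `mlflow.log_metrics` or `wandb.log` expect.

```python
import json

def flatten_metrics(obj, prefix=""):
    """Flatten nested dicts into {"a.b.c": value} pairs, keeping only
    numeric leaves so the result suits metric-logging APIs."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_metrics(value, prefix=name + "."))
        elif isinstance(value, (int, float)) and not isinstance(value, bool):
            flat[name] = value
    return flat

# Hypothetical final_summary.json content (illustrative schema only).
summary = json.loads('{"steps": 1000, "phases": {"dataloader": {"mean_s": 0.12}}}')
print(flatten_metrics(summary))
```

In a real script you would read the file TraceML wrote (`json.load(open(path))`) rather than an inline string, then hand the flattened dict to your tracker of choice.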
Works today:
- single GPU
- single-node DDP/FSDP
Next:
- multi-node training support
- Quickstart
- Compare Runs
- Examples
- How to Read TraceML Output
- FAQ
- Use TraceML with W&B / MLflow
- Hugging Face integration
- PyTorch Lightning integration
Need a lighter zero-code first look or a deeper follow-up run? See the Quickstart and FAQ for `watch` and `deep`.
If TraceML helped you catch a slowdown, please open an issue and include:
- hardware / CUDA / PyTorch versions
- single GPU or multi-GPU
- whether you used `run`, `watch`, or `deep`
- the end-of-run summary
- a minimal repro if possible
GitHub issues: https://github.com/traceopt-ai/traceml/issues
Email: support@traceopt.ai
Contributions are welcome, especially:
- reproducible slowdown cases
- bug reports
- docs improvements
- integrations
- examples
Apache 2.0. See LICENSE.
TraceOpt is a trademark of OptAI UG (haftungsbeschränkt).

