Fix weights/opt memory estimation #4687
Conversation
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process. See the contribution guide for more details.
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
/claude review |
LGTM — the decomposition into TP-sharded, replicated, and routed-expert parameter categories is correct, and using separate data-parallel sizes for the distributed optimizer byte calculation is the right fix.
One suggestion: compute_weight_and_optimizer_memory has no unit tests today, and this PR rewrites most of its arithmetic. A parameterized unit test that constructs a mock args namespace and asserts known outputs for a few configurations (dense-only, MoE with ETP/EP > 1, distributed optimizer on/off, untied embeddings with PP=1) would protect against future regressions and make reviewing easier. The PR checklist also has the test boxes unchecked — worth adding before merging.
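For illustration, here is a pytest-style sketch of that suggestion. It is written against the hypothetical `estimate_per_rank_bytes` stand-in sketched under the PR description below, not the real `compute_weight_and_optimizer_memory` signature, and it asserts monotonic relationships rather than hard-coded byte counts; the field names in the mock args namespace are likewise illustrative:

```python
import pytest
from types import SimpleNamespace

# `estimate_per_rank_bytes` is the hypothetical stand-in sketched under the
# PR description below; it is not the real Megatron function.

def make_args(**overrides):
    # Mock args namespace with a tiny default config; only the fields the
    # stand-in estimator reads. Override per test case.
    defaults = dict(
        num_tp_sharded_params=1_000_000,
        num_replicated_params=10_000,
        num_routed_expert_params=4_000_000,
        tensor_model_parallel_size=1,
        expert_tensor_parallel_size=1,
        expert_model_parallel_size=1,
        data_parallel_size=2,
        expert_data_parallel_size=2,
        use_distributed_optimizer=False,
    )
    defaults.update(overrides)
    return SimpleNamespace(**defaults)

@pytest.mark.parametrize("ep", [2, 4, 8])
def test_expert_parallelism_shards_routed_experts(ep):
    # Raising EP must shrink the per-rank share of routed expert weights.
    base = estimate_per_rank_bytes(make_args())
    sharded = estimate_per_rank_bytes(make_args(expert_model_parallel_size=ep))
    assert sharded < base

def test_distributed_optimizer_shards_over_expert_dp():
    # A larger expert-DP group must reduce per-rank optimizer memory for
    # routed experts when the distributed optimizer is enabled.
    small_edp = estimate_per_rank_bytes(
        make_args(use_distributed_optimizer=True, expert_data_parallel_size=2))
    large_edp = estimate_per_rank_bytes(
        make_args(use_distributed_optimizer=True, expert_data_parallel_size=8))
    assert large_edp < small_edp
```

Real tests would pin exact expected byte counts for a few known configurations, as the comment above suggests; the property-style assertions here just show the parameterized structure.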
/ok to test 60d595e |
What does this PR do?
Fixes the weight and optimizer memory estimation. Previously, MoE routed expert parameters were counted together with regular transformer parameters, so per-rank weight and optimizer memory did not account for expert tensor parallelism (ETP), expert parallelism (EP), or expert data parallelism (EDP).
This change separates regular TP-sharded parameters, replicated non-expert parameters, and routed expert parameters. Routed experts now use ETP/EP for parameter ownership and EDP for distributed optimizer sharding, while shared experts remain on regular TP.
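For concreteness, a minimal sketch of the corrected accounting. It assumes bf16 weights (2 bytes/param) and roughly 12 bytes/param of Adam optimizer state (fp32 main params plus two moments), and uses hypothetical field and parameter-count names; the real compute_weight_and_optimizer_memory also handles pipeline stages, embeddings, and per-layer arithmetic elided here:

```python
from types import SimpleNamespace

def estimate_per_rank_bytes(args):
    """Illustrative sketch: per-rank weight + optimizer memory for a MoE model.

    Assumes bf16 weights (2 B/param) and ~12 B/param of Adam optimizer state
    (fp32 main params plus two moments), sharded by the distributed optimizer
    across the matching data-parallel group. All field names are hypothetical.
    """
    tp = args.tensor_model_parallel_size
    etp = args.expert_tensor_parallel_size
    ep = args.expert_model_parallel_size

    # Per-rank parameter ownership by category:
    #   - regular TP-sharded params (attention, dense MLP, shared experts) / TP
    #   - replicated non-expert params (e.g. layernorms) owned on every rank
    #   - routed experts sharded over ETP x EP instead of regular TP
    owned_regular = (args.num_tp_sharded_params / tp
                     + args.num_replicated_params)
    owned_experts = args.num_routed_expert_params / (etp * ep)

    weight_bytes = 2 * (owned_regular + owned_experts)

    if args.use_distributed_optimizer:
        # Optimizer state shards over regular DP for non-expert params and
        # over expert DP for routed experts -- the separation this PR adds.
        opt_bytes = 12 * (owned_regular / args.data_parallel_size
                          + owned_experts / args.expert_data_parallel_size)
    else:
        opt_bytes = 12 * (owned_regular + owned_experts)

    return weight_bytes + opt_bytes

# Example config: TP=2, EP=4, DP=8, expert-DP=4, distributed optimizer on.
args = SimpleNamespace(
    num_tp_sharded_params=1_000_000, num_replicated_params=10_000,
    num_routed_expert_params=4_000_000, tensor_model_parallel_size=2,
    expert_tensor_parallel_size=1, expert_model_parallel_size=4,
    data_parallel_size=8, expert_data_parallel_size=4,
    use_distributed_optimizer=True)
print(f"{estimate_per_rank_bytes(args) / 2**20:.1f} MiB per rank")
```

The point of the sketch is the decomposition: routed experts divide by ETP x EP for ownership and by the expert data-parallel size for distributed-optimizer sharding, while non-expert parameters use regular TP and DP.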
Issue tracking
Fixes #4050
Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Expert reviewers are assigned automatically based on .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.
Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.
Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.