
Fix weights/opt memory estimation #4687

Open
YangFei1990 wants to merge 6 commits into NVIDIA:main from YangFei1990:fix_memory_util

Conversation

@YangFei1990
Contributor

What does this PR do?

Fix the weight and optimizer memory estimation. Previously, the computation counted MoE routed expert parameters together with regular transformer parameters, so per-rank weight and optimizer memory did not account for expert tensor parallelism (ETP), expert parallelism (EP), or expert data parallelism (EDP).

This change separates regular TP-sharded parameters, replicated non-expert parameters, and routed expert parameters. Routed experts now use ETP/EP for parameter ownership and EDP for distributed optimizer sharding, while shared experts remain on regular TP.
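
For reference, a minimal sketch of the corrected per-rank arithmetic, assuming the estimator's usual convention of 18 bytes per parameter for bf16 training with Adam, or 6 + 12/dp with the distributed optimizer. The names below are illustrative only, not the actual code in `compute_weight_and_optimizer_memory`:

```python
# Hedged sketch of the accounting described above; illustrative names,
# not the actual Megatron-LM implementation.

def approx_weight_and_optimizer_bytes(
    tp_sharded_params: int,     # dense params sharded over tensor parallelism
    replicated_params: int,     # non-expert params replicated on every TP rank
    routed_expert_params: int,  # routed-expert params, sharded over ETP x EP
    tp: int, etp: int, ep: int, dp: int, edp: int,
    use_distributed_optimizer: bool,
) -> float:
    # Parameters a single rank owns; shared experts stay on regular TP,
    # so they are counted inside tp_sharded_params.
    dense_on_rank = tp_sharded_params / tp + replicated_params
    experts_on_rank = routed_expert_params / (etp * ep)

    # 18 bytes/param (bf16 weight + fp32 grad + fp32 master + Adam moments)
    # without the distributed optimizer. With it, the 12 bytes of fp32
    # optimizer state are sharded over the data-parallel group, which for
    # routed experts is the *expert* data-parallel group (EDP).
    if use_distributed_optimizer:
        bytes_dense, bytes_expert = 6 + 12 / dp, 6 + 12 / edp
    else:
        bytes_dense = bytes_expert = 18
    return dense_on_rank * bytes_dense + experts_on_rank * bytes_expert
```

The per-parameter byte counts are unchanged by this PR; the fix is in which parallel sizes divide each parameter group.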

Issue tracking

Fixes #4050

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge conflicts are resolved and CI is passing.
Final Review may be declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@YangFei1990 YangFei1990 requested review from a team as code owners May 7, 2026 23:46
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft May 7, 2026 23:47
@github-actions
Contributor

github-actions Bot commented May 7, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot

copy-pr-bot Bot commented May 7, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@YangFei1990 YangFei1990 marked this pull request as ready for review May 7, 2026 23:48
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team May 7, 2026 23:48
@YangFei1990
Contributor Author

/claude review


@claude claude Bot left a comment


LGTM — the decomposition into TP-sharded, replicated, and routed-expert parameter categories is correct, and using separate data-parallel sizes for the distributed optimizer byte calculation is the right fix.

One suggestion: compute_weight_and_optimizer_memory has no unit tests today, and this PR rewrites most of its arithmetic. A parameterized unit test that constructs a mock args namespace and asserts known outputs for a few configurations (dense-only, MoE with ETP/EP > 1, distributed optimizer on/off, untied embeddings with PP=1) would protect against future regressions and make reviewing easier. The PR checklist also has the test boxes unchecked — worth adding before merging.
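
A hedged sketch of what such a parameterized test could look like. The import path, args field names, and function signature here are assumptions to verify against the real code, and the golden values are placeholders:

```python
# Hedged test sketch: import path, args fields, and the signature of
# compute_weight_and_optimizer_memory are assumed, not verified.
from types import SimpleNamespace

import pytest

from megatron.training.theoretical_memory_usage import (
    compute_weight_and_optimizer_memory,  # assumed import path
)


def make_args(**overrides):
    """Minimal mock args namespace with only the fields the estimator
    reads (field names assumed; align with the real args object)."""
    defaults = dict(
        num_layers=4,
        hidden_size=1024,
        ffn_hidden_size=4096,
        num_attention_heads=16,
        padded_vocab_size=32000,
        tensor_model_parallel_size=1,
        pipeline_model_parallel_size=1,
        data_parallel_size=8,
        num_experts=None,
        expert_model_parallel_size=1,
        expert_tensor_parallel_size=1,
        use_distributed_optimizer=False,
        untie_embeddings_and_output_weights=False,
    )
    defaults.update(overrides)
    return SimpleNamespace(**defaults)


@pytest.mark.parametrize(
    "overrides",
    [
        {},                                                    # dense baseline
        {"num_experts": 8, "expert_model_parallel_size": 4},   # MoE, EP > 1
        {"num_experts": 8, "expert_tensor_parallel_size": 2},  # MoE, ETP > 1
        {"use_distributed_optimizer": True},                   # sharded state
        {"untie_embeddings_and_output_weights": True},         # untied, PP=1
    ],
)
def test_weight_and_optimizer_memory(overrides):
    args = make_args(**overrides)
    estimate = compute_weight_and_optimizer_memory(args, verbose=False)
    assert estimate > 0
    # Replace with hand-computed golden values per configuration.
```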

@YangFei1990
Contributor Author

/ok to test 60d595e



Development

Successfully merging this pull request may close these issues.

[QUESTION] MoE layer theoretical memory calculation needs to account for ETP/EDP.
