
Fix issue where parameter groups with different min/max LRs get overridden at checkpoint load time #4705

Open

jstjohn wants to merge 3 commits into NVIDIA:main from jstjohn:jstjohn/fix_dist_op_pg_lr_override

Conversation

jstjohn (Contributor) commented May 8, 2026

What does this PR do?

Adds the missing parameter-scheduler keys (such as min/max lr) to the set of keys the distributed optimizer uses to match parameter groups at load time, so that groups with different settings no longer have their values overwritten by other groups.
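As a rough sketch of the shape of the fix (the key-tuple name comes from the discussion below; the enclosing function and exact file location are assumptions, not the actual diff):

# Hypothetical sketch: extend the hard-coded identifier keys so that
# per-group scheduler settings participate in param-group matching at
# distributed-optimizer load time.
param_group_identifier_keys = (
    'wd_mult',
    'lr_mult',
    'is_expert_parallel',
    'is_decoupled_lr',
    # Added: without these, two groups that differ only in their
    # min/max LRs collapse to the same identifier and overwrite each other.
    'min_lr',
    'max_lr',
)

def param_group_key(param_group):
    # Hashable identifier used to match a checkpointed param group
    # to an initialized one.
    return tuple(param_group.get(key) for key in param_group_identifier_keys)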

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

…ving the same key at dist op load time which leads to different LRs on checkpoint resumption

Signed-off-by: John St John <jstjohn@nvidia.com>
@jstjohn jstjohn requested review from a team as code owners May 8, 2026 19:26
@jstjohn jstjohn requested a review from cspades May 8, 2026 19:26
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft May 8, 2026 19:26
github-actions (Bot) commented May 8, 2026

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

copy-pr-bot (Bot) commented May 8, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@jstjohn jstjohn marked this pull request as ready for review May 8, 2026 19:27
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team May 8, 2026 19:27
jstjohn added 2 commits May 8, 2026 12:34
Signed-off-by: John St John <jstjohn@nvidia.com>
Signed-off-by: John St John <jstjohn@nvidia.com>
cspades (Member) commented May 9, 2026

To summarize, I believe the main problem this PR solves is that we use a hard-coded set of optimizer state keys to find commonalities between parameter group states, even though those states can include entries outside the hard-coded set.

This means, if you have a checkpoint that saves more than the hard-coded set:

# Checkpoint Param Group State Values
(a, b, c, d, e, f, g, ...)

they end up being hashed into a look-up table, overwriting each other's values:

# Post-Hash
# param_group_identifier_keys = ('wd_mult', 'lr_mult', 'is_expert_parallel', 'is_decoupled_lr')
(a, b, c, d) 

when we need to consider ALL states, not just the previous 4:

# Current MLM Optimizer State
{'is_expert_parallel': False, 'default_config': True, 'wd_mult': 1.0, 'lr_mult': 1.0, 'is_decoupled_lr': False, 'max_lr': 0.00015, 'min_lr': 1e-05, 'lr': 0.0, 'bias_correction': True, 'betas': (0.9, 0.95), 'eps': 1e-08, 'weight_decay': 0.1}
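To make the collision concrete, a toy illustration (the group values below are invented for the example, not taken from the PR):

# Keying only on the original four fields makes two groups with
# different min/max LRs indistinguishable.
identifier_keys = ('wd_mult', 'lr_mult', 'is_expert_parallel', 'is_decoupled_lr')

group_a = {'wd_mult': 1.0, 'lr_mult': 1.0, 'is_expert_parallel': False,
           'is_decoupled_lr': False, 'min_lr': 1e-05, 'max_lr': 0.00015}
group_b = {'wd_mult': 1.0, 'lr_mult': 1.0, 'is_expert_parallel': False,
           'is_decoupled_lr': False, 'min_lr': 1e-06, 'max_lr': 0.0003}

key_a = tuple(group_a[k] for k in identifier_keys)
key_b = tuple(group_b[k] for k in identifier_keys)
assert key_a == key_b  # collision: whichever group is processed last wins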

Wouldn't having a union of all state values (as the common key) in both the initialized optimizer and loaded checkpoint be more comprehensive?

# Checkpoint State
(a, b, c, d, None, e, None, None)
# Optimizer State
(a, b, c, d, e, None, f, g)
# Intersection Size: 4

where we find the checkpoint state that matches the most entries of any particular parameter group and choose that state as the one to load into that parameter group of the optimizer?
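A rough sketch of that matching strategy (the function name and signature are invented for illustration, not proposed code):

# For each initialized param group, pick the checkpointed group whose
# state agrees with it on the largest number of shared keys.
def match_param_groups(ckpt_groups, opt_groups):
    matched = []
    for opt_pg in opt_groups:
        def overlap(ckpt_pg):
            shared = set(ckpt_pg) & set(opt_pg)
            return sum(ckpt_pg[k] == opt_pg[k] for k in shared)
        matched.append(max(ckpt_groups, key=overlap))
    return matched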

@yuzhongw-nvidia @deepakn94 This code was previously touched by a rather old ADLR PR; am I completely off the mark here, or what?
