Fix issue where parameter groups with different min/max LRs get overridden at checkpoint load time #4705

jstjohn wants to merge 3 commits into NVIDIA:main
Conversation
…ving the same key at dist op load time, which leads to different LRs on checkpoint resumption

Signed-off-by: John St John <jstjohn@nvidia.com>
This PR has been automatically converted to draft because all PRs must start as drafts. When you are ready for review, click Ready for Review to begin the review process.

See the contribution guide for more details.
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually. Contributors can view more details about this message here.
To summarize, I believe the main problem this PR solves is that a hard-coded set of optimizer-state keys is used to find commonality between parameter-group states that contain entries outside that set. If a checkpoint saves more keys than the hard-coded set, groups that differ only in the extra keys get hashed to the same entry in a look-up table and overwrite each other's values, when we need to consider ALL state keys, not just the previous four. Wouldn't taking the union of all state keys (as the common key) in both the initialized optimizer and the loaded checkpoint be more comprehensive? We would then find the checkpoint state that matches the most entries in any particular parameter group and choose that state as the one to load into that parameter group of the optimizer. @yuzhongw-nvidia @deepakn94 This code was previously touched by a rather old ADLR PR; am I completely off the mark here, or what?
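A minimal sketch of the union-of-keys matching idea described above (the function and variable names here are hypothetical, not the actual Megatron-LM internals; the real logic lives in the distributed-optimizer checkpoint load path):

```python
# Hypothetical sketch: match checkpoint param groups to optimizer param
# groups by ALL shared hyperparameter keys, not a hard-coded four.

def group_key(group, keys):
    # Build a hashable key from every requested hyperparameter in the group.
    return tuple((k, group.get(k)) for k in sorted(keys))

def match_param_groups(optimizer_groups, checkpoint_groups):
    # Union of every hyperparameter key seen on either side (ignore 'params',
    # which holds the parameters themselves rather than scheduler settings).
    all_keys = set()
    for g in optimizer_groups + checkpoint_groups:
        all_keys |= {k for k in g if k != "params"}

    # Index checkpoint groups by their full key so that groups differing only
    # in e.g. min_lr / max_lr no longer collide and overwrite each other.
    ckpt_by_key = {group_key(g, all_keys): g for g in checkpoint_groups}

    matches = []
    for g in optimizer_groups:
        key = group_key(g, all_keys)
        if key in ckpt_by_key:
            matches.append((g, ckpt_by_key[key]))
        else:
            # Fall back to the checkpoint group agreeing on the most keys.
            best = max(
                checkpoint_groups,
                key=lambda c: sum(g.get(k) == c.get(k) for k in all_keys),
            )
            matches.append((g, best))
    return matches
```

Keying on the full union means two groups collide only when every hyperparameter agrees, and the most-matches fallback degrades gracefully when a checkpoint predates a newly added key.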
What does this PR do?
Adds missing parameter-scheduler keys (such as min/max LR) to the set of keys used during the distributed optimizer load, so that parameter groups with different values for those keys no longer have their settings overwritten by other groups.
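As an illustration only (the key names and the original four-key set below are assumptions, not the exact constants in the Megatron-LM source), the shape of the fix is to widen the key set used to identify a parameter group:

```python
# Hypothetical names: the exact constant and key set in Megatron-LM may differ.
# Previously, only a small hard-coded key set identified a parameter group
# when matching optimizer state at distributed-checkpoint load time.
OLD_COMMON_STATE_KEYS = ("lr", "weight_decay", "wd_mult", "lr_mult")

# The fix adds the missing scheduler keys, so groups that differ only in
# min/max LR no longer collide under the same look-up key.
NEW_COMMON_STATE_KEYS = OLD_COMMON_STATE_KEYS + ("min_lr", "max_lr")
```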
Issue tracking
For PRs from open-source community contributors:
Linked issue:
Contribution process
Pre-checks
Code review
Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!
All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.
Step 1: Mark PR as "Ready for Review"
Your PR will be reviewed by the expert reviewers defined in .github/CODEOWNERS. Final Review might get declined if these requirements are not fulfilled.
Step 2: Final Review
For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned. For PRs outside megatron/core, this step is skipped.

Step 3: Approved
Once all required reviewers have approved, the Approved label is applied automatically.

Merge
Any member of mcore-engineers will be able to merge your PR.
For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.