fix issue https://nvbugspro.nvidia.com/bug/6098588#3696

Draft

weijiac0619 wants to merge 1 commit into main from weijia/fix_gptoss_downproj

Conversation

@weijiac0619
Contributor

What does this PR do?

fix issue https://nvbugspro.nvidia.com/bug/6098588

Changelog

  • model_bridge.py: remove the adaptive shape check in _accumulate_grouped_export so tensors are always transposed when transpose_on_export is set.
  • gpt_oss_bridge.py: add transpose_on_export = True as a class attribute on GPTOSSMLPDownProjMapping so down-proj expert weights are transposed on export.

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

@copy-pr-bot

copy-pr-bot Bot commented May 5, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

weijiac0619 marked this pull request as draft on May 5, 2026 19:42
Comment thread on src/megatron/bridge/models/gpt_oss/gpt_oss_bridge.py (outdated)
Comment thread on src/megatron/bridge/models/conversion/model_bridge.py (outdated)
@claude
Contributor

claude Bot commented May 5, 2026

Review Summary

This PR simplifies the transpose_on_export logic in _accumulate_grouped_export by removing the adaptive shape-check and always transposing when the flag is set. It adds transpose_on_export = True as a class attribute on GPTOSSMLPDownProjMapping so the down-proj expert weights are always transposed on export.
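
For reference, the mapping-side change amounts to something like the sketch below. Only the class name and the attribute come from this PR; everything else is a placeholder (the real class subclasses a Megatron Bridge mapping type and carries the weight/bias name patterns):

```python
# Sketch only: not the actual Megatron Bridge source.
class GPTOSSMLPDownProjMapping:
    # Class-level flag: always transpose stacked down-proj expert tensors
    # when exporting Megatron weights to the HF checkpoint layout.
    transpose_on_export = True
```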

Potential bug -- bias transpose regression: GPTOSSMLPDownProjMapping is reused for both weight and bias expert mappings (lines 173-179 and 190-196 in gpt_oss_bridge.py). The class-level transpose_on_export = True means stacked bias tensors (shape [num_experts, hidden]) will also be transposed to [hidden, num_experts], which is incorrect. The removed adaptive logic in model_bridge.py implicitly protected against this. See inline comments for a suggested fix (guard on merged.ndim >= 3).
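
A minimal sketch of that suggested guard, assuming merged is the stacked tensor built inside _accumulate_grouped_export (the helper and variable names below are assumptions; only transpose_on_export and the ndim >= 3 condition come from this review):

```python
import torch

def _finalize_grouped_export(merged: torch.Tensor, transpose_on_export: bool) -> torch.Tensor:
    # Stacked expert weights arrive as [num_experts, in_dim, out_dim] (ndim == 3),
    # while stacked biases arrive as [num_experts, hidden] (ndim == 2).
    # Guarding on ndim >= 3 transposes the weight matrices per expert and
    # leaves the bias tensors untouched.
    if transpose_on_export and merged.ndim >= 3:
        merged = merged.transpose(-1, -2).contiguous()
    return merged
```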

PR metadata: The title references an internal bug-tracker URL. Consider the [ckpt] fix: <description> title format instead.

Docstring nit: The docstring says "For square down_proj", but the transpose is needed regardless of whether the matrix is square.

Missing test coverage: No unit tests for the GPT-OSS MoE export (megatron-to-HF) path.
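
A test along these lines could pin the expected shapes for both cases (names are hypothetical and reuse the _finalize_grouped_export sketch above; a real test would drive the bridge's actual export path):

```python
import torch

def test_down_proj_export_transposes_weights_not_biases():
    num_experts, hidden, ffn = 4, 8, 16
    weights = torch.randn(num_experts, ffn, hidden)  # stacked weight, ndim == 3
    biases = torch.randn(num_experts, hidden)        # stacked bias, ndim == 2

    out_w = _finalize_grouped_export(weights, transpose_on_export=True)
    out_b = _finalize_grouped_export(biases, transpose_on_export=True)

    assert out_w.shape == (num_experts, hidden, ffn)  # weight transposed
    assert out_b.shape == (num_experts, hidden)       # bias left alone
```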

Performance: No perf tests impacted.

weijiac0619 force-pushed the weijia/fix_gptoss_downproj branch from 99b9808 to a235112 on May 6, 2026 01:33
yaoyu-33 added the area:model (Model implementations and HF bridge logic) label on May 7, 2026
Signed-off-by: weijiac <weijiac@NVIDIA.com>