
[model] fix: Enhance MTP expert weight format detection for Qwen3.6 #3740

Open
Yangruipis wants to merge 2 commits into NVIDIA-NeMo:main from redai-infra:fix/wuhuan/qwen3.6_bridge

Conversation

@Yangruipis Yangruipis commented May 7, 2026

What does this PR do?

The MTP expert weights of Qwen3.6-35B-A3B are packed (https://huggingface.co/Qwen/Qwen3.6-35B-A3B/blob/main/model.safetensors.index.json#L1034-L1035), unlike Qwen3.5-35B-A3B, which stores one tensor per expert (https://huggingface.co/Qwen/Qwen3.5-35B-A3B/blob/main/model.safetensors.index.json#L345-L1117).
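For concreteness, the two checkpoints expose the same MTP expert projections under different HF key layouts (key names taken from this PR description and the safetensors indices linked above):

    # Qwen3.6 (packed): one fused tensor per projection, covering all experts
    mtp.layers.0.mlp.experts.gate_up_proj
    mtp.layers.0.mlp.experts.down_proj

    # Qwen3.5 (per-expert): one tensor per expert per projection
    mtp.layers.0.mlp.experts.0.gate_proj.weight
    mtp.layers.0.mlp.experts.0.up_proj.weight
    mtp.layers.0.mlp.experts.0.down_proj.weight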

Both models share the same HF architectures (Qwen3_5MoeForConditionalGeneration) and model_type (qwen3_5_moe), so they are routed to the same bridge. Today the bridge hard-codes the per-expert MTP mapping (mtp.layers.*.mlp.experts.*.gate_proj.weight etc.). On Qwen3.6, none of those HF keys exist, so every routed MTP expert weight is silently skipped during the HF→Megatron load and remains randomly initialized: model_bridge.py only emits a WARNING: Can't find ... in hf_keys and continues. With mtp_loss_scaling_factor=0.1 this leaks noisy gradients into training.

This PR detects the MTP expert storage format from the HF state at provider-build time and selects the correct mapping pair, keeping Qwen3.5 working while fixing Qwen3.6.

Reproduction

torchrun --nproc_per_node=8 examples/conversion/hf_megatron_roundtrip_multi_gpu.py --hf-model-id Qwen/Qwen3.6-35B-A3B --tp 1 --ep 8
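Before the fix, this command prints a warning of the form below for every local routed MTP expert on each EP rank (quoted from the verification notes in Additional Information); after the fix both checkpoints load with no MTP warnings:

    WARNING: Can't find the following HF parameters in hf_keys: ['mtp.layers.0.mlp.experts.{i}.gate_proj.weight', ...]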

Changelog

  • src/megatron/bridge/models/qwen_vl/qwen35_vl_bridge.py
    • Qwen35VLMoEBridge.provider_bridge: peek hf_pretrained.state.source.get_all_keys() when mtp_num_layers > 0 and set self._mtp_experts_packed = True iff mtp.layers.0.mlp.experts.gate_up_proj is present (Qwen3.6); otherwise False (Qwen3.5, current behavior). A sketch of this detection follows this list.
    • Qwen35VLMoEBridge.mapping_registry: branch the MTP routed-expert mapping on self._mtp_experts_packed:
      • packed → FusedGatedExpertMapping(linear_fc1 ↔ mtp.layers.*.mlp.experts.gate_up_proj) + FusedExpertMapping(linear_fc2 ↔ mtp.layers.*.mlp.experts.down_proj, transpose_on_export=True) (mirrors the main-decoder mapping).
      • per-expert → existing GatedMLPMapping + AutoMapping over mtp.layers.*.mlp.experts.*.{gate,up,down}_proj.weight.
    • Update the stale comment block above the MTP mappings — MTP MoE format is not always per-expert; it depends on the checkpoint.
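As a minimal, self-contained sketch of the detection step referenced in the first bullet (the helper name detect_mtp_experts_packed is hypothetical; the state.source.get_all_keys() accessor and the probe key come from this PR, and the surrounding bridge plumbing is simplified):

    def detect_mtp_experts_packed(hf_pretrained, mtp_num_layers: int) -> bool:
        """Return True iff the checkpoint stores MTP routed experts in the
        fused Qwen3.6 layout rather than the per-expert Qwen3.5 layout."""
        if mtp_num_layers <= 0:
            return False
        state = getattr(hf_pretrained, "state", None)
        if state is None:
            # Bare PretrainedConfig (config-only export): no weights to
            # inspect, so keep the existing per-expert default.
            return False
        hf_keys = set(state.source.get_all_keys())
        return "mtp.layers.0.mlp.experts.gate_up_proj" in hf_keys

mapping_registry then branches on the resulting flag: the packed path registers the FusedGatedExpertMapping/FusedExpertMapping pair, and the per-expert path keeps the existing GatedMLPMapping/AutoMapping pair.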

GitHub Actions CI

See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Verified by loading both Qwen3.5-35B-A3B and Qwen3.6-35B-A3B HF checkpoints via the bridge: pre-fix Qwen3.6 prints WARNING: Can't find the following HF parameters in hf_keys: ['mtp.layers.0.mlp.experts.{i}.gate_proj.weight', ...] for every local routed expert across all EP
    ranks; post-fix both checkpoints load cleanly with no MTP warnings.
  • Detection falls back to the per-expert path when hf_pretrained is a bare PretrainedConfig (config-only export). This preserves current behavior; a future PR could thread format detection through the export path as well.
  • Related to # (issue)

Yangruipis added 2 commits May 7, 2026 22:45
Add support for detecting MTP expert weight format for Qwen3.5 and Qwen3.6 models.

Signed-off-by: 杨睿 <yangruipis@163.com>
Refactor MTP expert weight format detection for clarity and maintainability.

Signed-off-by: 杨睿 <yangruipis@163.com>
@copy-pr-bot
Copy link
Copy Markdown

copy-pr-bot Bot commented May 7, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@Yangruipis Yangruipis changed the title from Fix/wuhuan/qwen3.6 bridge to [model] fix: Enhance MTP expert weight format detection for Qwen3.6 on May 7, 2026
@Yangruipis Yangruipis (Author) commented

@yaoyu-33 could you please review? Thanks.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the waiting-on-maintainers (Waiting on maintainers to respond) label on May 10, 2026

Labels

community-request · waiting-on-maintainers (Waiting on maintainers to respond)
