
[TVMScript][Relax] Use tir.SizeVar for shape variables #18833

Closed
tqchen wants to merge 2 commits into apache:main from tqchen:sizevar-tvmscript

Conversation

@tqchen
Member

@tqchen tqchen commented Feb 26, 2026

The TVMScript parser now uses tir.SizeVar (known non-negative) instead of tir.Var for symbolic variables defined in size contexts (R.Tensor shapes, R.Shape), while keeping tir.Var for R.Prim arguments. This aligns TVMScript parsing with Relax transformations that call Analyzer::MarkGlobalNonNegValue, fixing shape inference differences between parsing and transformation passes.
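To see why the choice of variable class matters, here is a minimal self-contained sketch (plain Python stand-ins, not TVM's actual classes): a SizeVar carries a non-negativity guarantee that an arithmetic analyzer can exploit when simplifying shape expressions, while a plain Var does not.

```python
class Var:
    """A symbolic integer variable with no known bounds (mirrors tir.Var)."""
    def __init__(self, name):
        self.name = name

class SizeVar(Var):
    """A symbolic variable known to be >= 0 (mirrors tir.SizeVar)."""

def simplify_max_with_zero(v):
    # max(v, 0) simplifies to v only when v is known non-negative.
    if isinstance(v, SizeVar):
        return v.name            # max(n, 0) -> n
    return f"max({v.name}, 0)"   # cannot simplify for a plain Var

print(simplify_max_with_zero(SizeVar("n")))  # n
print(simplify_max_with_zero(Var("m")))      # max(m, 0)
```

This is the kind of difference that causes the shape-inference mismatch the PR describes: a pass that marks `n` non-negative simplifies an expression that the parser, using a plain Var, cannot.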

@gemini-code-assist
Contributor

Summary of Changes

Hello @tqchen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the TVMScript parser by introducing a more precise handling of symbolic variables. It ensures that variables representing sizes in R.Tensor and R.Shape contexts are correctly typed as tir.SizeVar, while general primitive arguments default to tir.Var. This refinement resolves inconsistencies in shape inference between parsing and Relax transformation passes, leading to more accurate structural information and improved compiler behavior.

Highlights

  • Symbolic Variable Typing: The TVMScript parser now distinguishes between tir.Var and tir.SizeVar for symbolic variables, using tir.SizeVar for variables defined in size contexts (e.g., R.Tensor shapes, R.Shape).
  • Improved Shape Inference Consistency: This change aligns TVMScript parsing with Relax transformations, fixing discrepancies in shape inference that previously existed between parsing and transformation passes.
  • New get_symbolic_size_vars Methods: New get_symbolic_size_vars methods were introduced across various StructInfoProxy subclasses to explicitly identify symbolic variables that represent sizes.
  • Context-Aware Variable Creation: The parser's logic for collecting symbolic variables from function parameters was updated to conditionally create tir.Var or tir.SizeVar based on whether the variable is identified as a size variable.
  • Enhanced Test Coverage: New tests were added to verify the correct inference of tir.SizeVar for shape variables and tir.Var for R.Prim values, including cases where R.Prim values become tir.SizeVar due to other arguments.
  • Bug Fix: An xfail marker was removed from an existing test, indicating a resolution for a bug related to implicit bounds not being provided during parsing.


Changelog
  • python/tvm/script/parser/relax/entry.py
    • Added get_symbolic_size_vars method to StructInfoProxy and its relevant subclasses (FuncStructInfoProxy, TupleStructInfoProxy, PrimStructInfoProxy) to identify symbolic variables representing sizes.
    • Removed the get_symbolic_vars method from ObjectProxy as it was redundant.
  • python/tvm/script/parser/relax/parser.py
    • Modified collect_symbolic_var_from_params to collect both general symbolic variables and a separate set of symbolic size variables.
    • Updated the logic to create tir.SizeVar for variables identified as size variables and tir.Var otherwise, based on the collected sets.
  • tests/python/relax/test_tvmscript_parser.py
    • Removed pytest.mark.xfail from test_function_symbolic_variables_are_annotated, indicating a bug fix.
    • Added test_symbolic_shape_variables_are_size_var to verify that symbolic variables inferred from shapes are correctly identified as tir.SizeVar.
    • Added test_symbolic_variables_from_prim_value_may_be_negative to confirm that symbolic variables from R.Prim values are tir.Var by default.
    • Added test_other_arguments_may_cause_prim_value_to_define_size_var to test scenarios where R.Prim values become tir.SizeVar if they also appear in R.Shape contexts.
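The parser-side change described in the changelog can be sketched as follows. The function and parameter names here are illustrative, not TVM internals: names collected from size contexts (R.Tensor shapes, R.Shape) are built as SizeVar, everything else as a plain Var.

```python
def create_symbolic_vars(all_names, size_names):
    """Map each symbolic name to an illustrative (kind, name) pair.

    all_names:  every symbolic name found in the function signature
    size_names: the subset that appeared in a size context
    """
    return {
        name: (("SizeVar" if name in size_names else "Var"), name)
        for name in sorted(all_names)
    }

vars_ = create_symbolic_vars(
    all_names={"n", "m", "flag"},
    size_names={"n", "m"},  # appeared in R.Tensor / R.Shape annotations
)
print(vars_["n"])     # ('SizeVar', 'n')
print(vars_["flag"])  # ('Var', 'flag')
```

Note that a name appearing in both an R.Prim argument and a size context lands in `size_names`, which matches the behavior exercised by test_other_arguments_may_cause_prim_value_to_define_size_var.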


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the use of tir.SizeVar for symbolic variables in shape contexts within the TVMScript parser. This is a great improvement as it aligns the parser's shape inference with later Relax transformation passes, fixing inconsistencies. The changes in entry.py to introduce get_symbolic_size_vars and in parser.py to use it for creating tir.SizeVar are logical and well-implemented. The new tests are comprehensive and validate the distinction between shape variables (tir.SizeVar) and general symbolic variables (tir.Var) from R.Prim.

I have one concern regarding the un-xfailing of test_function_symbolic_variables_are_annotated, which I've detailed in a specific comment. It seems the test might pass for incorrect reasons.

TVMScript parser now uses `tir.SizeVar` (known non-negative) instead of
`tir.Var` for symbolic variables defined in size contexts (R.Tensor shapes,
R.Shape), while keeping `tir.Var` for R.Prim arguments. This aligns
TVMScript parsing with Relax transformations that call
`Analyzer::MarkGlobalNonNegValue`, fixing shape inference differences
between parsing and transformation passes (Issue apache#16877).

The test_function_symbolic_variables_are_annotated test uses
strided_slice(A, [0], [0], [extent-1]). With extent >= 0 (shape
variable), extent=0 is valid making extent-1=-1, which triggers
the negative-index clamping logic. Using assume_inbound=True avoids
this since the test is about shape inference, not boundary checking.
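The boundary case in this commit message comes down to one subtraction. Once `extent` is a SizeVar, only `extent >= 0` is guaranteed, so `extent == 0` is legal and the slice end `extent - 1` can be `-1`, which strided_slice-style semantics interpret via the negative-index clamping path rather than as a plain empty range:

```python
def slice_end(extent):
    """End index used by the test's strided_slice(A, [0], [0], [extent - 1])."""
    end = extent - 1
    # A negative end would hit the clamping logic, changing the simplified
    # shape expression that the test's shape inference expects.
    return end

print(slice_end(5))  # 4
print(slice_end(0))  # -1: legal once extent is known only to be >= 0
```

This is why the commit opts for assume_inbound=True: the test targets shape inference, not boundary checking, so the clamping path is out of scope.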
@tqchen
Member Author

tqchen commented Feb 27, 2026

Closing: the SizeVar approach is too fragile — too many passes create new symbolic Var objects (fuse_ops, lambda_lift, canonicalize_bindings, bind_symbolic_vars, etc.) without preserving SizeVar, causing StructuralEqual mismatches. The existing local kernel shape reasoning (MarkGlobalNonNegValue during transformations) is sufficient.
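The fragility tqchen describes can be illustrated with a minimal sketch (again plain Python stand-ins, not TVM itself): structural equality distinguishes the two variable classes, so a pass that recreates a symbolic variable as a plain Var produces a structurally different function from one that used SizeVar, even when the names match.

```python
class Var:
    def __init__(self, name):
        self.name = name
    def struct_equal(self, other):
        # Structural equality is class-sensitive, like TVM's StructuralEqual.
        return type(self) is type(other) and self.name == other.name

class SizeVar(Var):
    pass

before = SizeVar("n")  # what the parser would produce under this PR
after = Var("n")       # what a pass like fuse_ops recreates
print(before.struct_equal(after))  # False: the class mismatch breaks equality
```

Fixing this would require every such pass (fuse_ops, lambda_lift, canonicalize_bindings, bind_symbolic_vars, and others) to preserve the SizeVar class, which is the maintenance burden behind the decision to close.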

@tqchen tqchen closed this Feb 27, 2026
