Add low-memory mode for batch polynomial opening #327
Conversation
Activity
Code Review
The pull request introduces a low-memory mode for batch polynomial opening, which effectively trades computation for reduced peak memory usage. It also includes significant optimizations for SRS loading through buffered I/O and parallel deserialization of uncompressed points. While the core logic for the low-memory mode is sound, there are issues regarding test isolation and portability in the utility functions, and a potential security/correctness risk in the SRS loading implementation due to the use of unchecked point parsing.
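The buffered SRS loading mentioned above can be sketched roughly as follows. Everything here is an illustrative assumption, not the PR's actual code: the `read_records` helper, the in-memory `Cursor`, and the 96-byte record length (a typical uncompressed G1 point size on common pairing curves).

```rust
use std::io::{BufReader, Cursor, Read};

// Illustrative sketch only: read fixed-size uncompressed point records
// through a BufReader so each record does not cost a separate read call.
// The 96-byte record length is an assumption, not taken from this PR.
fn read_records(data: &[u8], record_len: usize) -> Vec<Vec<u8>> {
    let mut reader = BufReader::new(Cursor::new(data));
    let mut records = Vec::new();
    let mut buf = vec![0u8; record_len];
    // read_exact fails at EOF, which terminates the loop.
    while reader.read_exact(&mut buf).is_ok() {
        records.push(buf.clone());
    }
    records
}

fn main() {
    // Two 96-byte records and nothing else: expect exactly 2 records.
    let blob = vec![0u8; 192];
    assert_eq!(read_records(&blob, 96).len(), 2);
}
```

Each deserialized chunk could then be handed to a thread pool for the parallel point parsing the review summary describes.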
```rust
E::G1Affine::from_uncompressed_unchecked(&uncompressed)
    .into_option()
    .ok_or(serdes::SerdeError::DeserializeError)
```
Using `from_uncompressed_unchecked` skips critical validation checks, such as verifying that the points are actually on the curve. While this is faster, it introduces a risk if the SRS file is corrupted or tampered with. Since the SRS is a foundational security component, it is safer to use `from_uncompressed`, which performs these checks, especially since the uncompressed format already avoids the most expensive part (the square-root computation).
Suggested change:

```diff
-E::G1Affine::from_uncompressed_unchecked(&uncompressed)
+E::G1Affine::from_uncompressed(&uncompressed)
     .into_option()
     .ok_or(serdes::SerdeError::DeserializeError)
```
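To make the checked-vs-unchecked distinction concrete, here is a toy sketch on the curve y² = x³ + 7 over GF(97). It is purely illustrative and not the real `E::G1Affine` API: the checked parse verifies the curve equation, while the unchecked variant accepts any coordinate pair.

```rust
// Toy illustration only: a small prime field and a short curve check.
const P: u64 = 97;

#[derive(Debug, PartialEq)]
struct Point {
    x: u64,
    y: u64,
}

fn on_curve(x: u64, y: u64) -> bool {
    // y^2 == x^3 + 7 (mod 97)
    (y * y) % P == (x * x % P * x + 7) % P
}

// "Checked" parse: rejects off-curve points (tampered/corrupted data).
fn from_uncompressed(x: u64, y: u64) -> Option<Point> {
    on_curve(x, y).then(|| Point { x, y })
}

// "Unchecked" parse: faster, but silently accepts invalid points.
fn from_uncompressed_unchecked(x: u64, y: u64) -> Point {
    Point { x, y }
}

fn main() {
    // (1, 28) lies on the toy curve: 28^2 = 784 ≡ 8 ≡ 1 + 7 (mod 97).
    assert!(from_uncompressed(1, 28).is_some());
    // (1, 29) does not, so the checked parse rejects it...
    assert!(from_uncompressed(1, 29).is_none());
    // ...while the unchecked parse accepts it anyway.
    assert_eq!(from_uncompressed_unchecked(1, 29), Point { x: 1, y: 29 });
}
```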
```rust
let srs_path = format!("/tmp/kzg_srs_{}.bin", n_input_vars);
expander_pcs_init_with_srs_path::<FieldConfig, PCS>(n_input_vars, mpi_config, Some(&srs_path))
```
Hardcoding a path in `/tmp` with a fixed filename is problematic for several reasons:

- Portability: `/tmp` is not available on all operating systems (e.g., Windows).
- Test isolation: multiple tests running in parallel, or multiple users running tests on the same machine, will conflict over the same file. If a test leaves a corrupted file, subsequent tests will fail.
- Regression: the docstring for `expander_pcs_init_testing_only` states it should "always regenerate SRS", but this change enables caching by default.

Consider passing `None` to `expander_pcs_init_with_srs_path` to maintain the "always regenerate" behavior for this helper, or use a more robust temporary-file mechanism if caching is desired for specific tests.
Suggested change:

```diff
-let srs_path = format!("/tmp/kzg_srs_{}.bin", n_input_vars);
-expander_pcs_init_with_srs_path::<FieldConfig, PCS>(n_input_vars, mpi_config, Some(&srs_path))
+expander_pcs_init_with_srs_path::<FieldConfig, PCS>(n_input_vars, mpi_config, None)
```
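If caching is desired, a portable alternative is the OS temp directory plus a per-process suffix. The `srs_cache_path` helper below is hypothetical, not part of the PR:

```rust
use std::env;
use std::path::PathBuf;
use std::process;

// Hypothetical helper: build a cache path in the OS temp directory
// (portable across Windows/Unix) and include the PID so parallel test
// runs do not clobber each other's files.
fn srs_cache_path(n_input_vars: usize) -> PathBuf {
    let mut path = env::temp_dir();
    path.push(format!("kzg_srs_{}_{}.bin", n_input_vars, process::id()));
    path
}

fn main() {
    let path = srs_cache_path(10);
    // The filename encodes both the variable count and the PID.
    let name = path.file_name().unwrap().to_string_lossy().into_owned();
    assert!(name.starts_with("kzg_srs_10_"));
    println!("{}", path.display());
}
```

Note that a PID suffix trades away cross-run caching; a named temporary file (e.g., via the `tempfile` crate) is another common choice when the cache should persist only for a single test process.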
Force-pushed from f4c9a82 to 71777e7
When the `low-memory` feature is enabled, `prover_merge_points` moves `tilde_gs` into sumcheck (instead of cloning) and recomputes them afterwards for `g_prime` construction. This eliminates one full copy of the polynomial data at the cost of redundant computation.

- Add `low-memory` feature flag to `poly_commit`
- Add `low_memory: bool` parameter to `prover_merge_points`
- Callers default to `cfg!(feature = "low-memory")`
- Extract `build_tilde_gs` helper to avoid code duplication
- No behavior change when the feature is not enabled

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The compiler cannot statically verify that `tilde_gs` is not moved in both if/else branches. Wrap it in `Option` so `take()` explicitly transfers ownership in the low-memory path, while `as_ref()` borrows in the default path. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
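The `Option` wrapper described in this commit follows a standard Rust pattern for a conditional move. The sketch below uses illustrative names (`consume`, `merge`), not the actual poly_commit code:

```rust
fn consume(v: Vec<u64>) -> u64 {
    v.iter().sum()
}

// Illustrative only: conditionally move out of, or borrow from, the
// same binding. Wrapping in Option lets take() transfer ownership in
// one branch while as_ref() merely borrows in the other.
fn merge(low_memory: bool) -> u64 {
    let mut tilde_gs: Option<Vec<u64>> = Some(vec![1, 2, 3]);
    if low_memory {
        // take() moves the Vec out, leaving None behind: zero clone,
        // but the data must be recomputed if it is needed again.
        consume(tilde_gs.take().expect("tilde_gs present"))
    } else {
        // as_ref() borrows, so cloning keeps the original available.
        consume(tilde_gs.as_ref().expect("tilde_gs present").clone())
    }
}

fn main() {
    // Both paths produce the same result; only memory behavior differs.
    assert_eq!(merge(true), 6);
    assert_eq!(merge(false), 6);
}
```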
Force-pushed from bf48823 to f9dcdf9
Summary

- Add a `low-memory` Cargo feature to the `poly_commit` crate.
- `prover_merge_points` moves `tilde_gs` into sumcheck (zero clone) and recomputes them for `g_prime`, reducing peak memory by eliminating one full copy of the polynomial data.
- Extract a `build_tilde_gs` helper to avoid code duplication between the two paths.

Design

The optimization trades compute time for memory:

- Default (`low-memory` off): clone `tilde_gs` into `sumcheck_poly` and keep the original for `g_prime`; same as current behavior.
- Low-memory (`low-memory` on): move `tilde_gs` into `sumcheck_poly` and recompute from `polys` + `eq_t_i` for `g_prime`; peak memory is reduced by roughly size(`tilde_gs`).

Usage: the `low_memory` parameter is also exposed as a `bool` on `prover_merge_points` for callers that want runtime control.

Test plan

- `cargo test -p poly_commit` (default, no feature)
- `cargo test -p poly_commit --features low-memory`

🤖 Generated with Claude Code