feat: Added cuda version selection to uv build. #433

BlueCrescent wants to merge 5 commits into main from
Conversation
Pull request overview
This pull request adds support for different CUDA versions in uv installation by moving PyTorch from a required dependency to an optional dependency with multiple CUDA-specific variants (cpu, cu126, cu128, cu130). The PR configures uv to use different PyTorch index URLs based on the selected CUDA variant.
Changes:
- Moved torch from required dependencies to optional dependencies with CUDA version variants
- Added uv-specific configuration for CUDA version conflicts and PyTorch index sources
- Updated README installation instructions to reflect the new CUDA version selection
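The uv-specific configuration described above typically combines conflicting extras with per-extra index sources. A sketch of what this looks like, following uv's documented PyTorch pattern (the PR's actual `pyproject.toml` may differ in details):

```toml
[project.optional-dependencies]
cpu = ["torch>=2.10,<2.11.0"]
cu126 = ["torch>=2.10,<2.11.0"]
cu128 = ["torch>=2.10,<2.11.0"]
cu130 = ["torch>=2.10,<2.11.0"]

[tool.uv]
# Declare the extras as mutually exclusive so uv rejects e.g. --extra cpu --extra cu126.
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu126" },
    { extra = "cu128" },
    { extra = "cu130" },
  ],
]

[tool.uv.sources]
# Route torch to the matching PyTorch wheel index depending on the chosen extra.
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu126", extra = "cu126" },
  { index = "pytorch-cu128", extra = "cu128" },
  { index = "pytorch-cu130", extra = "cu130" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true
# ...analogous [[tool.uv.index]] entries for cu128 and cu130.
```

The `explicit = true` flag keeps the PyTorch indexes from being consulted for any package other than those pinned to them in `tool.uv.sources`.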
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| pyproject.toml | Removed torch from dependencies, added CUDA-specific torch extras (cpu, cu126, cu128, cu130), and configured uv for handling PyTorch installation from different index URLs |
| README.md | Updated installation commands to include CUDA version selection syntax for uv |
pyproject.toml
Outdated
```toml
cpu = ["torch>=2.10,<2.11.0"]
cu126 = ["torch>=2.10,<2.11.0"]
cu128 = ["torch>=2.10,<2.11.0"]
cu130 = ["torch>=2.10,<2.11.0"]
```
The torch dependency has been moved from the main dependencies to optional dependencies, which means torch is no longer installed by default. Users who install the package without specifying one of the CUDA extras (cpu, cu126, cu128, cu130) will not get PyTorch installed. This is a breaking change that should be documented in the CHANGELOG_DEV.md (which the PR description indicates as incomplete).
Consider whether this is the intended behavior. If users must explicitly choose a CUDA version, this should be clearly communicated in the documentation and changelog.
```diff
 curl -LsSf https://astral.sh/uv/install.sh | sh

-uv sync
+uv sync --extra [cpu|cu126|cu128|cu130] # Get CUDA version via nvidia-smi
```
The README shows installation commands using the bracket notation `[cpu|cu126|cu128|cu130]`, but this is not valid shell syntax. Users cannot literally type this command and expect it to work. The command should make clear that users must choose exactly one option, for example:
- `uv sync --extra cpu`
- `uv sync --extra cu126`
- `uv sync --extra cu128`
- `uv sync --extra cu130`

Consider using a different notation or providing explicit examples rather than shell pipe syntax inside brackets.
```diff
 # For developers: use [tests,linting] and install pre-commit hooks

-uv sync --extra tests --extra linting
+uv sync --extra [cpu|cu126|cu128|cu130] --extra tests --extra linting
```
The README shows installation commands using the bracket notation `[cpu|cu126|cu128|cu130]`, but this is not valid shell syntax. Users cannot literally type this command and expect it to work. The command should make clear that users must choose exactly one option, for example:
- `uv sync --extra cpu --extra tests --extra linting`
- `uv sync --extra cu126 --extra tests --extra linting`

Consider using a different notation or providing explicit examples rather than shell pipe syntax inside brackets.
pyproject.toml
Outdated
```toml
cpu = ["torch>=2.10,<2.11.0"]
cu126 = ["torch>=2.10,<2.11.0"]
cu128 = ["torch>=2.10,<2.11.0"]
cu130 = ["torch>=2.10,<2.11.0"]
```
The flash-attn dependency (line 27) requires torch to be installed, but torch has been moved to optional dependencies. This could cause installation failures for users who don't specify one of the CUDA extras, or for users installing via pip rather than uv.
Consider either:
- Moving flash-attn to the optional dependencies as well, with each CUDA extra including it
- Adding a note in the installation documentation that one of the CUDA extras must be specified
- Ensuring the package can't be installed without specifying a CUDA extra
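The first option could look like the following sketch. Adding `flash-attn` to the GPU extras (but not `cpu`, since flash-attn has no CPU build) keeps both packages coupled to the same selection; the exact `flash-attn` constraint here is illustrative, not taken from the PR:

```toml
[project.optional-dependencies]
cpu = ["torch>=2.10,<2.11.0"]
cu126 = ["torch>=2.10,<2.11.0", "flash-attn"]
cu128 = ["torch>=2.10,<2.11.0", "flash-attn"]
cu130 = ["torch>=2.10,<2.11.0", "flash-attn"]
```

Note that `flash-attn` builds against torch at install time, so it may still need `--no-build-isolation` or a pre-built wheel matching the chosen CUDA version.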
```diff
 # Install PyTorch, psutil, Ninja and Flash Attention

-pip install "torch<2.11.0"
+pip install "torch<2.11.0" # Or appropriate version for your CUDA setup.
```
The comment suggests using "appropriate version for your CUDA setup" but the version constraint "torch<2.11.0" doesn't specify how to select the CUDA version. With the new changes, users should be directed to use the appropriate PyTorch index URL for their CUDA version when installing manually, or they should install with extras like in Option 1. Consider updating this comment to align with the new CUDA version selection approach.
Suggested change:

```diff
-pip install "torch<2.11.0" # Or appropriate version for your CUDA setup.
+# For PyTorch, select the correct index URL for your CUDA/CPU setup from https://pytorch.org/get-started/locally/
+pip install --index-url https://download.pytorch.org/whl/cu121 "torch<2.11.0"
```
…optional dependencies
…ting PreTrainedModel.
What does this PR do?
Support different CUDA versions in uv installation.
General Changes
Checklist before submitting final PR
- Tests pass (`python tests/tests.py`)
- Changelog updated (`CHANGELOG_DEV.md`)