Checks

I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pixi, using pixi --version.
Reproducible example
Given something like this in the pyproject.toml:

```toml
[tool.pixi.pypi-dependencies]
torch = { version = "==2.5.1+cu121" }
```
I get this error message:
```
failed to resolve pypi dependencies
╰─▶ Because there is no version of torch==2.5.1+cu121 and you require torch==2.5.1+cu121, we can conclude that your requirements are unsatisfiable.
```
My current workaround is to comment out the torch GPU-related packages in my toml, activate the environment via conda, and pip install them afterwards. This is annoying, and it also means my pixi.lock cannot accurately reflect the pytorch version.
Issue description
N/A
Expected behavior
torch==2.5.1+cu121 should be properly installed
This works for me to get torch from PyPI.

pixi.toml
```toml
[project]
channels = ["conda-forge"]
name = "torch-py"
platforms = ["linux-64"]

# Pytorch requires an extra index
[pypi-options]
extra-index-urls = ["https://download.pytorch.org/whl/cu121"]

# Use a later libc version than the default
[system-requirements]
libc = "2.35"

[pypi-dependencies]
torch = { version = "==2.5.1+cu121" }

[dependencies]
python = ">=3.10, <3.13"
numpy = "*"
```
pyproject.toml
```toml
[project]
name = "torch-py"
dependencies = [
    "torch==2.5.1+cu121",
]
requires-python = ">=3.10, <3.13"

[tool.pixi]
project.channels = ["https://fast.prefix.dev/conda-forge"]
project.platforms = ["linux-64"]

# Pytorch requires an extra index
pypi-options.extra-index-urls = ["https://download.pytorch.org/whl/cu121"]

# Use a later libc version than the default
system-requirements.libc = "2.35"

# Install conda dependencies
dependencies = { numpy = ">=2.1, <3" }
```
```
❯ pixi run python -c "import torch; print(torch.cuda.is_available())"
True
```