DEV: pin CUDA variant for PyTorch #186
Conversation
pyproject.toml (outdated)
# Note: JAX and PyTorch automatically install CUDA variants
# thanks to the `system-requirements` below.
[tool.pixi.feature.cuda-backends]
system-requirements = { cuda = "12" }

[tool.pixi.feature.cuda-backends.target.linux-64.dependencies]
cupy = "*"
jaxlib = { version = "*", build = "cuda12*" }
@traversaro could I double check that this is correct?
This is a bit tricky. It is correct that having `system-requirements = { cuda = "12" }` and `jax` and `pytorch` as dependencies will result in CUDA-enabled `jaxlib` and `pytorch` being installed, as in both cases the CUDA variants have higher priority (see https://github.com/conda-forge/pytorch-cpu-feedstock/blob/46274b8459ee640a9f90a90d75ac931b770673ed/recipe/meta.yaml#L5-L10 and https://github.com/conda-forge/jaxlib-feedstock/blob/8d785af5387be376037f0210fc6dc2f5da95613c/recipe/meta.yaml#L4-L6). However, having a constraint on the build string ensures that this will always remain true in the future.
Let's take an example. Suppose you remove the `jaxlib = { version = "*", build = "cuda*" }` constraint, and a new jaxlib 0.100.0 (a made-up number) is released. Suppose also that jaxlib 0.100.0 ships without cuda-version==12 compatible builds, for example because CUDA 12 was dropped and only CUDA 13 is supported, or because there was a regression in the CUDA builds for 0.100.0 and they were marked as broken while CPU builds remained available. In that case, if you do not have `jaxlib = { version = "*", build = "cuda*" }` and you refresh the lockfile, the solver will just silently install a CPU-only version of jaxlib, whereas an error will be printed if `jaxlib = { version = "*", build = "cuda*" }` is present. In a nutshell, I find having `jaxlib = { version = "*", build = "cuda*" }` a bit more robust, even though one could disagree, since it is not entirely clear whether the build string structure is part of the public interface of conda packages. The long-term clean solution for this problem is probably flags (see conda/ceps#111 and https://prefix.dev/blog/ceps_2025), but that still needs to be discussed at the CEP level.
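To make the trade-off concrete, here is a minimal sketch of the two forms being compared. The feature and table names follow the snippet quoted above; the exact `cuda12*` build-string pattern is an assumption based on the feedstock recipes linked above and should be checked against the current conda-forge builds.

```toml
# Form 1: rely on system-requirements alone. The CUDA variant is
# preferred while it exists, but the solver silently falls back to a
# CPU build if a future jaxlib release ships no cuda 12 builds.
[tool.pixi.feature.cuda-backends]
system-requirements = { cuda = "12" }

[tool.pixi.feature.cuda-backends.target.linux-64.dependencies]
jaxlib = "*"

# Form 2: additionally constrain the build string, so the solve fails
# loudly instead of downgrading to a CPU build (pattern is an assumption):
#
# jaxlib = { version = "*", build = "cuda12*" }
```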
> for example as cuda 12 was dropped and only cuda 13 is supported [...] the solver will just silently install a cpu-only version of jaxlib
This is a strong argument against this PR. I'll make it explicit again.
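Making it explicit for the torch side as well could look roughly like the sketch below; the `pytorch` build-string pattern here is my assumption and should be verified against the conda-forge pytorch-cpu-feedstock linked earlier.

```toml
[tool.pixi.feature.cuda-backends]
system-requirements = { cuda = "12" }

[tool.pixi.feature.cuda-backends.target.linux-64.dependencies]
cupy = "*"
# Build-string pins make the solve error out if no CUDA 12 build exists,
# instead of silently falling back to CPU builds.
jaxlib = { version = "*", build = "cuda12*" }
pytorch = { version = "*", build = "cuda12*" }
```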
@lucascolley ready for review again
thanks for the fast help as always @traversaro !! much appreciated
thanks Guido
* DEV: remove redundant CUDA pins for JAX
* Revert JAX and make torch explicit too