[Build/CI] Fix CUDA 11.8 build #17679
Merged
29 commits:
fe02909  [Build/CI] Disable moe_permute_unpermute kernels on CUDA 11.8 (tlrmchlsmth)
2700c8d  Unblock 11.8 build for testing (tlrmchlsmth)
961c060  naming (tlrmchlsmth)
c6d8786  Merge branch 'main' into fix_permute_build (tlrmchlsmth)
d009766  revert (tlrmchlsmth)
865f611  try to fix 11.8 flashinfer issues (tlrmchlsmth)
add6cc3  fixup? (tlrmchlsmth)
9716bf4  potential fix for 11.8 (LucasWilkinson)
5fb0487  Merge remote-tracking branch 'nm/lwilkinson/fix-11.8-build' into fix_… (tlrmchlsmth)
ece9ced  Merge branch 'main' into fix_permute_build (tlrmchlsmth)
b4c526c  revert cuda.py changes (tlrmchlsmth)
c691b4a  [Build/CI] Disable moe_permute_unpermute kernels on CUDA 11.8 (tlrmchlsmth)
4225c43  Unblock 11.8 build for testing (tlrmchlsmth)
e7d92a2  naming (tlrmchlsmth)
d87632d  revert (tlrmchlsmth)
9172dbb  try to fix 11.8 flashinfer issues (tlrmchlsmth)
b50fadc  fixup? (tlrmchlsmth)
a00364f  potential fix for 11.8 (LucasWilkinson)
69c3687  revert cuda.py changes (tlrmchlsmth)
f6aab1b  Merge branch 'main' into fix_permute_build (tlrmchlsmth)
27bfbbc  Merge branch 'fix_permute_build' of https://github.com/vllm-project/v… (tlrmchlsmth)
1218067  Fixup (tlrmchlsmth)
dd74a07  respect flashinfer_use_aot arg (tlrmchlsmth)
f156e06  FLASHINFER_ENABLE_SM90=0 env (tlrmchlsmth)
dce957f  dont install dev dependency for 11.8 (LucasWilkinson)
729310b  add missing binding (LucasWilkinson)
02614b8  Merge remote-tracking branch 'origin/main' into fix_permute_build (LucasWilkinson)
d6d9fa8  remove turning off AOT (LucasWilkinson)
3bbecc6  fix registration (LucasWilkinson)
Dockerfile
@@ -263,8 +263,11 @@ if [ "$TARGETPLATFORM" != "linux/arm64" ]; then \
         export TORCH_CUDA_ARCH_LIST='7.5 8.0 8.9 9.0 10.0+PTX'; \
     else \
         export TORCH_CUDA_ARCH_LIST='7.5 8.0 8.9 9.0+PTX'; \
-    fi && \
-    export FLASHINFER_ENABLE_AOT=1; \
+    fi; \
+    CUDA_MAJOR="${CUDA_VERSION%%.*}"; \
+    if [ "$CUDA_MAJOR" -lt 12 ]; then \
+        export FLASHINFER_ENABLE_SM90=0; \
+    fi; \
     uv pip install --system --no-build-isolation "git+https://github.com/flashinfer-ai/flashinfer@21ea1d2545f74782b91eb8c08fd503ac4c0743fc" ; \
 fi
 COPY examples examples
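The gate relies on POSIX parameter expansion: "${CUDA_VERSION%%.*}" strips everything from the first dot onward, leaving only the major version. A minimal standalone sketch of the same logic (the sample CUDA_VERSION values are illustrative; inside the image the variable comes from the CUDA base image):

#!/bin/sh
# Illustrative sketch of the version gate used in the Dockerfile above.
for CUDA_VERSION in 11.8.0 12.4.1; do
    unset FLASHINFER_ENABLE_SM90
    # "%%.*" removes the longest suffix starting at the first '.',
    # so "11.8.0" -> "11" and "12.4.1" -> "12".
    CUDA_MAJOR="${CUDA_VERSION%%.*}"
    if [ "$CUDA_MAJOR" -lt 12 ]; then
        # nvcc only supports sm_90 (Hopper) from CUDA 12 onward, so the
        # FlashInfer SM90 kernels are disabled before the pip install.
        export FLASHINFER_ENABLE_SM90=0
    fi
    echo "CUDA $CUDA_VERSION -> major $CUDA_MAJOR, FLASHINFER_ENABLE_SM90=${FLASHINFER_ENABLE_SM90:-unset}"
done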
@@ -275,7 +278,7 @@ RUN --mount=type=cache,target=/root/.cache/uv \
     . /etc/environment && \
     uv pip list

-# Although we build Flashinfer with AOT mode, there's still
+# Even when we build Flashinfer with AOT mode, there's still
 # some issues w.r.t. JIT compilation. Therefore we need to
 # install build dependencies for JIT compilation.
 # TODO: Remove this once FlashInfer AOT wheel is fixed
@@ -303,8 +306,11 @@ RUN --mount=type=cache,target=/root/.cache/uv \
     uv pip install --system --no-build-isolation "git+https://github.com/state-spaces/[email protected]"

 # install development dependencies (for testing)
-RUN --mount=type=cache,target=/root/.cache/uv \
-    uv pip install --system -r requirements/dev.txt
+RUN --mount=type=cache,target=/root/.cache/uv \
+    CUDA_MAJOR="${CUDA_VERSION%%.*}"; \
+    if [ "$CUDA_MAJOR" -ge 12 ]; then \
+        uv pip install --system -r requirements/dev.txt; \
+    fi

 # install development dependencies (for testing)
 RUN --mount=type=cache,target=/root/.cache/uv \
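Commit dce957f applies the same major-version gate to the dev dependencies, so on CUDA 11.x the requirements/dev.txt layer becomes a no-op. A hedged sketch of how the two image flavors would then be built, assuming the Dockerfile exposes CUDA_VERSION as a build arg (the image tags here are made up):

# CUDA 12 image: the "-ge 12" test passes, so requirements/dev.txt is installed.
docker build --build-arg CUDA_VERSION=12.4.1 -t vllm:cuda12 .

# CUDA 11.8 image: CUDA_MAJOR resolves to 11, the test fails,
# and the dev-dependency RUN layer installs nothing.
docker build --build-arg CUDA_VERSION=11.8.0 -t vllm:cuda11.8 .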
Does this mean we don't want to support these models, or would we like to add some fallback logic?
AFAICT, these functions aren't called anywhere yet outside of tests. @CalebDu, is this right?
These kernels are an optimization added in #14568. IIUC, they would be used in conjunction with the CUTLASS kernels, which also need CUDA 12.0; otherwise we can fall back to the Triton MoE kernels.
Yes, the customized permute/unpermute kernels are only called in tests for now. But I have no idea why these kernels are incompatible with CUDA 11.8.