[tests] Changes to the torch.compile() CI and tests #11508
Conversation
@DN6 a gentle ping.
Looks good 👍🏽 Thanks. Minor comments.
@@ -96,23 +92,8 @@ def test_gradient_checkpointing_is_applied(self):
    expected_set = {"HunyuanVideoTransformer3DModel"}
    super().test_gradient_checkpointing_is_applied(expected_set=expected_set)

@require_torch_gpu
Would it make sense to add these decorators to the top of the Mixin instead?
These decorators are present in the mixin:
diffusers/tests/models/test_modeling_common.py
Lines 1762 to 1765 in b5c2050
@require_torch_gpu
@require_torch_2
@is_torch_compile
@slow
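Decorators like these can be applied once at the class level so they cover every test method the mixin contributes. A minimal sketch of that mechanism using a stand-in decorator (the `require_resource` helper, the `False` flag, and the class body here are illustrative, not diffusers' actual code):

```python
import unittest

def require_resource(available):
    """Stand-in for decorators like require_torch_gpu: skip the decorated
    test, or every test in a decorated class, unless the resource exists."""
    return unittest.skipUnless(available, "required resource not available")

# Decorating the mixin class once covers all of its test methods, so each
# individual test no longer needs its own decorator line.
@require_resource(False)  # pretend the GPU requirement is unmet
class CompileTesterMixin(unittest.TestCase):
    def test_compile_forward(self):
        raise AssertionError("never runs: skipped at the class level")

    def test_compile_backward(self):
        raise AssertionError("never runs: skipped at the class level")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(CompileTesterMixin).run(result)
print(len(result.skipped))  # both tests were skipped, none failed
```

The trade-off is that class-level decoration applies unconditionally to every test in the class, which is why a dedicated compile-test mixin (rather than decorating individual tests) keeps the requirements in one place.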
@@ -188,7 +188,7 @@ jobs:
    group: aws-g4dn-2xlarge
    container:
-     image: diffusers/diffusers-pytorch-compile-cuda
+     image: diffusers/diffusers-pytorch-cuda
Small nit. We can remove the -k "compile" in the test runner step.
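For context, pytest's -k flag selects only the tests whose names match the given expression, so once the job runs a dedicated compile-test suite the filter is redundant. A rough model of that name-based selection (simplified to substring matching; the real -k also supports boolean expressions like "compile and not slow", and the test names below are made up for illustration):

```python
def select_tests(test_names, keyword=None):
    """Simplified model of pytest's -k filter: keep tests whose name
    contains the keyword; with no keyword, keep the whole suite."""
    if keyword is None:
        return list(test_names)
    return [name for name in test_names if keyword in name]

tests = ["test_torch_compile", "test_compile_with_cpu_offload", "test_forward"]
print(select_tests(tests, "compile"))  # the filtered run, compile tests only
print(select_tests(tests))             # dropping -k runs the full suite
```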
Thanks for the reviews, @DN6! Hope to make our compile CI better and better.
What does this PR do?
Regarding the final point, I think we already test torch.compile() support for popular models, and doing it at the pipeline level makes little sense to me. But no strong opinions.