Enable ONNX export of GPU loaded SVD/SVD-XT UNet models #6562
base: main
```diff
@@ -397,6 +397,8 @@ def forward(
     # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
     batch_size, num_frames = sample.shape[:2]
+    if torch.is_tensor(num_frames):
+        num_frames = num_frames.item()
     timesteps = timesteps.expand(batch_size)

     t_emb = self.time_proj(timesteps)
```

Review thread on the added lines:

- "This will result in …"
- "No, see diffusers/src/diffusers/pipelines/stable_video_diffusion/pipeline_stable_video_diffusion.py, line 339 in a1cb106."
- "@echarlaix you're right, with the suggested change …"
- "@echarlaix pinging to follow up on this. I ran the ONNX export of the UNet model using the above fix. The export runs successfully. However, the model fails ONNX Runtime inference with the error below. I'd appreciate your input on how we can move this PR along. At the moment, the ONNX export for the SpatioTemporal UNet is broken. There are two ways to enable the export. Option 1 sets …"
- "Did you try to cast …"
- "@echarlaix following up on this."
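The guard added by this PR simply normalizes `num_frames` to a Python int before it is used for broadcasting. A minimal, standalone sketch of that pattern (the helper name `frames_as_int` is illustrative, not from the PR):

```python
import torch

def frames_as_int(num_frames):
    """Normalize a frame count that may arrive as a 0-dim tensor
    (as reported during the ONNX export trace) to a Python int."""
    if torch.is_tensor(num_frames):
        # .item() extracts the Python scalar from a 0-dim tensor
        num_frames = num_frames.item()
    return num_frames

# Eager mode: shape entries are already plain Python ints
sample = torch.randn(2, 14, 4, 8, 8)   # (batch, frames, channels, H, W)
batch_size, num_frames = sample.shape[:2]
print(frames_as_int(num_frames))        # 14

# A 0-dim tensor, as seen while tracing, is normalized the same way
print(frames_as_int(torch.tensor(14)))  # 14
```

Since `.item()` returns a plain Python scalar, downstream code that expects an int behaves the same in eager mode and under tracing.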
Second review thread:

- "How can `num_frames` ever be a tensor? Can you give an example?"
- "@patrickvonplaten `num_frames` is created as a CPU tensor during the tracing step of the ONNX export. I have also provided a script to reproduce this behavior in the comment below."