
Commit 2b432ac

a-r-r-o-w and Nerogar authored and committed on Jan 15, 2025

Fix hunyuan video attention mask dim (#10454)

* fix

* add coauthor

Co-Authored-By: Nerogar <[email protected]>

---------

Co-authored-by: Nerogar <[email protected]>

1 parent 263b973 · commit 2b432ac

1 file changed (+1, −0 lines)
 

Diff for: src/diffusers/models/transformers/transformer_hunyuan_video.py

@@ -721,6 +721,7 @@ def forward(
 
         for i in range(batch_size):
             attention_mask[i, : effective_sequence_length[i], : effective_sequence_length[i]] = True
+        attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N], for broadcasting across attention heads
 
         # 4. Transformer blocks
         if torch.is_grad_enabled() and self.gradient_checkpointing:
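The one-line fix above adds a singleton head dimension to the mask. A standalone sketch of why that matters (not the diffusers code itself; the shapes and variable names here are illustrative assumptions): a `[B, N, N]` boolean mask cannot broadcast against `[B, heads, N, N]` attention scores, while `[B, 1, N, N]` broadcasts across all heads.

```python
import torch

# Illustrative shapes, not taken from the actual model config.
batch_size, num_heads, seq_len = 2, 4, 6
effective_sequence_length = torch.tensor([4, 6])  # valid tokens per sample

# Build a per-sample square padding mask, as in the patched loop.
attention_mask = torch.zeros(batch_size, seq_len, seq_len, dtype=torch.bool)
for i in range(batch_size):
    attention_mask[i, : effective_sequence_length[i], : effective_sequence_length[i]] = True

# The fix: add a head dimension so the mask broadcasts over num_heads.
attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N]

# Attention scores carry an explicit head dimension.
scores = torch.randn(batch_size, num_heads, seq_len, seq_len)

# Masked positions are set to -inf before softmax; broadcasting expands
# the singleton head dim of the mask to match num_heads.
masked = scores.masked_fill(~attention_mask, float("-inf"))
print(attention_mask.shape)  # torch.Size([2, 1, 6, 6])
```

Without the `unsqueeze(1)`, `masked_fill` would raise a broadcasting error (or silently mis-broadcast for certain shape coincidences), since `[B, N, N]` does not line up with `[B, heads, N, N]`.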
