
Commit 7747b58

a-r-r-o-w and Nerogar authored

Fix hunyuan video attention mask dim (#10454)

* fix
* add coauthor

Co-authored-by: Nerogar <[email protected]>

1 parent d9d94e1 commit 7747b58

File tree

1 file changed: +1 -0 lines changed

src/diffusers/models/transformers/transformer_hunyuan_video.py (+1)

@@ -721,6 +721,7 @@ def forward(

         for i in range(batch_size):
             attention_mask[i, : effective_sequence_length[i], : effective_sequence_length[i]] = True
+        attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N], for broadcasting across attention heads

         # 4. Transformer blocks
         if torch.is_grad_enabled() and self.gradient_checkpointing:
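A minimal sketch of why the added `unsqueeze(1)` matters (this is an illustrative standalone example, not the diffusers code; the shapes and the per-sample lengths are assumptions): attention scores have shape `[B, H, N, N]`, so a `[B, N, N]` mask cannot broadcast against them unless it gains a singleton head dimension.

```python
import torch

batch_size, num_heads, seq_len = 2, 4, 6
# Hypothetical per-sample valid lengths, standing in for effective_sequence_length.
effective_sequence_length = torch.tensor([4, 6])

# Build a [B, N, N] boolean mask, True inside each sample's valid region,
# mirroring the loop in the diff above.
attention_mask = torch.zeros(batch_size, seq_len, seq_len, dtype=torch.bool)
for i in range(batch_size):
    attention_mask[i, : effective_sequence_length[i], : effective_sequence_length[i]] = True

# The fix: add a head dimension so the mask broadcasts across all H heads.
attention_mask = attention_mask.unsqueeze(1)  # [B, 1, N, N]

# [B, H, N, N] scores now broadcast cleanly against the [B, 1, N, N] mask.
scores = torch.randn(batch_size, num_heads, seq_len, seq_len)
masked = scores.masked_fill(~attention_mask, float("-inf"))
print(tuple(masked.shape))
```

Without the `unsqueeze`, broadcasting would try to align the mask's batch dimension with the score tensor's head dimension and fail whenever `B != H`; the explicit size-1 dimension makes the intent unambiguous.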
