Commit c17e677

fyabc authored and wuisawesome committed
[Bugfix] Fix distributed bug in Qwen2.5-VL & Qwen2.5-Omni (vllm-project#16907)
1 parent: 57c25f8

File tree

1 file changed (+1, −2 lines)

vllm/model_executor/models/qwen2_5_vl.py

Lines changed: 1 addition & 2 deletions

@@ -198,9 +198,8 @@ def forward(self, x: torch.Tensor):


 def all_gather_interleave(local_tensor, hidden_size: int, tp_size: int):
     """All-gather the input tensor interleavely across model parallel group."""
-    import torch.distributed as dist
     gathered_tensors = [torch.zeros_like(local_tensor) for _ in range(tp_size)]
-    dist.all_gather(gathered_tensors, local_tensor)
+    parallel_state.get_tp_group().all_gather(gathered_tensors, local_tensor)

     gathered_tensors_split = [
         torch.split(tensor, hidden_size // tp_size, -1)
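The fix swaps a raw `torch.distributed.all_gather` call for vLLM's tensor-parallel group helper, so the gather runs on the TP group rather than the default (world) process group. The interleaving logic around it can be sketched in pure Python; the simulation below (list-based "tensors", a list of per-rank results standing in for the collective) is illustrative only and is not vLLM's actual API:

```python
def all_gather_interleave_sim(gathered_tensors, hidden_size, tp_size):
    """Simulate the reassembly step of all_gather_interleave.

    gathered_tensors: per-rank flat lists, standing in for the output of an
    all-gather across the tensor-parallel group. Each rank's tensor is split
    into chunks of size hidden_size // tp_size along the last dimension, and
    the chunks are re-concatenated rank-interleaved: split 0 from every rank,
    then split 1 from every rank, and so on.
    """
    chunk = hidden_size // tp_size
    # Split each gathered "tensor" into equal chunks along the last dim.
    splits = [
        [t[i * chunk:(i + 1) * chunk] for i in range(len(t) // chunk)]
        for t in gathered_tensors
    ]
    # Interleave: for each split index, take that split from every rank.
    result = []
    for per_rank_splits in zip(*splits):
        for s in per_rank_splits:
            result.extend(s)
    return result
```

With two simulated ranks, a hidden size of 4, and per-rank tensors `[0, 1, 2, 3]` and `[10, 11, 12, 13]`, the chunks of size 2 interleave to `[0, 1, 10, 11, 2, 3, 12, 13]`.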
