
Commit 6bcd5d8

Qubitium authored and Mu Huai committed
Fix non-contiguous input passed to Marlin kernel (vllm-project#15319)
Signed-off-by: Mu Huai <[email protected]>
1 parent 3fb4a0b commit 6bcd5d8

File tree

1 file changed: +4 −0
  • vllm/model_executor/layers/quantization/kernels/mixed_precision


vllm/model_executor/layers/quantization/kernels/mixed_precision/marlin.py

Lines changed: 4 additions & 0 deletions
@@ -115,6 +115,10 @@ def apply_weights(self,
                       layer: torch.nn.Module,
                       x: torch.Tensor,
                       bias: Optional[torch.Tensor] = None) -> torch.Tensor:
+        # marlin requires contiguous memory layout
+        # prefix caching may cause x to be non-contiguous
+        x = x.contiguous()  # no-op if already contiguous
+
         c = self.config
         w_q, w_s, w_zp, w_gidx = self._get_weight_params(layer)
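
Why the fix matters, as a minimal standalone PyTorch sketch (an illustration of the underlying behavior, not vLLM code; the tensor names below are hypothetical): with prefix caching, the activations handed to apply_weights can be a view into a larger cached buffer, and such views are not always contiguous in memory. Marlin assumes a dense row-major layout, so a non-contiguous x must be materialized first.

import torch

full = torch.randn(8, 16)     # e.g. hidden states backed by a larger buffer
x = full[:, 4:]               # a column slice is a view, not a copy
print(x.is_contiguous())      # False: row stride (16) != row length (12)

x = x.contiguous()            # materializes a dense row-major copy
print(x.is_contiguous())      # True: safe to pass to the Marlin kernel

y = torch.randn(8, 16)
print(y.contiguous() is y)    # True: contiguous() returns self (a no-op)
                              # when the tensor is already contiguous

Because contiguous() returns the tensor itself when no copy is needed, the added call in the patch costs nothing on the common path and only copies when prefix caching has actually produced a non-contiguous input.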
