Commit d20e261

Fix non-contiguous input passed to Marlin kernel (#15319)
1 parent f622dbc commit d20e261

File tree

1 file changed: +4 -0 lines

  • vllm/model_executor/layers/quantization/kernels/mixed_precision


vllm/model_executor/layers/quantization/kernels/mixed_precision/marlin.py

+4 -0

@@ -115,6 +115,10 @@ def apply_weights(self,
                        layer: torch.nn.Module,
                        x: torch.Tensor,
                        bias: Optional[torch.Tensor] = None) -> torch.Tensor:
+        # marlin requires contiguous memory layout
+        # prefix caching may cause x to be non-contiguous
+        x = x.contiguous()  # no-op if already contiguous
+
         c = self.config
         w_q, w_s, w_zp, w_gidx = self._get_weight_params(layer)
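For context, a minimal PyTorch sketch (not part of the commit) of the behavior the added guard relies on: a strided view (one way a tensor can become non-contiguous, e.g. via slicing) fails the contiguity requirement, and .contiguous() copies only when needed while returning the input unchanged when it is already contiguous. The tensor shapes below are arbitrary illustration values.

import torch

x = torch.randn(8, 16)

# A strided slice is a view that is no longer contiguous in memory.
view = x[:, ::2]
print(view.is_contiguous())   # False -> would violate Marlin's contiguity requirement

# .contiguous() materializes a contiguous copy when needed...
fixed = view.contiguous()
print(fixed.is_contiguous())  # True

# ...and returns the tensor itself when it is already contiguous,
# so the guard added in this commit is effectively free on the common path.
print(x.contiguous() is x)    # True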
