Commit aa375dc

[Bugfix] Missing quant_config in deepseek embedding layer (#12836)

Parent: 433c4a4

File tree: 1 file changed (+2, -1 lines)


vllm/model_executor/models/deepseek_v2.py

Lines changed: 2 additions & 1 deletion
@@ -581,7 +581,8 @@ def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
             self.embed_tokens = VocabParallelEmbedding(
                 config.vocab_size,
                 config.hidden_size,
-            )
+                quant_config=quant_config,
+                prefix=f"{prefix}.embed_tokens")
         else:
             self.embed_tokens = PPMissingLayer()
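The fix forwards the model-wide `quant_config` (and a dotted `prefix`) into the embedding layer's constructor. As a minimal sketch of the bug pattern, the hypothetical classes below (not vLLM's actual implementations) show how omitting `quant_config` leaves a child layer silently unquantized, and how the fixed call wires it through:

```python
# Hypothetical minimal sketch, not vLLM's real classes: illustrates the
# pattern fixed in this commit, where a parent module must forward its
# quant_config to each child layer or that layer falls back to full precision.

class VocabParallelEmbedding:
    def __init__(self, vocab_size, hidden_size, quant_config=None, prefix=""):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        # quant_config=None means "no quantization" for this layer.
        self.quant_config = quant_config
        self.prefix = prefix


class Model:
    def __init__(self, quant_config, prefix=""):
        # Before the fix, quant_config was not passed here, so the embedding
        # ignored the model-wide quantization settings. The fixed call
        # forwards both quant_config and a dotted prefix for the sub-layer.
        self.embed_tokens = VocabParallelEmbedding(
            vocab_size=32000,
            hidden_size=4096,
            quant_config=quant_config,
            prefix=f"{prefix}.embed_tokens")


m = Model(quant_config={"method": "fp8"}, prefix="model")
print(m.embed_tokens.quant_config)  # {'method': 'fp8'}
print(m.embed_tokens.prefix)        # model.embed_tokens
```

The `prefix` argument mirrors a common convention in which each sub-layer's dotted name (here `model.embed_tokens`) lets per-layer quantization rules be matched against checkpoint keys.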
