Commit e4a0a04

[Bugfix] Fix profiling.py
Signed-off-by: zh Wang <[email protected]>
1 parent 027b204 commit e4a0a04

File tree

1 file changed: 1 addition, 1 deletion

examples/offline_inference/profiling.py

Lines changed: 1 addition & 1 deletion
@@ -235,7 +235,7 @@ def get_output_len_generator() -> Generator[int, Any, Any]:
     assert isinstance(sampling_params.max_tokens, int)

     prompt_token_ids = torch.randint(
-        llm.llm_engine.model_config.get_vocab_size(),
+        llm.get_tokenizer().vocab_size,
         size=(prompt_len, )).tolist()

     llm.llm_engine.add_request(
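The pattern this fix touches is building a synthetic prompt out of random token IDs drawn from the vocabulary; the commit only changes where the vocabulary size comes from (the tokenizer instead of the engine's model config). A minimal stdlib sketch of that sampling pattern is below — `random.randrange` stands in for `torch.randint`, and `vocab_size`/`prompt_len` are placeholder values, not taken from the commit:

```python
import random

# Stand-ins: the real script reads llm.get_tokenizer().vocab_size and the
# profiled prompt length; these constants are illustrative only.
vocab_size = 32_000
prompt_len = 8

# Draw prompt_len random token IDs, each in [0, vocab_size), mirroring
# torch.randint(vocab_size, size=(prompt_len,)).tolist() from the diff.
prompt_token_ids = [random.randrange(vocab_size) for _ in range(prompt_len)]

assert len(prompt_token_ids) == prompt_len
assert all(0 <= t < vocab_size for t in prompt_token_ids)
```

Using the tokenizer's `vocab_size` keeps the sampled IDs inside the range the tokenizer actually defines, which is the safer bound when the model's embedding table is padded beyond it.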
