
[Bug]: qwen2-vl 7b, on vllm 0.8.1 & 0.8.2, sometimes (not deterministically but depends on data) I got: ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders #15764

panjiacheng opened this issue Mar 30, 2025 · 21 comments · May be fixed by #16229
Labels
bug Something isn't working

Comments

@panjiacheng

Your current environment

    completions: List[RequestOutput] = self.inference_engine.generate(
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 1072, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 465, in generate
    outputs = self._run_engine(use_tqdm=use_tqdm)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1375, in _run_engine
    step_outputs = self.llm_engine.step()
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/llm_engine.py", line 220, in step
    outputs = self.engine_core.get_output()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 167, in get_output
    return self.engine_core.step()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 195, in step
    output = self.model_executor.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 77, in execute_model
    output = self.collective_rpc("execute_model",
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 2255, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1000, in execute_model
    inputs_embeds = self.model.get_input_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1303, in get_input_embeddings
    inputs_embeds = merge_multimodal_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 449, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders

🐛 Describe the bug

I have:

enforce_eager: false
enable_chunked_prefill: false

But I still get the ValueError shown above.
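
For reference, a rough sketch of the setup that triggers this (the model name, prompt template, and image path below are illustrative placeholders; the real calls go through my framework's inference_engine wrapper, as in the traceback above):

from PIL import Image
from vllm import LLM, SamplingParams

# Hypothetical reproduction sketch using the settings from this report.
# Model name, prompt template, and image path are assumptions, not taken
# from the original setup.
llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    enforce_eager=False,
    enable_chunked_prefill=False,
)

image = Image.open("sample.jpg")  # placeholder image
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=256),
)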

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@panjiacheng panjiacheng added the bug Something isn't working label Mar 30, 2025
@panjiacheng
Author

#15185 is a similar issue (but on Qwen2.5-VL)

@Isotr0py
Collaborator

Can you provide the prompt and image to reproduce this bug?

@DefTruth
Contributor

I got the same error.

@DefTruth
Contributor

It seems we have to disable chunked prefill in V0 mode: V1 works fine with chunked prefill, but V0 fails.

ERROR 03-31 16:21:41 [engine.py:160] ValueError('Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders')
ERROR 03-31 16:21:41 [engine.py:160] Traceback (most recent call last):
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 03-31 16:21:41 [engine.py:160]     self.run_engine_loop()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 221, in run_engine_loop
ERROR 03-31 16:21:41 [engine.py:160]     request_outputs = self.engine_step()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 247, in engine_step
ERROR 03-31 16:21:41 [engine.py:160]     raise e
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 230, in engine_step
ERROR 03-31 16:21:41 [engine.py:160]     return self.engine.step()
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 1430, in step
ERROR 03-31 16:21:41 [engine.py:160]     outputs = self.model_executor.execute_model(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 139, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     output = self.collective_rpc("execute_model",
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-31 16:21:41 [engine.py:160]     answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2313, in run_method
ERROR 03-31 16:21:41 [engine.py:160]     return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 420, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     output = self.model_runner.execute_model(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-31 16:21:41 [engine.py:160]     return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1770, in execute_model
ERROR 03-31 16:21:41 [engine.py:160]     hidden_or_intermediate_states = model_executable(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-31 16:21:41 [engine.py:160]     return self._call_impl(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-31 16:21:41 [engine.py:160]     return forward_call(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1080, in forward
ERROR 03-31 16:21:41 [engine.py:160]     inputs_embeds = self.get_input_embeddings_v0(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1015, in get_input_embeddings_v0
ERROR 03-31 16:21:41 [engine.py:160]     inputs_embeds = merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160]     return _merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160]     raise ValueError(
ERROR 03-31 16:21:41 [engine.py:160] ValueError: Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders

@DarkLight1337
Member

Yes, chunked prefill is not supported on V0. V1 should work fine though.
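
For reference, the relevant knobs look roughly like this (a sketch; VLLM_USE_V1 and enable_chunked_prefill are the standard vLLM names, but whether they avoid this particular crash is exactly what's being discussed here, and the model name is a placeholder):

import os

# Sketch: pick the engine version before importing vLLM.
os.environ["VLLM_USE_V1"] = "1"    # use the V1 engine (chunked prefill supported)
# os.environ["VLLM_USE_V1"] = "0"  # or stay on V0 ...

from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",  # placeholder model name
    enable_chunked_prefill=False,       # ... and disable chunked prefill explicitly
)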

@panjiacheng
Author

By the way, I also tested this by switching to V0. V0 works fine, so the issue is with V1.

@DarkLight1337
Member

Can you show the error log?

@panjiacheng
Author

panjiacheng commented Apr 1, 2025

    completions: List[RequestOutput] = self.inference_engine.generate(
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 1072, in inner
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 465, in generate
    outputs = self._run_engine(use_tqdm=use_tqdm)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 1375, in _run_engine
    step_outputs = self.llm_engine.step()
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/llm_engine.py", line 220, in step
    outputs = self.engine_core.get_output()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 167, in get_output
    return self.engine_core.step()
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 195, in step
    output = self.model_executor.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 77, in execute_model
    output = self.collective_rpc("execute_model",
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/utils.py", line 2255, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1000, in execute_model
    inputs_embeds = self.model.get_input_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1303, in get_input_embeddings
    inputs_embeds = merge_multimodal_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 449, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 1369 + 1369 = 2738 multimodal tokens to 2739 placeholders

@DarkLight1337
Member

Possibly related to #15677

@benchislett
Contributor

I have seen this occur when sending random inputs to the model: one might accidentally include the <|image|> token in the random distribution, leading to errors. If that's not it, maybe there is an issue with V1 chunked prefill for multimodal?

@panjiacheng
Author

Update: after switching to V0, it can run for longer without such errors. But after some time, I still got the error:

  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1379, in forward
    inputs_embeds = self.get_input_embeddings_v0(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/qwen2_vl.py", line 1317, in get_input_embeddings_v0
    inputs_embeds = merge_multimodal_embeddings(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
    return _merge_multimodal_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/.local/lib/python3.11/site-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
    raise ValueError(
ValueError: Attempted to assign 1369 + 1369 + 1369 + 1369 = 5476 multimodal tokens to 5477 placeholders

@panjiacheng
Author

I have seen this occur when sending random inputs to the model: one might accidentally include the <|image|> token in the random distribution, leading to errors. If that's not it, maybe there is an issue with V1 chunked prefill for multimodal?

@benchislett I double-checked and made sure the input doesn't contain any accidentally added <|image_pad|> tokens. Actually, if such tokens were present, the input wouldn't pass other checks in the code.
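
For reference, a check along these lines (a sketch, not my exact code; 151655 is the Qwen2-VL <|image_pad|> token ID, also quoted later in this thread, and the count should match the number of images attached to the request):

from transformers import AutoTokenizer

# Sketch: count <|image_pad|> occurrences in the raw prompt text to rule out
# accidentally injected placeholders. 151655 is Qwen2-VL's <|image_pad|> ID.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
IMAGE_PAD_ID = 151655

def count_image_pads(prompt: str) -> int:
    token_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    return token_ids.count(IMAGE_PAD_ID)

prompt = "..."   # the prompt under suspicion (placeholder)
num_images = 1   # images attached to this request (placeholder)
n_pads = count_image_pads(prompt)
print(f"<|image_pad|> count in prompt: {n_pads} (expected {num_images})")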

@panjiacheng
Author

@DarkLight1337 @Isotr0py Hi guys, I understand that this issue might be specific to Qwen and might be hard to fix. Rather than locating the issue in the code and fixing it, is there a workaround where, if such a case is encountered, vLLM skips that data point and continues inference rather than failing outright? Many thanks!
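
What I have in mind is roughly the following user-side sketch (hypothetical, not an existing vLLM option; also, on V1 the engine may not survive the exception, in which case the LLM object would have to be recreated):

from vllm import LLM, SamplingParams

def generate_with_skip(llm: LLM, requests, sampling_params: SamplingParams):
    # Best-effort generation: skip inputs that trigger the placeholder-mismatch
    # ValueError instead of aborting the whole run (user-side sketch only).
    results = []
    for req in requests:
        try:
            results.extend(llm.generate(req, sampling_params))
        except ValueError as err:
            if "multimodal tokens" in str(err):
                print(f"Skipping one sample: {err}")
                results.append(None)
            else:
                raise
    return results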

@panjiacheng
Author

Update:
I figured that this might have something to do with special tokens being generated. I'm working on a fix, but even setting a small list of "bad_words" can cause CUDA OOM (#15976).

@FerryHuang

Any fix or workaround so far? The fix in #16229 doesn't seem to be complete yet.

@DarkLight1337
Member

You can set top_p to avoid sampling the image tokens.
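
Concretely, something like the following sketch (values are placeholders, not a recommendation; a small enough top_p truncates the sampling distribution so that very low-probability tokens such as <|image_pad|> are effectively never drawn):

from vllm import SamplingParams

# Sketch: a tighter nucleus-sampling cutoff makes it unlikely that rare
# special tokens like <|image_pad|> are ever sampled. Illustrative values.
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)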

@xsank
Contributor

xsank commented May 8, 2025

I have hit the same bug in version 0.8.5; let me see how to fix it.

@xsank
Contributor

xsank commented May 12, 2025

I have hit the same bug in version 0.8.5; let me see how to fix it.

My problem has been solved; it was a request bug. Some content added an extra 'image_pad' token... 0.8.5 works well.

@theophilegervet

theophilegervet commented May 13, 2025

@panjiacheng @FerryHuang @xsank
Have you found a fix?

  • I'm using vllm==0.8.2 with V1 and still see this issue
  • I'm sure the inputs don't contain any extra "<|image_pad|>"
  • This seems to be due to generated tokens and happens probabilistically, which makes it hard to reproduce
  • None of the attempts below to avoid generating image tokens fix the issue
img_id = 151655
sampling_params = SamplingParams(
    bad_words=["<|image_pad|>"],  # doesn't work
    stop_token_ids=[img_id],  # doesn't work
    logit_bias={  # doesn't work
        img_id: -100.0,
    },
)

@DarkLight1337 Any idea?

@DarkLight1337
Member

I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done, so none of the sampling parameters can avoid this problem. The fix is still WIP.
