[Bug]: qwen2-vl 7b, on vllm 0.8.1 & 0.8.2, sometimes (not deterministically but depends on data) I got: ValueError: Attempted to assign 702 = 702 multimodal tokens to 703 placeholders #15764
Comments
#15185 is a similar issue (but on Qwen2.5-VL) |
Can you provide the prompt and image to reproduce this bug? |
I got the same error. |
It seems we have to disable chunked prefill in V0 mode; V1 works fine with chunked prefill, but V0 fails.
ERROR 03-31 16:21:41 [engine.py:160] ValueError('Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders')
ERROR 03-31 16:21:41 [engine.py:160] Traceback (most recent call last):
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 158, in start
ERROR 03-31 16:21:41 [engine.py:160] self.run_engine_loop()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 221, in run_engine_loop
ERROR 03-31 16:21:41 [engine.py:160] request_outputs = self.engine_step()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 247, in engine_step
ERROR 03-31 16:21:41 [engine.py:160] raise e
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 230, in engine_step
ERROR 03-31 16:21:41 [engine.py:160] return self.engine.step()
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 1430, in step
ERROR 03-31 16:21:41 [engine.py:160] outputs = self.model_executor.execute_model(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 139, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] output = self.collective_rpc("execute_model",
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-31 16:21:41 [engine.py:160] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2313, in run_method
ERROR 03-31 16:21:41 [engine.py:160] return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 420, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] output = self.model_runner.execute_model(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-31 16:21:41 [engine.py:160] return func(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1770, in execute_model
ERROR 03-31 16:21:41 [engine.py:160] hidden_or_intermediate_states = model_executable(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-31 16:21:41 [engine.py:160] return self._call_impl(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-31 16:21:41 [engine.py:160] return forward_call(*args, **kwargs)
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1080, in forward
ERROR 03-31 16:21:41 [engine.py:160] inputs_embeds = self.get_input_embeddings_v0(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_5_vl.py", line 1015, in get_input_embeddings_v0
ERROR 03-31 16:21:41 [engine.py:160] inputs_embeds = merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 455, in merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160] return _merge_multimodal_embeddings(
ERROR 03-31 16:21:41 [engine.py:160] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 371, in _merge_multimodal_embeddings
ERROR 03-31 16:21:41 [engine.py:160] raise ValueError(
ERROR 03-31 16:21:41 [engine.py:160] ValueError: Attempted to assign 5460 = 5460 multimodal tokens to 5099 placeholders |
Yes, chunked prefill is not supported on V0. V1 should work fine though. |
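A minimal sketch of pinning the engine version and turning chunked prefill off while debugging this; VLLM_USE_V1 and enable_chunked_prefill are the usual vLLM knobs, but the model name and defaults below are only assumptions for illustration and may vary across 0.8.x releases:
import os

os.environ["VLLM_USE_V1"] = "0"  # force the V0 engine; set "1" to use V1

from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2-VL-7B-Instruct",
    enable_chunked_prefill=False,  # chunked prefill + multimodal is not supported on V0
)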
btw, I also tested it by switching to V0. V0 works fine, so the issue is with V1. |
Can you show the error log? |
Possibly related to #15677 |
I have seen this occur when sending random inputs to the model, one might accidentally include the <|image|> token in the random distribution leading to errors. If not this, maybe there is an issue with V1 chunked prefill for multimodal? |
Update: after switching to V0, it can run for longer without such errors. But after some time, I still got the error:
|
@benchislett I double checked and made sure that the input doesn't contain accidentally added <|image_pad|>. Actually, if there are such tokens, it won't pass other checks in the code. |
@DarkLight1337 @Isotr0py Hi guys, I understand that this issue might be specific to Qwen and might be hard to fix. Rather than locating and fixing the issue in the code, is there a workaround where, if such a case is encountered, vLLM skips that data point and continues inference rather than failing outright? Many thanks! |
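Until a proper fix lands, a rough sketch of the skip-and-continue behaviour asked about here, assuming in-process offline inference with the LLM class (with the multiprocessing API-server engine the whole engine can die on this error, so this only helps offline; `dataset` is a hypothetical iterable of multimodal prompt dicts):
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct")
params = SamplingParams(max_tokens=256)

outputs = []
for item in dataset:  # hypothetical iterable of prompt dicts with multi_modal_data
    try:
        outputs.append(llm.generate([item], params)[0])
    except ValueError as err:
        # e.g. "Attempted to assign N multimodal tokens to M placeholders"
        print(f"Skipping one item: {err}")
        outputs.append(None)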
Updates: |
Any fix or workaround so far? The fix here (#16229) does not seem to be complete yet. |
You can set |
I have met the same bug in version 0.8.5; let me see how to fix it. |
I have received your mail, thanks. I'll reply to you soon. |
My problem has been solved; it was a request bug: some content added an extra 'image_pad' token... 0.8.5 works well. |
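A sketch of the request-side check implied by this fix: strip any placeholder specials that user text may accidentally contain before building the prompt, since the multimodal processor inserts the correct number of <|image_pad|> tokens from the image itself (the token strings below are the Qwen2-VL specials; treat them as an assumption for other models):
STRAY_SPECIALS = ("<|image_pad|>", "<|video_pad|>")

def sanitize_user_text(text: str) -> str:
    # Remove placeholder specials that should only ever be inserted by the
    # multimodal processor, never appear in raw user text.
    for tok in STRAY_SPECIALS:
        text = text.replace(tok, "")
    return text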
@panjiacheng @FerryHuang @xsank
img_id = 151655
sampling_params = SamplingParams(
    bad_words=["<|image_pad|>"],  # doesn't work
    stop_token_ids=[img_id],      # doesn't work
    logit_bias={                  # doesn't work
        img_id: -100.0,
    },
)
@DarkLight1337 Any idea? |
I think it may be because the multimodal embeddings are merged into the text embeddings before sampling is done. So none of the sampling parameters can avoid this problem. The fix is still WIP |
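For context, a simplified illustration (not the actual vLLM source) of the check that raises this error: the image embeddings are scattered into the placeholder positions of the prompt during prefill, so the number of placeholder tokens in input_ids must exactly match the number of rows produced by the vision encoder, and no sampling parameter can influence that.
import torch

def merge_multimodal_embeddings_sketch(input_ids, inputs_embeds, image_embeds, image_token_id):
    mask = input_ids == image_token_id      # positions of <|image_pad|> placeholders
    num_placeholders = int(mask.sum())
    if num_placeholders != image_embeds.shape[0]:
        raise ValueError(
            f"Attempted to assign {image_embeds.shape[0]} multimodal tokens "
            f"to {num_placeholders} placeholders")
    inputs_embeds[mask] = image_embeds      # overwrite placeholder embeddings
    return inputs_embeds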
Your current environment
🐛 Describe the bug
I have:
But I still got the ValueError. |