Change logprobs to use int64 datatype in torch.gather #14999


Closed
wants to merge 1 commit

Conversation

@pathorn (Contributor) commented Mar 18, 2025

Fix crash on DeepSeek R1 where torch.gather expects an int64 tensor.

I have confirmed that this fixes the crash on V1 at revision 233ffce, but I don't yet understand what changed the behavior. I suspect the regression was introduced by #12721, which updated torch to 2.6.0, though I have not pinned down the root cause.
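For context, a minimal sketch of the kind of change the title describes: casting the index tensor to int64 before the gather call. This is not the exact diff; the real gather_logprobs in vllm/v1/sample/sampler.py takes additional arguments, abridged here.

    import torch

    def gather_logprobs(logprobs: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # torch.gather requires an int64 (torch.long) index tensor. On this code
        # path the sampled token ids can arrive with a narrower integer dtype,
        # which raises "RuntimeError: gather(): Expected dtype int64 for index".
        return logprobs.gather(-1, token_ids.to(torch.int64))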

INFO 03-15 19:58:20 [logger.py:39] Received request cmpl-aca2df6d00644a15a3b1d0cbb1491f4a-0: prompt: '', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.6, top_p=0.95, top_k=-1, min_p=0.0, seed=3030081490969633974, stop=['<|user|>'], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=8192, min_tokens=0, logprobs=1, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [], lora_request: None, prompt_adapter_request: None.

WorkerProc hit an exception: %s
Traceback (most recent call last):
  File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 371, in worker_busy_loop
    output = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 242, in execute_model
    output = self.model_runner.execute_model(scheduler_output)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1011, in execute_model
    sampler_output = self.model.sample(
                     ^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek_v2.py", line 706, in sample
    next_tokens = self.sampler(logits, sampling_metadata)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 54, in forward
    self.gather_logprobs(raw_logprobs, num_logprobs, token_ids=sampled)
  File "/opt/venv/lib/python3.12/site-packages/vllm/v1/sample/sampler.py", line 163, in gather_logprobs
    token_logprobs = logprobs.gather(-1, token_ids)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: gather(): Expected dtype int64 for index
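The error is straightforward to reproduce outside vLLM: torch.gather rejects any index tensor that is not int64, independent of the model or sampler state. A standalone sketch (shapes are arbitrary):

    import torch

    logprobs = torch.randn(2, 8)                      # [num_seqs, vocab_size]
    token_ids = torch.zeros(2, 1, dtype=torch.int32)  # non-int64 index triggers the crash

    try:
        logprobs.gather(-1, token_ids)
    except RuntimeError as e:
        print(e)  # gather(): Expected dtype int64 for index

    print(logprobs.gather(-1, token_ids.long()))      # succeeds once the index is int64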

Signed-off-by: Patrick Reiter Horn <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Mar 18, 2025
@WoosukKwon WoosukKwon added the bug Something isn't working label Mar 18, 2025

mergify bot commented Mar 18, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @pathorn.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Mar 18, 2025
@WoosukKwon (Collaborator) commented:

@pathorn Superseded by #15049. Thanks for the PR!

Labels: bug (Something isn't working), needs-rebase, v1
Projects: none yet