[V1] Ensure using int64 for sampled token ids #15065

Merged

merged 2 commits into main from ensure-int64 on Mar 19, 2025

Conversation

@WoosukKwon (Collaborator)

A more fundamental solution to the bug in #14999 and #15049

Signed-off-by: Woosuk Kwon <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs will not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify (bot) added the v1 label Mar 18, 2025
@WoosukKwon (Collaborator, Author)

cc @houseroad

# Convert sampled token ids to int64 (long) to ensure compatibility
# with subsequent operations that may use these values as indices.
# This conversion is necessary because FlashInfer sampling operations
# return int32 (while PyTorch argmax and topk return int64).
sampled = sampled.long()
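
For context on why the cast matters, here is a minimal sketch (mine, not from the PR; tensor names and shapes are made up) showing that torch.gather rejects an int32 index tensor, which is what subsequent logprob gathering would hit:

import torch

logprobs = torch.randn(4, 32000)  # hypothetical [num_reqs, vocab_size] logprobs
sampled = torch.randint(0, 32000, (4,), dtype=torch.int32)  # int32 ids, as FlashInfer returns

# Passing the int32 ids directly to gather() raises:
#   RuntimeError: gather(): Expected dtype int64 for index
token_logprobs = logprobs.gather(-1, sampled.long().unsqueeze(-1))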
Collaborator

Actually, I'm debating whether to keep this in gather_logprobs, since we could skip the conversion when gather_logprobs is not called. What do you think?

@WoosukKwon (Collaborator, Author) Mar 18, 2025

  1. I think that's error-prone. For example, other ops in the future might try to use this tensor for indexing and hit the same error.
  2. The op should be very cheap; it's supposed to be a no-op in the common case (no top-p or top-k), where sampled is already a long tensor. Even when it's not, the sampled tensor here is pretty small, so I don't think its overhead will matter.
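
To illustrate point 2 with a small sketch (my example, not from the PR): Tensor.long() returns the same tensor object when the dtype is already int64, so the argmax/topk path pays nothing and only the top-p/top-k path copies a tiny tensor.

import torch

ids = torch.arange(8, dtype=torch.int64)  # e.g. argmax/topk output, already int64
assert ids.long() is ids  # no-op: .long() returns self without copying

ids_i32 = ids.to(torch.int32)  # e.g. a FlashInfer top-p/top-k result
assert ids_i32.long().dtype == torch.int64  # small copy, only in this path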

Member

Agreed with @WoosukKwon here; sampled is pretty small even in the worst case (1024).

@WoosukKwon added the ready label Mar 19, 2025
@houseroad (Collaborator) left a comment

Okay, if we see overhead, we can always optimize it :-)

@WoosukKwon merged commit 05ccd0a into main Mar 19, 2025
41 of 43 checks passed
@WoosukKwon deleted the ensure-int64 branch March 19, 2025 06:52
gmarinho2 pushed a commit to gmarinho2/vllm that referenced this pull request Apr 1, 2025
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
nishith-fujitsu pushed a commit to nishith-fujitsu/vllm that referenced this pull request Apr 9, 2025
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Labels: ready, v1

3 participants