[TPU][DEBUG] Provide Env Variable To Disable Sampler #15662


Status: Open. Wants to merge 4 commits into base: main.

Conversation

robertgshaw2-redhat
Collaborator

@robertgshaw2-redhat robertgshaw2-redhat commented Mar 28, 2025

SUMMARY:

  • Add an env variable to disable the sampler (a minimal sketch of the idea follows below)
  • Should be merged after the triton lazy import PR
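
For context, a minimal sketch of the kind of environment-variable gate described above. The variable name VLLM_TPU_DISABLE_SAMPLER and the helper select_token_fn are hypothetical placeholders; only compute_logits_no_sampler and sample_from_hidden are names that appear in this PR's discussion.

```python
import os

# Hypothetical flag name; the actual variable in this PR may differ.
VLLM_TPU_DISABLE_SAMPLER: bool = (
    os.environ.get("VLLM_TPU_DISABLE_SAMPLER", "0") == "1"
)


def select_token_fn(model):
    """Return the token-selection callable, honoring the debug flag."""
    if VLLM_TPU_DISABLE_SAMPLER:
        # Debug path: bypass the sampler and take token ids from raw logits.
        return model.compute_logits_no_sampler
    # Normal path: run the full sampler.
    return model.sample_from_hidden
```

Resolving the callable once at setup time keeps the per-step hot loop free of branching on the flag.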

REVIEWER: @yaochengji


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


mergify bot commented Mar 28, 2025

⚠️ The SHA of the head commit of this PR conflicts with #15656. Mergify cannot evaluate rules on this PR. ⚠️

Signed-off-by: Robert Shaw <[email protected]>
@mergify mergify bot added v1 tpu Related to Google TPUs labels Mar 28, 2025
@@ -343,5 +342,7 @@ def stateless_destroy_torch_distributed_process_group(
Destroy ProcessGroup returned by
stateless_init_torch_distributed_process_group().
"""
# Lazy import for non-CUDA backends.
Collaborator Author


Note: there is a separate PR for this that will merge separately.

Robert Shaw added 2 commits March 28, 2025 02:17
Signed-off-by: Robert Shaw <[email protected]>
Signed-off-by: Robert Shaw <[email protected]>
Collaborator

@yaochengji yaochengji left a comment


Thanks for adding this support, Robert!

I think we should also change the logic in the dummy run to make it actually pre-compile for compute_logits_no_sampler?

@robertgshaw2-redhat
Collaborator Author

Thanks for adding this support, Robert!

I think we should also change the logic in the dummy run to make it actually pre-compile for compute_logits_no_sampler?

I will check in the morning, but I thought the dummy run calls execute_model so it should be covered.

kv_caches=self.kv_caches,
inputs_embeds=inputs_embeds,
)
selected_token_ids = self.model.compute_logits_no_sampler(
Contributor


Curious to know what the gain in execution time is.
There's a path in sample_from_hidden when all_greedy=True that does just that.
This would reveal the introduced overhead.
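
For reference, a minimal sketch of what such an all_greedy shortcut can look like. This is a hypothetical simplification for illustration (it takes logits directly), not vLLM's actual sample_from_hidden implementation:

```python
import torch


def sample_from_hidden(
    logits: torch.Tensor, all_greedy: bool, temperature: torch.Tensor
) -> torch.Tensor:
    """Hypothetical simplification of the shortcut discussed above."""
    if all_greedy:
        # Every request is greedy: a single argmax, no sampling kernels run.
        return torch.argmax(logits, dim=-1)
    # General path: temperature scaling plus categorical sampling.
    probs = torch.softmax(logits / temperature.unsqueeze(-1), dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```

Timing the env-variable path against this shortcut isolates whatever overhead the sampler wrapper itself introduces.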

Collaborator Author


Perhaps it's because we wrap it with torch.compile?

Contributor


It's about 5%.

Contributor


Comparing with all_greedy=True set in the request, right?

@mgoin mgoin changed the title [TPU][DEBUG] Provide Env Varibale To Disable Sampler [TPU][DEBUG] Provide Env Variable To Disable Sampler Mar 28, 2025
@yaochengji
Collaborator

I will check in the morning, but I thought the dummy run calls execute_model so it should be covered.

I'm afraid not; dummy_run doesn't call execute_model. The corresponding code is at https://github.com/vllm-project/vllm/blob/main/vllm/v1/worker/tpu_model_runner.py#L803

@robertgshaw2-redhat
Collaborator Author

I will check in the morning, but I thought the dummy run calls execute_model so it should be covered.

I'm afraid not; dummy_run doesn't call execute_model. The corresponding code is at https://github.com/vllm-project/vllm/blob/main/vllm/v1/worker/tpu_model_runner.py#L803

You're right. I'll post a fix.
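
A minimal sketch of what such a fix could look like. The function, parameter, and attribute names here are hypothetical, loosely following the discussion above rather than the actual tpu_model_runner.py code:

```python
import torch


def dummy_run(runner, num_tokens: int) -> None:
    """Warm-up pass so XLA compiles the path real requests will take.

    Hypothetical sketch: runner attributes and method names mirror this
    thread's discussion, not the actual vLLM TPU model runner.
    """
    input_ids = torch.zeros(num_tokens, dtype=torch.int32, device=runner.device)
    hidden_states = runner.model(input_ids=input_ids, kv_caches=runner.kv_caches)
    if runner.disable_sampler:
        # Pre-compile the sampler-free path so the first real request does
        # not pay the XLA compilation cost for this shape.
        runner.model.compute_logits_no_sampler(hidden_states)
    else:
        runner.model.sample_from_hidden(hidden_states)
```

The point is that the warm-up must trace whichever token-selection path the env variable selects, since dummy_run does not go through execute_model.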


mergify bot commented Apr 1, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @robertgshaw2-redhat.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Apr 1, 2025
@yaochengji
Collaborator

@robertgshaw2-redhat kindly pinging about this PR.

Contributor

@NickLucche NickLucche left a comment


We also need to avoid compiling sample_from_hidden altogether in capture_model.

@NickLucche
Contributor

NickLucche commented Apr 11, 2025

We can close this one too @robertgshaw2-redhat

Labels: needs-rebase, tpu (Related to Google TPUs), v1