vLLM is returning the prompt with the result #1043
Comments
If you use the api_server, then in vllm/entrypoints/api_server.py remove the prompt from the returned text outputs.
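For reference, a minimal sketch of that change, based on the non-streaming branch of the /generate endpoint in vllm/entrypoints/api_server.py at the time (the exact lines may differ between versions):

```python
# In the non-streaming branch of the /generate endpoint, the response text is
# built by concatenating the prompt with each generated completion. Dropping
# the `prompt +` prefix makes the server return only the generated text.

# Original (returns prompt + completion):
# text_outputs = [prompt + output.text for output in final_output.outputs]

# Modified (returns only the completion):
text_outputs = [output.text for output in final_output.outputs]
ret = {"text": text_outputs}
```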
It should be controlled by a flag in the request.

I added `echo = false` when sending the request, but I got an error. I think `echo` is not implemented?
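A sketch of what such a request could look like, assuming the OpenAI-compatible completions endpoint of a locally running server; whether `echo` is honored (or rejected, as reported above) depends on the vLLM version:

```python
import requests

# Hypothetical request to a local vLLM OpenAI-compatible server.
# The `echo` field follows the OpenAI completions API; older vLLM versions
# may reject or ignore it, which would explain the error mentioned above.
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "facebook/opt-125m",   # placeholder model name
        "prompt": "San Francisco is a",
        "max_tokens": 32,
        "echo": False,                  # ask the server not to echo the prompt
    },
)
print(response.json()["choices"][0]["text"])
```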
I will try this.

@viktor-ferenczi Is it implemented? Or can we prepare a PR?
Has this issue been resolved? I'm running into it now as well; when I add the parameter to the request sent with requests, it complains that the parameter doesn't exist.

Does this mean adding or removing the prompt?
How to prevent vLLM from returning the prompt back with the result?
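A version-independent workaround is to strip the prompt on the client side. This is only a sketch, assuming the plain /generate endpoint, which in the affected versions prepends the prompt to each returned string:

```python
import requests

prompt = "San Francisco is a"

# Call the plain api_server /generate endpoint, which (in the affected
# versions) returns the prompt concatenated with each completion.
resp = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": prompt, "max_tokens": 32},
)

# Strip the echoed prompt on the client side.
completions = [
    text[len(prompt):] if text.startswith(prompt) else text
    for text in resp.json()["text"]
]
print(completions)
```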