[Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model #8405
Changes from 16 commits
New file (the chat template for InternLM2 tool calling):

@@ -0,0 +1,60 @@
{%- if messages[0]["role"] == "system" %}
{%- set system_message = messages[0]["content"] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set loop_messages = messages %}
{%- endif %}

{%- if not tools is defined %}
{%- set tools = none %}
{%- endif %}

{{- bos_token }}
{%- if system_message is defined %}
{{- "<|im_start|>system\n" + system_message + "<|im_end|>\n" }}
{%- endif %}

{%- if tools is not none %}
{{- "<|im_start|>system name=<|plugin|>\n[" }}
{%- for tool in tools %}
{{- tool.function|tojson }}
{%- if not loop.last %}
{{- ", " }}
{%- else %}
{{- "]" }}
{%- endif %}
{%- endfor %}
{{- "<|im_end|>\n" }}
{%- endif %}

{%- for message in loop_messages %}
{%- if message["role"] == "user" %}
{{- "<|im_start|>user\n" + message["content"] + "<|im_end|>\n" }}
{%- elif message.tool_calls is defined and message.tool_calls is not none %}
{%- set content = message["content"] if message["content"] else "" %}
{{- "<|im_start|>assistant\n" + content }}
{%- for tool_call in message.tool_calls %}
{%- set function = tool_call.function %}
{{- "<|action_start|><|plugin|>\n" }}
{{- '{"name": "' + function.name + '", ' }}
{{- '"arguments": ' + function.arguments|tojson + '}' }}
{{- "<|action_end|>" }}
{%- endfor %}
{{- "<|im_end|>\n" }}
{%- elif message["role"] == "assistant" %}
{{- "<|im_start|>assistant\n" + message["content"] + "<|im_end|>\n" }}
{%- elif message["role"] == "tool_results" or message["role"] == "tool" or message["role"] == "function" %}
{%- if message.content is defined and message.content.content is defined %}
{%- set content = message.content.content %}
{%- else %}
{%- set content = message.content %}
{%- endif %}
{{- "<|im_start|>environment name=<|plugin|>\n" + content|string + "<|im_end|>\n" }}
{%- else %}
{{- raise_exception("Only user and assistant and tool_results and tool and function roles are supported, with the exception of an initial optional system message!") }}
{%- endif %}
{%- endfor %}

{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
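To see what this template produces, here is a sketch that renders an abridged copy of it with the `jinja2` library. The tool schema, messages, and `bos_token` value below are invented examples for illustration; in vLLM the full template is applied through the tokenizer's chat-template machinery, not like this.

```python
from jinja2 import Environment

# Abridged copy of the template above: system turn, tool-schema block, user
# turns, and the generation prompt. (The tool-call and environment branches
# are omitted for brevity.)
template_src = (
    '{%- if messages[0]["role"] == "system" %}'
    '{%- set system_message = messages[0]["content"] %}'
    '{%- set loop_messages = messages[1:] %}'
    '{%- else %}{%- set loop_messages = messages %}{%- endif %}'
    '{{- bos_token }}'
    '{%- if system_message is defined %}'
    '{{- "<|im_start|>system\n" + system_message + "<|im_end|>\n" }}'
    '{%- endif %}'
    '{%- if tools %}{{- "<|im_start|>system name=<|plugin|>\n[" }}'
    '{%- for tool in tools %}{{- tool.function|tojson }}'
    '{%- if not loop.last %}{{- ", " }}{%- else %}{{- "]" }}{%- endif %}'
    '{%- endfor %}{{- "<|im_end|>\n" }}{%- endif %}'
    '{%- for message in loop_messages %}'
    '{%- if message["role"] == "user" %}'
    '{{- "<|im_start|>user\n" + message["content"] + "<|im_end|>\n" }}'
    '{%- endif %}{%- endfor %}'
    '{{- "<|im_start|>assistant\n" }}'
)

prompt = Environment().from_string(template_src).render(
    bos_token="<s>",  # placeholder; the real value comes from the tokenizer
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather like in Paris?"},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",  # invented example tool
            "parameters": {"type": "object",
                           "properties": {"city": {"type": "string"}}},
        },
    }],
)
print(prompt)
```

The rendered prompt contains the `<|im_start|>system name=<|plugin|>` block with the tool schemas serialized as a JSON list, which is the format the InternLM2 tool parser expects the model to have been trained on.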
@@ -12,6 +12,7 @@
from vllm.engine.arg_utils import AsyncEngineArgs, nullable_str
from vllm.entrypoints.openai.serving_engine import (LoRAModulePath,
                                                    PromptAdapterPath)
from vllm.entrypoints.openai.tool_parsers import ToolParserManager
from vllm.utils import FlexibleArgumentParser

@@ -171,16 +172,27 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
        "Enable auto tool choice for supported models. Use --tool-call-parser "
        "to specify which parser to use")

    valid_tool_parsers = ToolParserManager.tool_parsers.keys()
    parser.add_argument(
        "--tool-call-parser",
        type=str,
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I think we still want to specify the choices of It would be good for people to know which tool call parsers are available by default, and this makes sure that the expected values get into the auto-generated documentation. There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. if special a choices in the maybe we can given the default choices to the help information. There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I think what I'm trying to say is that you could keep the choices of Current state: if self.enable_auto_tools:
try:
self.tool_parser = ToolParserManager.get_tool_parser(tool_parser)
except Exception as e:
raise TypeError("Error: --enable-auto-tool-choice requires tool_parser:'{tool_parser}' which has not been registered") from e Possible changes: # if a plugin is not specified; we can do this already
if self.enable_auto_tools and not self.tool_parser_plugin:
plugin_name = tool_parser # one of the options from the CLI argument, e.g. hermes or mistral
# if a plugin is specified - this may require some refactoring to get the tool parser plugin loaded in serving chat
elif self.enable_auto_tools and self.tool_parser_plugin:
# get the name of the plugin loaded from `--tool-parser-plugin`
plugin_name = get_plugin_name_somehow_from_loaded_plugin()
# handle additional cases here
try:
self.tool_parser = ToolParserManager.get_tool_parser(plugin_name)
except Exception as e:
raise TypeError("You must specify a valid value for --tool-call-parser OR a value tool parser plugin" There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. but In my design, a plugin can register any number of tool parsers into vllm, and user can use I added some documents in the There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Ohhhh, I see. Hmm. I'm not sure what the best pattern would be for the arguments here, then. @DarkLight1337 @mgoin do y'all have any thoughts? There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Maybe we can use metavar instead of choices to display help information.
the help will look like this:
and move the plugin import and tool call parser check to run_server to check the invalid tool call parser name quickly.(before the model loads).
error info look like this:
|
||
metavar="{" + ",".join(valid_tool_parsers) + "} or name registered in " | ||
"--tool-parser-plugin", | ||
default=None, | ||
help= | ||
"Select the tool call parser depending on the model that you're using." | ||
" This is used to parse the model-generated tool call into OpenAI API " | ||
"format. Required for --enable-auto-tool-choice.") | ||
|
||
parser.add_argument( | ||
"--tool-parser-plugin", | ||
type=str, | ||
default="", | ||
help= | ||
"Special the tool parser plugin write to parse the model-generated tool" | ||
" into OpenAI API format, the name register in this plugin can be used " | ||
"in --tool-call-parser.") | ||
sydnash marked this conversation as resolved.
    parser = AsyncEngineArgs.add_cli_args(parser)

    parser.add_argument('--max-log-len',
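The plugin mechanism the author describes, where a plugin registers any number of tool parsers by name and --tool-call-parser then looks one up, boils down to a name-to-class registry. A minimal sketch follows; it assumes nothing about vLLM's real ToolParserManager beyond the get_tool_parser call quoted in the thread, and all class and method names here are illustrative stand-ins.

```python
class ToolParserRegistry:
    """Toy stand-in for vLLM's ToolParserManager (names are illustrative)."""
    tool_parsers: dict = {}

    @classmethod
    def register_module(cls, name):
        # Decorator: register a parser class under the given name.
        def wrap(parser_cls):
            cls.tool_parsers[name] = parser_cls
            return parser_cls
        return wrap

    @classmethod
    def get_tool_parser(cls, name):
        try:
            return cls.tool_parsers[name]
        except KeyError as e:
            raise TypeError(
                f"--enable-auto-tool-choice requires a registered parser, "
                f"but '{name}' has not been registered") from e


# A plugin module would simply decorate its parser classes at import time:
@ToolParserRegistry.register_module("internlm")
class InternLM2ToolParser:
    def extract_tool_calls(self, model_output: str):
        ...  # parse <|action_start|><|plugin|> ... <|action_end|> blocks


parser_cls = ToolParserRegistry.get_tool_parser("internlm")
print(parser_cls.__name__)  # prints InternLM2ToolParser
```

Because registration happens at import time, loading the module named by --tool-parser-plugin is enough to make its parser names resolvable by --tool-call-parser, which matches the author's description of the design.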