[Model] Support Skywork-R1V #15397
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
(force-pushed from fd29e4a to 6a8f898)
Thanks for implementing this! Can you apply the fix from #15086 to this model so it can work correctly in V1?
Also, please add this model to the following files so CI can pass:
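As a rough illustration of the kind of registration being asked for, the sketch below follows the dictionary layout used in vllm/model_executor/models/registry.py; the exact Skywork entry name and module file name are assumptions for illustration, not the merged code.

```python
# Hypothetical registry entry (names are assumptions, not the merged code).
# The registry maps an architecture name from the HF config to a
# (module name, class name) pair inside vllm/model_executor/models/.
_MULTIMODAL_MODELS = {
    # ... existing entries ...
    "SkyworkR1VChatModel": ("skywork_r1v", "SkyworkR1VChatModel"),
}
```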
Hmm, at a quick glance this model looks quite similar to the original InternVL2 models...
If the difference is minor, I'd prefer to inherit from the existing InternVL2 implementation instead of making a highly similar copy of the code...
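A minimal sketch of the inheritance being suggested; the import path matches vLLM's existing internvl.py, but the Skywork class name and what it overrides are assumptions for illustration.

```python
# Sketch only: reuse the InternVL2 stack and override just what differs
# (e.g. image-token / prompt handling) instead of copying the whole file.
from vllm.model_executor.models.internvl import InternVLChatModel


class SkyworkR1VChatModel(InternVLChatModel):
    """Hypothetical subclass; only the genuinely different pieces would be
    overridden here, everything else is inherited from InternVL."""
    pass
```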
(force-pushed from ff0d3b2 to e84cb1e)
Can you merge in the changes from the latest main and apply the changes from #15443?
(force-pushed from 30e87c9 to 19a139e)
This pull request has merge conflicts that must be resolved before it can be merged.
I think the code looks pretty good now. Finally, can you add this model to the List of Supported Models page in the docs as well?
And also add it to …
I have added this information.
Signed-off-by: jiacai.liu <[email protected]>
There is a trailing comma after the chat_template field in https://huggingface.co/Skywork/Skywork-R1V-38B/blob/main/tokenizer_config.json, which prevents the JSON from being loaded.
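A minimal illustration of the failure mode (not the actual file contents): strict JSON parsers such as Python's json module reject trailing commas, so the tokenizer config cannot be loaded until the comma is removed.

```python
import json

# Valid JSON: parses fine.
json.loads('{"chat_template": "{{ messages }}"}')

# Trailing comma after the field: strict JSON rejects this.
try:
    json.loads('{"chat_template": "{{ messages }}",}')
except json.JSONDecodeError as exc:
    print(f"tokenizer_config.json would fail to load like this: {exc}")
```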
It should be fixed now.
I can successfully run the example script. Thanks for your effort, and let's get this merged!
I can force merge |
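For reference, a hedged sketch of what running the model offline might look like once support lands; the model id comes from the Hugging Face link above, while the prompt format and generate-call shape are assumptions based on vLLM's standard multimodal offline-inference examples, not this PR's example script.

```python
# Sketch only: offline multimodal inference through vLLM's LLM entry point.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="Skywork/Skywork-R1V-38B", trust_remote_code=True)
image = Image.open("example.jpg")

outputs = llm.generate(
    {
        # Prompt/image-token format is an assumption for illustration.
        "prompt": "<image>\nDescribe this image.",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```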
Fix vLLM's support for Skywork-R1V (SkyworkAI/Skywork-R1V#6)
FIX #15186