[Model] Support VLMs with transformers backend #13754
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of CI tests runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
Thanks for working on this! The main difficulty of supporting VLMs is not the model implementation itself, but rather the preprocessing code: vLLM V1 in particular requires precise tracking of the placeholder tokens. I see how generalizing …
cc @ywang96
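To make the point about placeholder tracking concrete, here is a minimal, hypothetical sketch (the names below are illustrative, not the actual vLLM API): the processor must expand each image placeholder to the exact number of tokens the vision encoder will produce, and record the exact span each image occupies, so that vLLM V1 can later align the encoder outputs with those positions.

```python
# Illustrative sketch only: hypothetical names, not the actual vLLM API.
from dataclasses import dataclass

@dataclass
class PlaceholderSpan:
    offset: int  # index of the first placeholder token in the expanded prompt
    length: int  # number of placeholder tokens this image occupies

def expand_placeholders(prompt_ids, image_token_id, tokens_per_image):
    """Replace each single <image> token with `tokens_per_image` copies
    and record the exact span each image ends up occupying."""
    expanded, spans = [], []
    for tok in prompt_ids:
        if tok == image_token_id:
            spans.append(PlaceholderSpan(offset=len(expanded),
                                         length=tokens_per_image))
            expanded.extend([image_token_id] * tokens_per_image)
        else:
            expanded.append(tok)
    return expanded, spans
```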
@DarkLight1337 thanks for the review! Yeah, checking more involved models is a good idea to verify that all edge cases are covered; will do so. A few clarifications before that:
Yes, we currently support mixed-modality (non-interleaved) inputs and plan to eventually support interleaved-modality inputs as well.
We assume that the tokens have only gone through the tokenizer, so placeholder tokens still have to be inserted into the input tokens. It's fine if we leave this unsolved for now - we can fall back to detokenizing the tokens back into text before passing them through the HF processor.
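For reference, the fallback mentioned above could look roughly like this (a sketch assuming a LLaVA-style HF processor; the model name is only an example):

```python
from transformers import AutoProcessor

# Model name is just an example; any LLaVA-style processor works the same way.
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

def preprocess_from_token_ids(prompt_token_ids, images):
    # Recover the raw prompt text; keep special tokens so <image> survives.
    prompt_text = processor.tokenizer.decode(prompt_token_ids,
                                             skip_special_tokens=False)
    # Let the HF processor re-tokenize the text and insert/expand the
    # placeholder tokens alongside the pixel values.
    return processor(text=prompt_text, images=images, return_tensors="pt")
```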
Thanks for the PR @zucchini-nlp! I'm a bit occupied at the moment but will take a first pass later tonight.
I have no say in this, but I am excited! 🚀
This pull request has merge conflicts that must be resolved before it can be merged.
This PR adds support for multimodal models in the Transformers backend. As a start, I tested with vanilla LLaVA using the demo scripts from the documentation; the generated outputs matched the vLLM outputs.
For this branch to work, we first need a few changes from `transformers`, starting from huggingface/transformers#36367. For now, I mainly want to ask for feedback on whether this aligns with how vLLM sees things. cc @Isotr0py @ArthurZucker
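For anyone wanting to reproduce the LLaVA check mentioned in the description, a rough usage sketch follows, assuming the `model_impl="transformers"` engine argument of the existing Transformers backend also selects this path for VLMs (the model name, prompt format, and image handling follow the standard vLLM multimodal examples):

```python
# Rough sketch only; assumes model_impl="transformers" routes VLMs through
# the Transformers backend introduced by this PR.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="llava-hf/llava-1.5-7b-hf", model_impl="transformers")

image = Image.open("example.jpg")  # any local image
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```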