
Inclusion of InternVLChatModel in PP_SUPPORTED_MODELS (Pipeline Parallelism) #7860


Merged: 56 commits into vllm-project:main on Sep 5, 2024

Conversation

@Manikandan-Thangaraj-ZS0321 (Contributor) commented Aug 26, 2024


Hi folks, this PR is based on #7168 by @andoorve, which includes the changes needed for "Add remaining model PP support"; that PR appears to be out of date and lacks the most recent changes.

From that PR, I have included only the changes needed for the InternVL2 models, based on the InternVLChatModel architecture. With these changes I have been able to run distributed inference and serving for InternVL2-8B on a multi-node, multi-GPU setup (tensor parallel plus pipeline parallel). However, I am hitting an issue when running InternVL2-26B and InternVL2-40B:
```
  File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in __init__
    self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
  File "/usr/local/lib/python3.10/dist-packages/vllm/transformers_utils/tokenizer.py", line 103, in get_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/tokenization_auto.py", line 913, in from_pretrained
    tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 732, in __getitem__
    model_type = self._reverse_config_mapping[key.name]
KeyError: 'InternVLChatConfig'
```
Does anyone have any idea about this? Please let me know what I should change.
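For reference, the working InternVL2-8B setup described above corresponds roughly to the sketch below. This is not code from this PR; it is a minimal, hedged example using vLLM's public engine entrypoints, with values mirroring the server arguments visible in the CI log later in this thread. Adjust the parallel sizes and backend for your own cluster, and note that behavior may vary by vLLM version.

```python
# Minimal sketch (not the PR's code): launching InternVL2-8B with tensor +
# pipeline parallelism through vLLM's async engine. Values mirror the CI log
# in this thread.
from vllm import AsyncEngineArgs, AsyncLLMEngine

engine_args = AsyncEngineArgs(
    model="OpenGVLab/InternVL2-8B",
    trust_remote_code=True,              # InternVLChatConfig/tokenizer live in the model repo
    dtype="float16",
    tensor_parallel_size=1,              # GPUs used to split each layer
    pipeline_parallel_size=2,            # pipeline stages across GPUs/nodes
    distributed_executor_backend="ray",  # Ray handles multi-node placement
    enforce_eager=True,
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```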

Partial fix to #7684



PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behavior of vLLM. This helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag the PR with rfc-required and might not review it.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient, and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build on the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 (Member)

Thanks for implementing this! Can you add tests to verify the model's behavior under the PP setting?

@Manikandan-Thangaraj-ZS0321 (Contributor, Author) commented Aug 27, 2024

Hi @DarkLight1337, I don't have an instance for testing it right now; it might take me a day or two. For now I have added InternVL2-8B to tests/distributed/test_pipeline_parallel.py for testing in a multi-node, multi-GPU setup. If anyone is willing to test it out, feel free to do so.
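(For readers unfamiliar with that file, here is a rough sketch of the kind of parametrized case being referred to. The parameter names and ordering are assumptions reconstructed from the test ID test_compare_tp[1-2-1-1-OpenGVLab/InternVL2-8B-ray] that appears in the CI log below; the actual signature in tests/distributed/test_pipeline_parallel.py may differ.)

```python
# Hypothetical illustration only; see tests/distributed/test_pipeline_parallel.py
# in the vLLM repo for the real test. Parameter order is inferred from the test
# ID test_compare_tp[1-2-1-1-OpenGVLab/InternVL2-8B-ray] seen in the CI log.
import pytest


@pytest.mark.parametrize(
    "TP_SIZE, PP_SIZE, EAGER_MODE, CHUNKED_PREFILL, MODEL_NAME, DIST_BACKEND",
    [
        (1, 2, 1, 1, "OpenGVLab/InternVL2-8B", "ray"),
    ],
)
def test_compare_tp(TP_SIZE, PP_SIZE, EAGER_MODE, CHUNKED_PREFILL,
                    MODEL_NAME, DIST_BACKEND):
    # The real test launches the server with and without PP and compares the
    # outputs; the body is omitted here.
    ...
```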

@DarkLight1337 (Member) left a comment

Some more comments.

@youkaichao I am not that familiar with implementing PP, so it would be great if you could take a look as well!

@Isotr0py (Collaborator)

@Manikandan-Thangaraj-ZS0321 I ran the added test and it seems that the test is broken:

================================================================ test session starts =================================================================
platform linux -- Python 3.10.14, pytest-8.3.2, pluggy-1.5.0 -- /opt/conda/envs/vllm/bin/python3.10
cachedir: .pytest_cache
rootdir: /kaggle/working/vllm
configfile: pyproject.toml
plugins: asyncio-0.24.0, buildkite-test-collector-0.1.8, forked-1.6.0, anyio-4.4.0, rerunfailures-14.0, shard-0.1.2, typeguard-4.3.0
asyncio: mode=strict, default_loop_scope=None
collected 1 item                                                                                                                                     
Running 1 items in this shard: tests/distributed/test_pipeline_parallel.py::test_compare_tp[1-2-1-1-OpenGVLab/InternVL2-8B-ray]

tests/distributed/test_pipeline_parallel.py::test_compare_tp[1-2-1-1-OpenGVLab/InternVL2-8B-ray] Fork a new process to run a test 12370
Fork a new process to run a test 0
WARNING 08-27 07:17:35 config.py:1604] Casting torch.bfloat16 to torch.float16.
INFO 08-27 07:17:35 config.py:952] Chunked prefill is enabled with max_num_batched_tokens=512.
WARNING 08-27 07:17:35 config.py:329] Async output processing can not be enabled with pipeline parallel
INFO 08-27 07:17:36 weight_utils.py:236] Using model weights format ['*.safetensors']
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'google.auth'
INFO 08-27 07:17:41 api_server.py:440] vLLM API server version 0.5.5
INFO 08-27 07:17:41 api_server.py:441] args: Namespace(model_tag='OpenGVLab/InternVL2-8B', host=None, port=35627, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='OpenGVLab/InternVL2-8B', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, download_dir=None, load_format='auto', dtype='float16', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend='ray', worker_use_ray=False, pipeline_parallel_size=2, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=True, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, scheduler_delay_factor=0.0, enable_chunked_prefill=True, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, engine_use_ray=False, disable_log_requests=False, max_log_len=None, dispatch_function=<function serve at 0x78e4e28445e0>)
INFO 08-27 07:17:42 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/09a4cf66-d98d-4947-b94c-039baf52dd20 for RPC Path.
INFO 08-27 07:17:42 api_server.py:161] Started engine process with PID 12450
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'google.auth'
Error in sitecustomize; set PYTHONVERBOSE for traceback:
ModuleNotFoundError: No module named 'google.auth'
WARNING 08-27 07:17:47 config.py:1604] Casting torch.bfloat16 to torch.float16.
INFO 08-27 07:17:47 config.py:952] Chunked prefill is enabled with max_num_batched_tokens=512.
WARNING 08-27 07:17:47 config.py:329] Async output processing can not be enabled with pipeline parallel
2024-08-27 07:17:50,021 INFO worker.py:1781 -- Started a local Ray instance.
INFO 08-27 07:17:51 llm_engine.py:198] Initializing an LLM engine (v0.5.5) with config: model='OpenGVLab/InternVL2-8B', speculative_config=None, tokenizer='OpenGVLab/InternVL2-8B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=65536, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=OpenGVLab/InternVL2-8B, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=False)
WARNING 08-27 07:17:52 tokenizer.py:137] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
generation_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, 652kB/s]
INFO 08-27 07:17:52 ray_gpu_executor.py:133] use_ray_spmd_worker: False
(raylet) Error in sitecustomize; set PYTHONVERBOSE for traceback:
(raylet) ModuleNotFoundError: No module named 'google.auth'
INFO 08-27 07:18:02 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 08-27 07:18:02 selector.py:116] Using XFormers backend.
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:02 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:02 selector.py:116] Using XFormers backend.
/opt/conda/envs/vllm/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
(RayWorkerWrapper pid=12986) /opt/conda/envs/vllm/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(RayWorkerWrapper pid=12986)   @torch.library.impl_abstract("xformers_flash::flash_fwd")
(raylet) Error in sitecustomize; set PYTHONVERBOSE for traceback:
(raylet) ModuleNotFoundError: No module named 'google.auth'
/opt/conda/envs/vllm/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
(RayWorkerWrapper pid=12986) /opt/conda/envs/vllm/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
(RayWorkerWrapper pid=12986)   @torch.library.impl_abstract("xformers_flash::flash_bwd")
INFO 08-27 07:18:05 utils.py:975] Found nccl from library libnccl.so.2
INFO 08-27 07:18:05 pynccl.py:63] vLLM is using nccl==2.20.5
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:05 utils.py:975] Found nccl from library libnccl.so.2
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:05 pynccl.py:63] vLLM is using nccl==2.20.5
INFO 08-27 07:18:05 model_runner.py:880] Starting to load model OpenGVLab/InternVL2-8B...
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:05 model_runner.py:880] Starting to load model OpenGVLab/InternVL2-8B...
INFO 08-27 07:18:06 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 08-27 07:18:06 selector.py:116] Using XFormers backend.
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:06 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:06 selector.py:116] Using XFormers backend.
INFO 08-27 07:18:06 weight_utils.py:236] Using model weights format ['*.safetensors']
(RayWorkerWrapper pid=12986) INFO 08-27 07:18:06 weight_utils.py:236] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:00<00:00, 15.49it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:00<00:00,  4.72it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:00<00:00,  5.26it/s]

INFO 08-27 07:18:52 model_runner.py:891] Loading model weights took 8.5596 GB
WARNING 08-27 07:19:04 model_runner.py:1058] Computed max_num_seqs (min(256, 512 // 3328)) to be less than 1. Setting it to the minimum value of 1.
(RayWorkerWrapper pid=12986) INFO 08-27 07:19:04 model_runner.py:891] Loading model weights took 8.5596 GB
(RayWorkerWrapper pid=12986) WARNING 08-27 07:19:04 model_runner.py:1058] Computed max_num_seqs (min(256, 512 // 3328)) to be less than 1. Setting it to the minimum value of 1.
WARNING 08-27 07:19:05 tokenizer.py:137] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
(RayWorkerWrapper pid=12986) WARNING 08-27 07:19:05 tokenizer.py:137] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465] Error executing method determine_num_available_blocks. This might cause deadlock in distributed execution.
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465] Traceback (most recent call last):
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/worker/worker_base.py", line 457, in execute_method
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return executor(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return func(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/worker/worker.py", line 222, in determine_num_available_blocks
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     self.model_runner.profile_run()
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return func(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/worker/model_runner.py", line 1098, in profile_run
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     self.execute_model(model_input, kv_caches, intermediate_tensors)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return func(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/worker/model_runner.py", line 1420, in execute_model
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     hidden_or_intermediate_states = model_executable(
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/model_executor/models/internvl.py", line 468, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     hidden_states = self.language_model.model(input_ids,
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/model_executor/models/internlm2.py", line 263, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     hidden_states, residual = layer(
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/model_executor/models/internlm2.py", line 199, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     hidden_states = self.attention(
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/model_executor/models/internlm2.py", line 145, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self._call_impl(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/opt/conda/envs/vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return forward_call(*args, **kwargs)
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/attention/layer.py", line 98, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     return self.impl.forward(query,
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]   File "/kaggle/working/vllm/vllm/attention/backends/xformers.py", line 574, in forward
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465]     assert key.shape[0] == num_prefill_tokens + num_decode_tokens
(RayWorkerWrapper pid=12986) ERROR 08-27 07:19:07 worker_base.py:465] AssertionError

I'm not familiar with the PP implementation, so I'm not sure whether this is related to the PP implementation for the InternLM2 backbone.

@youkaichao (Member)

cc @andoorve

and also @ywang96 for PP support with VLMs


@Manikandan-Thangaraj-ZS0321 (Contributor, Author) commented Sep 4, 2024

@DarkLight1337 @Isotr0py @andoorve, can you review it now? I also found an issue in the CI docker image build (buildkite/fastcheck/pr/amd-docker-build-image): the command docker build --build-arg max_jobs=16 --tag -f Dockerfile.rocm --progress plain . && docker push is missing the tag name after --tag, and the build is failing because of that.

@Manikandan-Thangaraj-ZS0321 (Contributor, Author)

Is there anything pending on my end to fix?

@DarkLight1337 (Member)

Can you merge the latest changes from main so that the CI can be run again?

@DarkLight1337 added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Sep 5, 2024
@DarkLight1337 (Member)

To avoid OOM, it may be necessary to also use tensor parallel to split the model across GPUs.

@Manikandan-Thangaraj-ZS0321 (Contributor, Author)

I have increased TP_SIZE from 1 to 2; let's check on that.
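(Purely for illustration, the change described here presumably amounts to something like the following constants in the test setup; the names and layout are assumptions, and the actual file may structure this differently.)

```python
# Hypothetical sketch of the TP_SIZE bump for the InternVL2-8B PP test.
# Splitting each pipeline stage across two GPUs halves the per-GPU weight
# footprint, which is the OOM mitigation suggested above.
TP_SIZE = 2   # was 1
PP_SIZE = 2   # unchanged
```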

@DarkLight1337 (Member) left a comment

PP tests finally pass now. Thanks for implementing this!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) September 5, 2024 10:52
@DarkLight1337 DarkLight1337 merged commit 8685ba1 into vllm-project:main Sep 5, 2024
51 checks passed
@Manikandan-Thangaraj-ZS0321 (Contributor, Author)

Thanks a lot @DarkLight1337 and everyone, happy to contribute to this 😄

dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request Sep 12, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025