[Bug]: RuntimeError: Failed to infer device type with v0.7.2 #12847

Closed
1 task done
imangohari1 opened this issue Feb 6, 2025 · 32 comments · Fixed by #12963
Labels
bug Something isn't working

Comments

@imangohari1

Your current environment

The output of `python collect_env.py`
INFO 02-06 18:50:45 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.39

Python version: 3.12.3 (main, Nov  6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               224
On-line CPU(s) list:                  0-223
Vendor ID:                            GenuineIntel
BIOS Vendor ID:                       Intel(R) Corporation
Model name:                           Intel(R) Xeon(R) Platinum 8480+
BIOS Model name:                      Intel(R) Xeon(R) Platinum 8480+  CPU @ 2.0GHz
BIOS CPU family:                      179
CPU family:                           6
Model:                                143
Thread(s) per core:                   2
Core(s) per socket:                   56
Socket(s):                            2
Stepping:                             6
CPU(s) scaling MHz:                   25%
CPU max MHz:                          3800.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4000.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            5.3 MiB (112 instances)
L1i cache:                            3.5 MiB (112 instances)
L2 cache:                             224 MiB (112 instances)
L3 cache:                             210 MiB (2 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-55,112-167
NUMA node1 CPU(s):                    56-111,168-223
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.8.0
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-dali-cuda120==1.44.0
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-modelopt==0.21.0
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvimgcodec-cu12==0.3.0.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvidia-pyindex==1.0.9
[pip3] onnx==1.17.0
[pip3] optree==0.13.1
[pip3] pynvml==11.4.1
[pip3] pytorch-triton==3.0.0+72734f086
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchaudio==2.5.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.1
[pip3] transformers==4.48.2
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3.dev10+g467a96a5
vLLM Build Flags:
CUDA Archs: 7.0 7.5 8.0 8.6 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-55,112-167    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NVIDIA_VISIBLE_DEVICES=all
CUBLAS_VERSION=12.6.4.1
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=7.0 7.5 8.0 8.6 9.0+PTX
NCCL_VERSION=2.23.4
NVIDIA_DRIVER_CAPABILITIES=compute,utility,video
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.6.3.004
PYTORCH_VERSION=2.6.0a0+df5bbc0
PYTORCH_BUILD_NUMBER=0
CUDNN_FRONTEND_VERSION=1.8.0
CUDNN_VERSION=9.6.0.74
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/torch/lib:/usr/local/lib/python3.12/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=126674149
CUDA_DRIVER_VERSION=560.35.05
PYTORCH_BUILD_VERSION=2.6.0a0+df5bbc0
CUDA_HOME=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=24.12
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1

🐛 Describe the bug

Using the main branch at commit 467a96a5415dc896170cecc0bb83d9c49c2f3c5e, we are facing RuntimeError: Failed to infer device type. The details of the env/test are given below.

Note

This issue does NOT happen with the 0.7.1 release. On the same machine and same container, changing the installation to pip install vllm (or pip install https://github.com/vllm-project/vllm/releases/download/v0.7.1/vllm-0.7.1-cp38-abi3-manylinux1_x86_64.whl) works fine.

Container/Setup
  • Container: nvcr.io/nvidia/pytorch:24.12-py3
  • Setup:
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_USE_PRECOMPILED=1 pip install --editable .
Test
python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 --model meta-llama/Llama-3.2-3B-Instruct --seed 42 -tp 1 --use-v2-block-manager --max_model_len 2048
INFO 02-06 19:18:33 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform
INFO 02-06 19:18:34 api_server.py:840] vLLM API server version 0.7.3.dev10+g467a96a5
INFO 02-06 19:18:34 api_server.py:841] args: Namespace(host='0.0.0.0', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Llama-3.2-3B-Instruct', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=2048, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=42, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
INFO 02-06 19:18:34 api_server.py:206] Started engine process with PID 879
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/tmp/vllm/vllm/entrypoints/openai/api_server.py", line 911, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/tmp/vllm/vllm/entrypoints/openai/api_server.py", line 875, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/vllm/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/vllm/vllm/entrypoints/openai/api_server.py", line 217, in build_async_engine_client_from_engine_args
    engine_config = engine_args.create_engine_config()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/vllm/vllm/engine/arg_utils.py", line 1074, in create_engine_config
    device_config = DeviceConfig(device=self.device)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/vllm/vllm/config.py", line 1626, in __init__
    raise RuntimeError("Failed to infer device type")

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
imangohari1 added the bug label Feb 6, 2025
@youkaichao
Member

Looks strange. I think #12809 should solve it; the actual error should be raised earlier.

@youkaichao
Member

INFO 02-06 18:50:45 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform

This is the problem.

Can you try to follow `def import_pynvml():` to see why it happens?

[pip3] nvidia-ml-py==12.570.86
[pip3] pynvml==11.4.1

Since you have both pynvml and nvidia-ml-py, I think the code above should work. Not sure why it does not work in your case.
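
For context, here is a minimal sketch of the kind of guard an import helper like import_pynvml() can apply (an illustration only, assuming the goal is to reject the deprecated pynvml < 12 distribution; vLLM's actual implementation may differ):

import importlib.metadata

def import_pynvml_guarded():
    # Both the deprecated `pynvml` distribution (< 12) and the official
    # `nvidia-ml-py` distribution install a module named `pynvml`.
    try:
        pynvml_version = importlib.metadata.version("pynvml")
    except importlib.metadata.PackageNotFoundError:
        pynvml_version = None
    if pynvml_version is not None and int(pynvml_version.split(".")[0]) < 12:
        raise RuntimeError(
            "Deprecated pynvml package found; uninstall pynvml or "
            "upgrade to >= 12.0, which wraps the official nvidia-ml-py."
        )
    import pynvml  # with only nvidia-ml-py installed, this is the official module
    return pynvml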

@lizongyao123

Upgrading pynvml to 12.0.0 fixes it.

pip install -U pynvml

@youkaichao
Member

@lizongyao123 @imangohari1 can you install the latest release (0.7.2 has been released) and see if it happens again? I think the latest release should work even if you have pynvml < 12

@ghosthamlet

@lizongyao123 @imangohari1 can you install the latest release (0.7.2 has been released) and see if it happens again? I think the latest release should work even if you have pynvml < 12

I have the same problem: No platform detected, vLLM is running on UnspecifiedPlatform
vllm 0.7.2
Running import_pynvml() gives:
RuntimeError: You are using a deprecated `pynvml` package. Please uninstall `pynvml` or upgrade to at least version 12.0. See https://pypi.org/project/pynvml for more information.

After upgrading pynvml to 12.0.0, it is fixed.

@zimoqingfeng

@lizongyao123 @imangohari1 can you install the latest release (0.7.2 has been released) and see if it happens again? I think the latest release should work even if you have pynvml < 12

It still fails when I use vllm==0.7.2 & pynvml==11.5.3.

@2catycm

2catycm commented Feb 7, 2025

I encountered a similar problem.

@2catycm

2catycm commented Feb 7, 2025

Very good issue! I also managed to fix this with uv pip install pynvml==12.0.0.

@youkaichao
Member

@ghosthamlet are you using vllm serve, or using vLLM as a library inside a Python file?

@youkaichao
Member

If anyone can give me a reproducible example, that would be great. In addition, feel free to join https://slack.vllm.ai for quick communication.

DarkLight1337 mentioned this issue Feb 7, 2025
@imangohari1
Author

imangohari1 commented Feb 7, 2025

@youkaichao

If anyone can give me a reproducible example, that would be great. In addition, feel free to join https://slack.vllm.ai for quick communication.

I have included a reproducer in #12847 (comment). Have you tried this?
I can't join the Slack channel due to internal policy restrictions :/

@imangohari1
Author

@lizongyao123 @imangohari1 can you install the latest release (0.7.2 has been released) and see if it happens again? I think the latest release should work even if you have pynvml < 12

@youkaichao I redid the test with the 0.7.2 release and pynvml 11.4.1, and it still fails.

Note

The only workaround right now is to update pynvml to 12.0.0 with pip install -U pynvml
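
For anyone applying the workaround, a quick sanity check after the upgrade (a minimal sketch using only the standard library; pynvml 12.0.0 is a thin wrapper that depends on the official nvidia-ml-py bindings, so both distributions should be present):

import importlib.metadata as md

# After `pip install -U pynvml`, pynvml should report >= 12.0.0 and
# pull in nvidia-ml-py as its backend.
for dist in ("pynvml", "nvidia-ml-py"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")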

@youkaichao
Member

@imangohari1 thanks! I found the root cause: the docker image nvcr.io/nvidia/pytorch:24.12-py3 includes pynvml==11.4.1

@imangohari1
Author

@imangohari1 thanks! I found the root cause: the docker image nvcr.io/nvidia/pytorch:24.12-py3 includes pynvml==11.4.1

Thanks. Two points here:

NOTE: CUDA Forward Compatibility mode ENABLED.
  Using CUDA 12.8 driver version 570.86.10 with kernel driver version 565.57.01.
  See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.

root@8a2c3a5bdaba:# pip list | grep pynvml
pynvml                     11.4.1
  • Why does v0.7.1 work fine with pynvml==11.4.1, while only v0.7.2 and later have the issue?

@qmzznbxhl

Upgrading pynvml to 12.0.0 fixes it.

pip install -U pynvml

Same error here; it really was fixed by updating with pip install pynvml==12.0.0.

@MangoFF

MangoFF commented Feb 8, 2025

It works for me: pip install pynvml==12.0.0

@Stonesjtu

@youkaichao should we pin the pynvml version in requirements.txt for the next bugfix release?
The same error occurs with the newest PyPI 0.7.2 version.

@youkaichao
Member

I found the culprit:

When pynvml < 12 and nvidia-ml-py are both installed, vLLM handles it well, but PyTorch uses a plain import pynvml, which imports the wrong version (the deprecated, unofficial one).

To avoid the issue completely, I opened #12963.
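
To see which module a bare import actually picks up, mirroring PyTorch's plain import, a check like this helps (a sketch; the printed path shows which distribution's pynvml module won):

import pynvml

# Both the deprecated pynvml (< 12) and nvidia-ml-py ship a module named
# `pynvml`, so typically whichever was installed last owns the file on disk;
# the path below tells you which one a plain import resolved to.
print(pynvml.__file__)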

@youkaichao
Member

@Stonesjtu we cannot pin pynvml==12.0, since that would make vLLM conflict with other libraries. Please check #12977

@imangohari1
Author

@youkaichao
Thanks for the update.
I can see that the undefined-platform issue is resolved, but now I am getting another issue:

ERROR 02-10 16:02:30 registry.py:307] /usr/local/lib/python3.12/dist-packages/flash_attn_2_cuda.cpython-312-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

Should I open a separate ticket for this?

@imangohari1
Author

@youkaichao Thanks for the update. I can see that the undefined-platform issue is resolved, but now I am getting another issue:

ERROR 02-10 16:02:30 registry.py:307] /usr/local/lib/python3.12/dist-packages/flash_attn_2_cuda.cpython-312-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

Should I open a separate ticket for this?

I opened a new issue for this. #13047

@josephzucc

Hi, I just downloaded vLLM on Ubuntu Linux (Feb 11, 11:32 Spanish time), and the same error happens.

@dcrockwell

dcrockwell commented Feb 12, 2025

Exact same issue with 0.7.3.dev94+g985b4a2b.d20250212.neuron216, compiled from source for Neuron on an Inferentia instance.

$ vllm serve meta-llama/Llama-3.1-8B-Instruct --served-model-name ceto --max-model-len 2048  --tensor-parallel-size 12 --gpu-memory-utilization=0.9
INFO 02-12 14:03:53 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform
INFO 02-12 14:03:53 api_server.py:840] vLLM API server version 0.7.3.dev94+g985b4a2b.d20250212
INFO 02-12 14:03:53 api_server.py:841] args: Namespace(subparser='serve', model_tag='meta-llama/Llama-3.1-8B-Instruct', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Llama-3.1-8B-Instruct', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=2048, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=12, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['ceto'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, 
dispatch_function=<function serve at 0x7f1128dc77f0>)
INFO 02-12 14:03:53 api_server.py:206] Started engine process with PID 11349
Traceback (most recent call last):
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/scripts.py", line 204, in main
    args.dispatch_function(args)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/scripts.py", line 44, in serve
    uvloop.run(run_server(args))
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 217, in build_async_engine_client_from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1090, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/config.py", line 1630, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type
INFO 02-12 14:03:57 __init__.py:194] No platform detected, vLLM is running on UnspecifiedPlatform
ERROR 02-12 14:03:57 engine.py:389] Failed to infer device type
ERROR 02-12 14:03:57 engine.py:389] Traceback (most recent call last):
ERROR 02-12 14:03:57 engine.py:389]   File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 380, in run_mp_engine
ERROR 02-12 14:03:57 engine.py:389]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 02-12 14:03:57 engine.py:389]   File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 118, in from_engine_args
ERROR 02-12 14:03:57 engine.py:389]     engine_config = engine_args.create_engine_config(usage_context)
ERROR 02-12 14:03:57 engine.py:389]   File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1090, in create_engine_config
ERROR 02-12 14:03:57 engine.py:389]     device_config = DeviceConfig(device=self.device)
ERROR 02-12 14:03:57 engine.py:389]   File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/config.py", line 1630, in __init__
ERROR 02-12 14:03:57 engine.py:389]     raise RuntimeError("Failed to infer device type")
ERROR 02-12 14:03:57 engine.py:389] RuntimeError: Failed to infer device type
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 391, in run_mp_engine
    raise e
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 380, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 118, in from_engine_args
    engine_config = engine_args.create_engine_config(usage_context)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1090, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File "/opt/aws_neuronx_venv_pytorch_2_5_nxd_inference/lib/python3.10/site-packages/vllm/config.py", line 1630, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type

@vkudryk

vkudryk commented Feb 16, 2025

Getting the same error trying to run on Gaudi HPU.

@xiayouran

The same error on V100.

aslonnie pushed a commit to ray-project/ray that referenced this issue Feb 21, 2025
…0785)

Two dependencies we are resolving that requires to pin `xgrammar` and `pynvml` to specific versions. Related vllm PR/ issues

- vllm-project/vllm#13338
- vllm-project/vllm#12847

---------

Signed-off-by: Gene Su <[email protected]>
kevin85421 pushed a commit to kevin85421/ray that referenced this issue Feb 28, 2025
@rlrs
Contributor

rlrs commented Mar 2, 2025

I'm seeing the same thing on ROCm after building from source. Will try an earlier version.

Update: amdsmi must be installed, but isn't mentioned in the installation instructions. Ensure that the version matches your ROCm version.
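
A minimal pre-flight check for the ROCm case (assuming, per the DEBUG output quoted later in this thread, that platform detection requires the amdsmi module to be importable):

try:
    import amdsmi  # ships with ROCm; its version must match the installed ROCm
    print("amdsmi found:", amdsmi.__file__)
except ImportError as exc:
    print("ROCm platform will not be detected:", exc)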

xsuler pushed a commit to antgroup/ant-ray that referenced this issue Mar 4, 2025
@youkaichao
Member

For people who hit this issue: I added lots of debug logging in #14195. Please set VLLM_LOGGING_LEVEL=DEBUG and check why a specific platform is not recognized.
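
For the CLI, export the variable in the shell before running vllm serve. When using vLLM as a library, a sketch like this works (an assumption: the variable must be set before vllm is first imported, since logging is configured at import time):

import os

# Set before importing vllm so the logger picks it up.
os.environ["VLLM_LOGGING_LEVEL"] = "DEBUG"

from vllm.platforms import current_platform  # resolving the platform emits the
print(current_platform)                      # per-platform DEBUG checks shown below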

park12sj pushed a commit to park12sj/ray that referenced this issue Mar 18, 2025
jaychia pushed a commit to jaychia/ray that referenced this issue Mar 19, 2025
@geraldstanje

geraldstanje commented Mar 20, 2025

@youkaichao I see the following in the debug log for the vllm/vllm-openai:v0.8.1 image.
Can I run vllm/vllm-openai:v0.8.1 with CPU only? How do I set the device to CPU?

DEBUG 03-20 22:46:23 [__init__.py:28] No plugins for group vllm.platform_plugins found.
DEBUG 03-20 22:46:23 [__init__.py:35] Checking if TPU platform is available.
DEBUG 03-20 22:46:23 [__init__.py:45] TPU platform is not available because: No module named 'libtpu'
DEBUG 03-20 22:46:23 [__init__.py:53] Checking if CUDA platform is available.
DEBUG 03-20 22:46:23 [__init__.py:77] Exception happens when checking CUDA platform: NVML Shared Library Not Found
DEBUG 03-20 22:46:23 [__init__.py:94] CUDA platform is not available because: NVML Shared Library Not Found
DEBUG 03-20 22:46:23 [__init__.py:101] Checking if ROCm platform is available.
DEBUG 03-20 22:46:23 [__init__.py:115] ROCm platform is not available because: No module named 'amdsmi'
DEBUG 03-20 22:46:23 [__init__.py:123] Checking if HPU platform is available.
DEBUG 03-20 22:46:23 [__init__.py:130] HPU platform is not available because habana_frameworks is not found.
DEBUG 03-20 22:46:23 [__init__.py:141] Checking if XPU platform is available.
DEBUG 03-20 22:46:23 [__init__.py:151] XPU platform is not available because: No module named 'intel_extension_for_pytorch'
DEBUG 03-20 22:46:23 [__init__.py:159] Checking if CPU platform is available.
DEBUG 03-20 22:46:23 [__init__.py:181] Checking if Neuron platform is available.
DEBUG 03-20 22:46:23 [__init__.py:188] Neuron platform is not available because: No module named 'transformers_neuronx'
DEBUG 03-20 22:46:23 [__init__.py:196] Checking if OpenVINO platform is available.
DEBUG 03-20 22:46:23 [__init__.py:203] OpenVINO platform is not available because vLLM is not built with OpenVINO.
INFO 03-20 22:46:23 [__init__.py:260] No platform detected, vLLM is running on UnspecifiedPlatform

It looks like I need to build a CPU image of vllm/vllm-openai:v0.8.1 myself; is there a doc for building the image?

@youkaichao
Member

can i run vllm/vllm-openai:v0.8.1 with cpu only?

no.

@XYZliang

XYZliang commented Mar 25, 2025

Same on H200 with BAAI/bge-m3, but it works fine with LLMs like QwQ.
docker-compose:
docker-compose:

version: '3.9'

services:
  vllm-server:
    image: vllm/vllm-openai:latest
    container_name: vllm-emb
    runtime: nvidia
    environment:
      # - CUDA_VISIBLE_DEVICES=6,7
      # - VLLM_USE_V1=1
      - VLLM_LOGGING_LEVEL=DEBUG
    volumes:
      - /data/sdv1/model:/model
      - ~/.cache/huggingface:/root/.cache/huggingface
    ports:
      - "8003:8000"
    ipc: host
    entrypoint: /bin/sh
    command: 
      - -c
      - |
        pip install -U pynvml && \
        vllm serve /model/BAAI/bge-m3 \
          --port 8000 \
          --api-key ***@123 \
          --trust-remote-code \
          --seed 2024 \
          --gpu-memory-utilization 0.95 \
          --served-model-name BAAI/bge-m3
      # "--tensor-parallel-size", "2",
      # "--dtype", "bfloat16"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

docker logs:

Collecting pynvml
  Downloading pynvml-12.0.0-py3-none-any.whl.metadata (5.4 kB)
Collecting nvidia-ml-py<13.0.0a0,>=12.0.0 (from pynvml)
  Downloading nvidia_ml_py-12.570.86-py3-none-any.whl.metadata (8.7 kB)
Downloading pynvml-12.0.0-py3-none-any.whl (26 kB)
Downloading nvidia_ml_py-12.570.86-py3-none-any.whl (44 kB)
Installing collected packages: nvidia-ml-py, pynvml
Successfully installed nvidia-ml-py-12.570.86 pynvml-12.0.0
DEBUG 03-25 19:38:26 [__init__.py:28] No plugins for group vllm.platform_plugins found.
DEBUG 03-25 19:38:26 [__init__.py:35] Checking if TPU platform is available.
DEBUG 03-25 19:38:26 [__init__.py:45] TPU platform is not available because: No module named 'libtpu'
DEBUG 03-25 19:38:26 [__init__.py:53] Checking if CUDA platform is available.
DEBUG 03-25 19:38:26 [__init__.py:77] Exception happens when checking CUDA platform: NVML Shared Library Not Found
DEBUG 03-25 19:38:26 [__init__.py:94] CUDA platform is not available because: NVML Shared Library Not Found
DEBUG 03-25 19:38:26 [__init__.py:101] Checking if ROCm platform is available.
DEBUG 03-25 19:38:26 [__init__.py:115] ROCm platform is not available because: No module named 'amdsmi'
DEBUG 03-25 19:38:26 [__init__.py:123] Checking if HPU platform is available.
DEBUG 03-25 19:38:26 [__init__.py:130] HPU platform is not available because habana_frameworks is not found.
DEBUG 03-25 19:38:26 [__init__.py:141] Checking if XPU platform is available.
DEBUG 03-25 19:38:26 [__init__.py:151] XPU platform is not available because: No module named 'intel_extension_for_pytorch'
DEBUG 03-25 19:38:26 [__init__.py:159] Checking if CPU platform is available.
DEBUG 03-25 19:38:26 [__init__.py:181] Checking if Neuron platform is available.
DEBUG 03-25 19:38:26 [__init__.py:188] Neuron platform is not available because: No module named 'transformers_neuronx'
DEBUG 03-25 19:38:26 [__init__.py:196] Checking if OpenVINO platform is available.
DEBUG 03-25 19:38:26 [__init__.py:203] OpenVINO platform is not available because vLLM is not built with OpenVINO.
INFO 03-25 19:38:26 [__init__.py:260] No platform detected, vLLM is running on UnspecifiedPlatform
DEBUG 03-25 19:38:28 [main.py:50] Setting VLLM_WORKER_MULTIPROC_METHOD to 'spawn'
DEBUG 03-25 19:38:28 [__init__.py:28] No plugins for group vllm.general_plugins found.
INFO 03-25 19:38:28 [api_server.py:977] vLLM API server version 0.8.1
INFO 03-25 19:38:28 [api_server.py:978] args: Namespace(subparser='serve', model_tag='/model/BAAI/bge-m3', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='***@123', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/model/BAAI/bge-m3', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=2024, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.95, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['BAAI/bge-m3'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', 
generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x7f3977f9ae80>)
Traceback (most recent call last):
  File "/opt/venv/bin/vllm", line 10, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 75, in main
    args.dispatch_function(args)
  File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 33, in cmd
    uvloop.run(run_server(args))
  File "/opt/venv/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/opt/venv/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1012, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 141, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 161, in build_async_engine_client_from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context=usage_context)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1205, in create_engine_config
    device_config = DeviceConfig(device=self.device)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv/lib/python3.12/site-packages/vllm/config.py", line 1798, in __init__
    raise RuntimeError("Failed to infer device type")
RuntimeError: Failed to infer device type

docker images from docker pull vllm/vllm-openai:v0.8.1

@XYZliang

I think I have found the problem: I upgraded my Docker, and now I need to remove the colon in docker-compose.

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

Everything is normal now.

@Clement25

I solved it by manually designating the device type at initialization, which bypasses the 'auto' device inference that raises this error:

llm = LLM("model_name", device="cuda")
