[Bug]: RuntimeError: No CUDA GPUs are available in transformers v4.48.0 or above when running Ray RLHF example #13597

Closed

ArthurinRUC opened this issue Feb 20, 2025 · 11 comments

Labels: bug (Something isn't working), ray (anything related with ray)

@ArthurinRUC

Your current environment

The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 11.4.0
Clang version: 3.4.2 (tags/RELEASE_34/dot2-final)
CMake version: version 3.30.5
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB

Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.0.5
/usr/lib64/libcudnn_adv_infer.so.8.0.5
/usr/lib64/libcudnn_adv_train.so.8.0.5
/usr/lib64/libcudnn_cnn_infer.so.8.0.5
/usr/lib64/libcudnn_cnn_train.so.8.0.5
/usr/lib64/libcudnn_ops_infer.so.8.0.5
/usr/lib64/libcudnn_ops_train.so.8.0.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.5
/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    2
Core(s) per socket:    32
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 106
Model name:            Intel(R) Xeon(R) Platinum 8338C CPU @ 2.60GHz
Stepping:              6
CPU MHz:               3500.000
CPU max MHz:           3500.0000
CPU min MHz:           800.0000
BogoMIPS:              5200.00
Virtualization:        VT-x
L1d cache:             48K
L1i cache:             32K
L2 cache:              1280K
L3 cache:              49152K
NUMA node0 CPU(s):     0-31,64-95
NUMA node1 CPU(s):     32-63,96-127
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] gpytorch==1.13
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pynvml==12.0.0
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==3.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformer-engine-torch==1.11.0
[pip3] transformers==4.48.0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.1.0
[conda] gpytorch                  1.13                     pypi_0    pypi
[conda] numpy                     1.26.3                   pypi_0    pypi
[conda] nvidia-cublas-cu12        12.4.5.8                 pypi_0    pypi
[conda] nvidia-cuda-cupti-cu12    12.4.127                 pypi_0    pypi
[conda] nvidia-cuda-nvrtc-cu12    12.4.127                 pypi_0    pypi
[conda] nvidia-cuda-runtime-cu12  12.4.127                 pypi_0    pypi
[conda] nvidia-cudnn-cu12         9.1.0.70                 pypi_0    pypi
[conda] nvidia-cufft-cu12         11.2.1.3                 pypi_0    pypi
[conda] nvidia-curand-cu12        10.3.5.147               pypi_0    pypi
[conda] nvidia-cusolver-cu12      11.6.1.9                 pypi_0    pypi
[conda] nvidia-cusparse-cu12      12.3.1.170               pypi_0    pypi
[conda] nvidia-ml-py              12.560.30                pypi_0    pypi
[conda] nvidia-nccl-cu12          2.21.5                   pypi_0    pypi
[conda] nvidia-nvjitlink-cu12     12.4.127                 pypi_0    pypi
[conda] nvidia-nvtx-cu12          12.4.127                 pypi_0    pypi
[conda] pynvml                    12.0.0                   pypi_0    pypi
[conda] pyzmq                     26.2.0                   pypi_0    pypi
[conda] sentence-transformers     3.2.1                    pypi_0    pypi
[conda] torch                     2.5.1                    pypi_0    pypi
[conda] torchaudio                2.5.1                    pypi_0    pypi
[conda] torchvision               0.20.1                   pypi_0    pypi
[conda] transformer-engine-torch  1.11.0                   pypi_0    pypi
[conda] transformers              4.48.0                   pypi_0    pypi
[conda] transformers-stream-generator 0.0.5                    pypi_0    pypi
[conda] triton                    3.1.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	GPU2	GPU3	GPU4	GPU5	GPU6	GPU7	NIC0	NIC1	NIC2	NIC3	NIC4	NIC5	NIC6	NIC7	NIC8	NIC9	NIC10	CPU Affinity	NUMA Affinity
GPU0	 X 	NV8	NV8	NV8	NV8	NV8	NV8	NV8	PXB	PXB	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE	0-31,64-95	0
GPU1	NV8	 X 	NV8	NV8	NV8	NV8	NV8	NV8	PXB	PXB	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE	0-31,64-95	0
GPU2	NV8	NV8	 X 	NV8	NV8	NV8	NV8	NV8	NODE	NODE	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	NODE	0-31,64-95	0
GPU3	NV8	NV8	NV8	 X 	NV8	NV8	NV8	NV8	NODE	NODE	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	NODE	0-31,64-95	0
GPU4	NV8	NV8	NV8	NV8	 X 	NV8	NV8	NV8	SYS	SYS	SYS	SYS	SYS	SYS	PXB	PXB	NODE	NODE	SYS	32-63,96-127	1
GPU5	NV8	NV8	NV8	NV8	NV8	 X 	NV8	NV8	SYS	SYS	SYS	SYS	SYS	SYS	PXB	PXB	NODE	NODE	SYS	32-63,96-127	1
GPU6	NV8	NV8	NV8	NV8	NV8	NV8	 X 	NV8	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	PXB	PXB	SYS	32-63,96-127	1
GPU7	NV8	NV8	NV8	NV8	NV8	NV8	NV8	 X 	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	PXB	PXB	SYS	32-63,96-127	1
NIC0	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	 X 	PIX	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE
NIC1	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	PIX	 X 	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE
NIC2	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	NODE	NODE	 X 	PIX	NODE	NODE	SYS	SYS	SYS	SYS	NODE
NIC3	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	NODE	NODE	PIX	 X 	NODE	NODE	SYS	SYS	SYS	SYS	NODE
NIC4	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE	NODE	NODE	NODE	 X 	PIX	SYS	SYS	SYS	SYS	NODE
NIC5	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE	NODE	NODE	NODE	PIX	 X 	SYS	SYS	SYS	SYS	NODE
NIC6	SYS	SYS	SYS	SYS	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	SYS	SYS	 X 	PIX	NODE	NODE	SYS
NIC7	SYS	SYS	SYS	SYS	PXB	PXB	NODE	NODE	SYS	SYS	SYS	SYS	SYS	SYS	PIX	 X 	NODE	NODE	SYS
NIC8	SYS	SYS	SYS	SYS	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	 X 	PIX	SYS
NIC9	SYS	SYS	SYS	SYS	NODE	NODE	PXB	PXB	SYS	SYS	SYS	SYS	SYS	SYS	NODE	NODE	PIX	 X 	SYS
NIC10	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	NODE	NODE	NODE	NODE	NODE	NODE	SYS	SYS	SYS	SYS	 X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_6
  NIC5: mlx5_7
  NIC6: mlx5_8
  NIC7: mlx5_9
  NIC8: mlx5_10
  NIC9: mlx5_11
  NIC10: mlx5_bond_0

NCCL_SOCKET_IFNAME=eth0,bond0,bond4
CUDA_HOME=/usr/local/cuda
NCCL_IB_GID_INDEX=3
LD_LIBRARY_PATH=/usr/local/miniconda3/lib/python3.10/site-packages/nvidia/cudnn/lib/:/usr/local/cuda-12.4/compat:/usr/mpi4_gdr/lib:/usr/local/cuda/lib64/:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NCCL_SHM_DISABLE=0
CUDA_DEVICE_MAX_CONNECTIONS=1
NCCL_IB_TC=160
NCCL_IB_DISABLE=0
NCCL_IB_HCA=^=mlx5_bond_0
VLLM_WORKER_MULTIPROC_METHOD=spawn
NCCL_IB_CUDA_SUPPORT=1
NCCL_DEBUG=INFO
NCCL_NET_GDR_LEVEL=2
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY

🐛 Describe the bug

Hi all!

I failed to run the vLLM RLHF example script. The code is exactly the same as on the vLLM docs page: https://docs.vllm.ai/en/latest/getting_started/examples/rlhf.html
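
For context, the MyLLM wrapper defined in that example (the class the traceback below points into) looks roughly like this; this is paraphrased from the linked docs page, so details may differ between versions:

import os

from vllm import LLM

class MyLLM(LLM):
    def __init__(self, *args, **kwargs):
        # stop Ray from overriding CUDA_VISIBLE_DEVICES for this actor;
        # vLLM manages GPU assignment for its Ray workers itself
        del os.environ["CUDA_VISIBLE_DEVICES"]
        super().__init__(*args, **kwargs)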

The error messages are:

(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Error executing method 'init_device'. This might cause deadlock in distributed execution.
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] Traceback (most recent call last):
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]     return run_method(target, method, args, kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]     return func(*args, **kwargs)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]     torch.cuda.set_device(self.device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]   File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]     torch._C._cuda_setDevice(device)
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]   File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574]     torch._C._cuda_init()
(MyLLM pid=70946) ERROR 02-20 15:38:34 worker_base.py:574] RuntimeError: No CUDA GPUs are available
(MyLLM pid=70946) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::MyLLM.__init__() (pid=70946, ip=11.163.37.230, actor_id=202b48118215566c51057a0101000000, repr=<test_ray_vllm_rlhf.MyLLM object at 0x7fb7453669b0>)
(MyLLM pid=70946)   File "/data/cfs/workspace/test_ray_vllm_rlhf.py", line 96, in __init__
(MyLLM pid=70946)     super().__init__(*args, **kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 1051, in inner
(MyLLM pid=70946)     return fn(*args, **kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 242, in __init__
(MyLLM pid=70946)     self.llm_engine = self.engine_class.from_engine_args(
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 484, in from_engine_args
(MyLLM pid=70946)     engine = cls(
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 273, in __init__
(MyLLM pid=70946)     self.model_executor = executor_class(vllm_config=vllm_config, )
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 262, in __init__
(MyLLM pid=70946)     super().__init__(*args, **kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 51, in __init__
(MyLLM pid=70946)     self._init_executor()
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 90, in _init_executor
(MyLLM pid=70946)     self._init_workers_ray(placement_group)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 355, in _init_workers_ray
(MyLLM pid=70946)     self._run_workers("init_device")
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/executor/ray_distributed_executor.py", line 476, in _run_workers
(MyLLM pid=70946)     self.driver_worker.execute_method(sent_method, *args, **kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 575, in execute_method
(MyLLM pid=70946)     raise e
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 566, in execute_method
(MyLLM pid=70946)     return run_method(target, method, args, kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/utils.py", line 2220, in run_method
(MyLLM pid=70946)     return func(*args, **kwargs)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/worker/worker.py", line 155, in init_device
(MyLLM pid=70946)     torch.cuda.set_device(self.device)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 478, in set_device
(MyLLM pid=70946)     torch._C._cuda_setDevice(device)
(MyLLM pid=70946)   File "/usr/local/miniconda3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
(MyLLM pid=70946)     torch._C._cuda_init()
(MyLLM pid=70946) RuntimeError: No CUDA GPUs are available

I found that with transformers==4.47.1 the script runs normally. However, with transformers==4.48.0, 4.48.1, and 4.49.0 I get the error messages above. I then compared environments with pip list and found that only the transformers version differs.

I've also tried vLLM versions from 0.7.0 to 0.7.2; the behavior is the same.

I opened an issue in the transformers repo: huggingface/transformers#36295

Related issue in the Ray project: #13230

@youkaichao
Member

Possibly it's the serialization of MyWorker. Can you try moving the MyWorker class into its own module and passing worker_cls="full.mod.name"?
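
A minimal sketch of that suggestion, assuming a hypothetical module my_workers.py that is importable on every Ray node (the model name below is just an example):

# my_workers.py
from vllm.worker.worker import Worker

class MyWorker(Worker):
    # custom RLHF hooks (e.g. weight-sync methods) would go here
    ...

# driver script: pass the dotted path instead of the class object, so Ray
# serializes only a string and the class is imported inside each worker process
from vllm import LLM

llm = LLM(model="facebook/opt-125m", worker_cls="my_workers.MyWorker")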

@johnny12150

I'm having the same error even with transformers==4.47.1.

@youkaichao
Member

@johnny12150 what's your use case and your code? How do you use vLLM? Are you using RLHF?

@ArthurinRUC
Author

ArthurinRUC commented Feb 24, 2025

Possibly it's the serialization of MyWorker. Can you try moving the MyWorker class into its own module and passing worker_cls="full.mod.name"?

I set worker_cls="vllm_test.custom.MyWorker" and it works! But I wonder how the transformers version can influence vLLM's behavior (and also how this modification changes the serialization of MyWorker) :)

@ruisearch42 added the ray (anything related with ray) label Feb 24, 2025
@ArthurinRUC
Author

Update: when I wrap the demo's main logic into a main() function and call it from another script, like this:

from test_ray_vllm_rlhf import main

main()

With worker_cls="vllm_test.custom.MyWorker", the same error RuntimeError: No CUDA GPUs are available occurs again! Any solution to that?

@johnny12150

@johnny12150 what's your use case and your code? How do you use vLLM? Are you using RLHF?

Hi @youkaichao,
I am trying to use vLLM with Ray to host models.

@hmellor moved this to Backlog in Ray Feb 28, 2025
@hmellor added this to Ray Feb 28, 2025
@youkaichao
Member

@ArthurinRUC I updated the script in #14185 to pass the class by its string name. It should solve the problem now.

Update: when I wrap the demo's main logic into a main() function and call it from another script, like this:

This might be related to Ray; I'm not sure what exactly is happening here.

@anjali-chadha

I am running into the same issue when following the distributed offline inference example. However, this only happens with TP>1; with TP=1 I am able to use the same setup without any issues.

Created an issue with more details: #14413

@Thaurun

Thaurun commented Mar 14, 2025

Running pip install transformers==4.46.2 works for me.

@richardliaw
Collaborator

Closing this, since there are some workarounds. If you are using Ray to serve models, consider using Ray Serve: https://docs.ray.io/en/latest/serve/llm/serving-llms.html

@github-project-automation bot moved this from Backlog to Done in Ray Apr 3, 2025
@ArthurinRUC
Author

ArthurinRUC commented Apr 17, 2025

I finally figured out what's wrong :) TL;DR: it is an issue purely in the transformers library.

In transformers>=4.48.0, a new import, from .integrations.flash_attention import flash_attention_forward, was added to transformers.modeling_utils, and it triggers transformers.utils.import_utils.is_flash_attn_2_available(). That function calls torch.cuda.is_available(), which implicitly initializes the CUDA context, making subsequent changes to CUDA_VISIBLE_DEVICES ineffective.

So if you run code like from transformers import PreTrainedModel or from transformers import Trainer, it will call torch.cuda.is_available(), which reads the current CUDA_VISIBLE_DEVICES value and fixes it for the process. Changes to CUDA_VISIBLE_DEVICES after this import have no effect.

vLLM resets CUDA_VISIBLE_DEVICES for each of its RayWorkers. In my code, from transformers import Trainer was executed before the vLLM instance was created, so vLLM could not find the correct GPU device when initializing the worker.
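
A minimal sketch of that failure mode as described above (the exact caching behavior may vary with the torch version; run on a GPU machine):

import os

# simulate a freshly started Ray worker that has not been assigned a GPU yet
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# with transformers >= 4.48.0 this import indirectly runs
# torch.cuda.is_available() (modeling_utils -> is_flash_attn_2_available()),
# fixing "no visible GPUs" for the whole process
from transformers import Trainer  # noqa: F401

import torch

# later, vLLM assigns a GPU to the worker by rewriting the variable...
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# ...but the already-initialized CUDA state still sees zero devices:
torch.cuda.set_device(0)  # RuntimeError: No CUDA GPUs are available

The workarounds in this thread are consistent with that picture: pin transformers<4.48.0, or make sure nothing imports transformers (directly or transitively) before the vLLM workers have set CUDA_VISIBLE_DEVICES, e.g. by importing Trainer lazily inside main().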
