I'm using an AMD GPU (an MI210 with ROCm 5.7.0). lighteval installed successfully; however, when I run the example command

```
python run_evals_accelerate.py --model_args "pretrained=gpt2" --tasks tasks_examples/open_llm_leaderboard_tasks.txt --override_batch_size 1 --save_details --output_dir="tmp/"
```

I get an error like this:
```
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
INFO:absl:Using default tokenizer.
INFO:absl:Using default tokenizer.
INFO:absl:Using default tokenizer.
INFO:absl:Using default tokenizer.
INFO:absl:Using default tokenizer.
Traceback (most recent call last):
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/run_evals_accelerate.py", line 7, in <module>
    from lighteval.main_accelerate import CACHE_DIR, main
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/main_accelerate.py", line 9, in <module>
    from lighteval.evaluator import evaluate, make_results_table
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/evaluator.py", line 10, in <module>
    from lighteval.logging.evaluation_tracker import EvaluationTracker
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/logging/evaluation_tracker.py", line 14, in <module>
    from lighteval.logging.info_loggers import (
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/logging/info_loggers.py", line 14, in <module>
    from lighteval.models.model_loader import ModelInfo
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/models/model_loader.py", line 5, in <module>
    from lighteval.models.adapter_model import AdapterModel
  File "/group/ossdphi_algo_scratch_04/fuweiy/LLM/eval/lighteval/src/lighteval/models/adapter_model.py", line 14, in <module>
    from peft import PeftModel
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/peft/mapping.py", line 16, in <module>
    from .peft_model import (
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/peft/peft_model.py", line 31, in <module>
    from .tuners import (
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/peft/tuners/__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/peft/tuners/lora.py", line 40, in <module>
    import bitsandbytes as bnb
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 13, in <module>
    setup.run_cuda_setup()
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 121, in run_cuda_setup
    binary_name, cudart_path, cc, cuda_version_string = evaluate_cuda_setup()
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 347, in evaluate_cuda_setup
    cuda_version_string = get_cuda_version()
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 317, in get_cuda_version
    major, minor = map(int, torch.version.cuda.split("."))
AttributeError: 'NoneType' object has no attribute 'split'
```
It seems that AMD GPUs are not supported? Is there any workaround? Thanks.
Hi!
Thanks for your feedback, and trying out lighteval!
We have not tested lighteval on AMD GPUs; however, it seems that one of our optional dependencies (bitsandbytes) does not support them well at the moment.
Can you retry without this dependency?
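For context, the crash happens because bitsandbytes assumes a CUDA build of PyTorch: it parses `torch.version.cuda`, which is `None` on ROCm builds (where the version lives in `torch.version.hip` instead). A minimal, GPU-free sketch of the failure mode — `parse_cuda_version` is a hypothetical helper mirroring what bitsandbytes' `get_cuda_version` does on its last line:

```python
def parse_cuda_version(cuda_version):
    """Parse a 'major.minor' CUDA version string, guarding the ROCm/CPU case."""
    if cuda_version is None:
        # On ROCm builds, torch.version.cuda is None; bitsandbytes skips this
        # check, so .split(".") raises:
        #   AttributeError: 'NoneType' object has no attribute 'split'
        return None
    major, minor = map(int, cuda_version.split(".")[:2])
    return major, minor

print(parse_cuda_version("11.8"))  # -> (11, 8), as on a CUDA build of torch
print(parse_cuda_version(None))    # -> None, as on a ROCm build of torch
```

Since lighteval only imports bitsandbytes through the optional PEFT/adapter path, uninstalling bitsandbytes from the environment should let the import chain succeed.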