Apple M1 - autotrain setup warning - The installed version of bitsandbytes was compiled without GPU support. #278
Comments
I would also like to know about the possibilities of using a Mac M2 for autotrain. Thanks.
@neoneye, @QueryType, did you try running it on CPU only, though?
I received a response from Abhishek Thakur; he says M2 is not yet supported, so I'm hoping support comes through. I do not know exactly how to run on CPU only.
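(As an aside for anyone else wondering how to run CPU-only: a common trick for PyTorch-based tools is to hide the GPUs before launching. A minimal sketch, noting that `CUDA_VISIBLE_DEVICES` is standard CUDA behaviour and mainly matters on NVIDIA machines; on Apple Silicon there is no CUDA, so CPU is already the default unless the tool explicitly opts into the MPS backend:)

```python
import os

# Hiding all CUDA devices forces most PyTorch-based tools onto the CPU.
# Set this before the training tool initializes its device selection.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```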
I didn't continue with autotrain on macOS. Instead I ended up using axolotl for training on a hefty GPU in the cloud.
@neoneye, @QueryType, thanks for your prompt reply. Is there any way to do fine-tuning on a Mac? It's OK for me if the GPU is not utilised. @neoneye, the place I work at has a lot of confidential data and they are not willing to give it to cloud providers. Any idea how I can work with this? Thank you.
Also, can we use the llama2.c repo somehow to train on a Mac?
Why would you want to use autotrain on a Mac? To finetune LLMs, or something else?
You can do the same, and maybe better, using autotrain. Everything lies in the Hugging Face ecosystem.
LLM training on M1/M2 is available from version 0.6.35+. Please update.
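(For reference, the update is a one-liner, assuming the standard PyPI package name `autotrain-advanced` used by this project:)

```shell
pip install -U autotrain-advanced
```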
This is cool, thanks. Let me try. I agree, @abhishekkrthakur, it is not a good idea to train locally. However, my organisation currently requires everything to be "local" and nothing on the "internet" due to IPR etc. We can break our heads against a wall but cannot explain the logic to Legal. :)
The problem is, it will take ages (provided it works). There is no int4, int8 or fp16 on M1/M2 yet.
@abhishekkrthakur, thank you so much for replying. I am a big fan of your work, especially your YouTube videos; they are very well explained. As @QueryType explained, I can't use cloud solutions for the same reason. It actually feels good to know someone else is going through the same headache xD. I'll try it out and let you know, @abhishekkrthakur.
Thank you for your kind words. What I'm saying is that training on M1/M2 will take ages, if it works at all. If cloud isn't an option, it would be better to move to a local Ubuntu machine with several GPUs instead.
Ah, I see. Yes, I am trying to borrow a gaming laptop from a friend; buying GPUs is not an option for now. I do, however, have access to an M2 Ultra with 128 GB memory, a 24-core CPU and a 76-core GPU. Is this also not viable?
@abhishekkrthakur, I ran autotrain-advanced on the Mac and it seems to have worked. I didn't load the model in 8-bit, though. However, I am getting some warnings.
I had a question, though: these files got generated. What is the next step?
I'm getting a warning during installation that worries me: will autotrain be able to fine-tune Llama without GPU acceleration?
I investigated how to compile `bitsandbytes` with GPU acceleration for M1, and it's not yet supported; see issue 252.

Ideas for improvement: that `autotrain` indeed works on Mac with GPU acceleration.