- So I have a 16GB RAM RPi5... I was trying to run llama.cpp with Vulkan, and it sees the GPU and all,
but it shouts "not enough Shared Memory to run model".
Is there anything you suggest I try? (Maybe change the local size for the shaders?)
If anyone wants to help or follow this, please do!
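For what it's worth, that message is most likely about the per-workgroup limit the Vulkan driver reports (`maxComputeSharedMemorySize`), which llama.cpp's compute shaders have to fit into, not about the 16GB of system RAM. A minimal C sketch to print what the device actually advertises, assuming the Vulkan headers and loader are installed (build with something like `gcc shmem.c -o shmem -lvulkan`; the filename is arbitrary):

```c
/* Print each Vulkan device's compute shared-memory limit. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ci, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t n = 0;
    vkEnumeratePhysicalDevices(inst, &n, NULL);
    if (n > 8) n = 8;                 /* cap for the fixed-size array below */
    VkPhysicalDevice devs[8];
    vkEnumeratePhysicalDevices(inst, &n, devs);

    for (uint32_t i = 0; i < n; i++) {
        VkPhysicalDeviceProperties p;
        vkGetPhysicalDeviceProperties(devs[i], &p);
        /* The per-workgroup budget that compute shaders must fit into. */
        printf("%s: maxComputeSharedMemorySize = %u bytes\n",
               p.deviceName, p.limits.maxComputeSharedMemorySize);
    }

    vkDestroyInstance(inst, NULL);
    return 0;
}
```

On the RPi5's VideoCore GPU this limit is typically much smaller than what desktop GPUs report, so kernels written for a larger workgroup-local budget can refuse to launch. Shrinking the shaders' local size, as suggested above, is the matching lever if one is willing to rebuild the shaders.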
Replies: 1 comment

- I have the same issue... I wonder if there is a way to increase the shared memory available to the model. Does a tiny LLM work?
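On the "tiny LLM" idea: a related low-effort experiment is to keep all layers on the CPU (with the CLI that's `-ngl 0`) and then raise the offload count step by step to see exactly where Vulkan gives up. A minimal sketch against llama.cpp's C API; the loader name assumes a recent tree (`llama_model_load_from_file`; older versions call it `llama_load_model_from_file`):

```c
/* Sketch: load a GGUF model with zero layers offloaded to the GPU.
 * Illustration only, not the project's recommended recipe. */
#include <stdio.h>
#include "llama.h"

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    struct llama_model_params mp = llama_model_default_params();
    mp.n_gpu_layers = 0;  /* 0 = pure CPU; raise step by step to probe the GPU */

    struct llama_model *model = llama_model_load_from_file(argv[1], mp);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }
    printf("loaded %s with n_gpu_layers = %d\n", argv[1], mp.n_gpu_layers);

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```

If a tiny model loads with a few layers offloaded but a larger one does not, the bottleneck is more likely the per-workgroup shader limit than total memory, which would point back at the local-size idea above.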