Recommendations on running GPTs on Asahi - M1 Ultra?
Hi,
I want to run some Large Language Models locally, something like PrivateGPT or the setup from a Medium article I found, on my local Apple Silicon machine, to enhance my privacy but also get some extra help.
Does anyone have recommendations or guides I could follow?
I'm not sure what your intention is.
I'm no expert or highly qualified in any way, so please correct me, but I'm not sure this is the right approach.
LLMs usually need lots of computing power, ideally in the form of a GPU.
I use GPT4All, and when I send a prompt, the temps, fan speed, and usage of my GPU jump to almost 100% instantly. If it's a longer prompt, my PC sounds like a helicopter.
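For what it's worth, GPT4All also ships Python bindings, so you can script it instead of using the desktop app. Here's a minimal sketch, assuming the `gpt4all` package is installed; the model file name is just an example, swap in whatever you actually have:

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model file name below is an example -- substitute any model you
# have downloaded through the GPT4All app (it is fetched on first use
# otherwise).
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

# chat_session keeps conversation history between generate() calls
with model.chat_session():
    reply = model.generate("Summarize why unified memory helps LLMs.",
                           max_tokens=200)
    print(reply)
```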
In terms of hosting a server, you want something just barely good enough for your service, e.g. for running your own cloud. That means far less power draw, which is what you want since it runs 24/7.
Something powerful enough to run LLMs comfortably would likely draw a lot of power, even an Apple Silicon machine.
I think you're better off just using GPT4All on your gaming PC when you need it.
I hope I'm wrong and that M1s barely draw any power, especially at idle.
And even if I am, they can (almost) only run macOS, which wouldn't be a good server OS.
The tl;dr as I understand it is that Apple M1/M2 machines are unusual in that the VRAM (GPU memory) is the same pool as the normal system RAM (unified memory). Because the GPU can address all of that shared memory, LLMs can run on the GPU of these chips with far more effective "VRAM" than a typical discrete card offers, which lets you run bigger models on smaller machines.
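To make that concrete, here's a hedged sketch using the llama-cpp-python bindings (a wrapper around the llama.cpp project mentioned just below). The model path, quantization, and memory figures are illustrative assumptions, not exact numbers:

```python
# Rough sketch with llama-cpp-python (pip install llama-cpp-python).
# On macOS/Apple Silicon the library is typically built with Metal
# support, so n_gpu_layers controls GPU offload; note that Metal is
# macOS-only, so under Asahi Linux you would likely run on the CPU.
from llama_cpp import Llama

# Back-of-envelope: a model quantized to ~4.5 bits/weight needs roughly
#   params (billions) * 4.5 / 8  GB of RAM, so approximately:
#   7B -> ~4 GB, 13B -> ~7 GB, 65B -> ~37 GB
# all of which fit in an M1 Ultra's 64-128 GB of unified memory.

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # example path; use your own GGUF file
    n_gpu_layers=-1,  # offload every layer to the GPU (-1 = as many as possible)
    n_ctx=2048,       # context window
)

out = llm("Q: Why does unified memory help run large models? A:",
          max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```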
Llama.cpp was the software people originally used to do this. I can't find the original guide/article I looked at, but here is a GitHub gist where the commenters have posted benchmarks: