Can we say genocide already?
I don't get why a group of users who are willing to run their own LLMs locally, and who don't want to rely on centralized corporations like OpenAI or Google, prefer to discuss it on a centralized site like Reddit.
Decent enough for a model 50 times smaller than ChatGPT. I use orca_mini_3b.
Compile llama.cpp, download a small GGML LLM model, and you will have a quite intelligent assistant running on your phone.
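A minimal sketch of those steps (the llama.cpp repo URL is real; the model filename is just an example of a quantized orca-mini-3b GGML file, so substitute whichever small model you actually download — on Android you'd typically run this inside Termux):

```shell
# Clone and build llama.cpp (CPU-only build, no extra dependencies)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download a small quantized GGML model into models/ first
# (e.g. a 4-bit orca-mini-3b file from Hugging Face), then run:
./main -m models/orca-mini-3b.ggmlv3.q4_0.bin \
       -p "Explain what quantization does to a language model." \
       -n 128
```

The 4-bit quantized file keeps a 3B-parameter model small enough to fit in a phone's RAM, which is why this works at all on mobile hardware.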