GPT4All is a free-to-use, locally running, privacy-aware large language model distributed as a 3GB–8GB file that you can download and query. No GPU or internet required.
gpt4all: an ecosystem of open-source chatbots trained on a massive collection of clean assistant data including code, stories and dialogue (GitHub: nomic-ai/gpt4all)
Wanted to share a resource I stumbled on that I can't wait to try and integrate into my projects.
A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
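For anyone wondering what "download and plug into" looks like in practice, here's a minimal sketch using the gpt4all Python bindings. The model filename below is just an example I picked (check the model list in the app or on the site for the exact names); the library fetches the file on first use and then everything runs offline on CPU.

```python
# Minimal sketch: querying a GPT4All model locally via the Python bindings.
# The model filename is a placeholder; substitute any model from the GPT4All
# model list. It is downloaded on first use (a few GB) and then runs fully
# offline on CPU.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # placeholder model name

with model.chat_session():
    reply = model.generate("Explain what GPT4All is in two sentences.", max_tokens=128)
    print(reply)
```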
I'm still waiting for a local autonomous AI agent with search. I don't understand why most autonomous agent projects use GPT-4 without incorporating search capabilities. Allowing the model to continuously hallucinate is not productive. Instead, it should be able to discover factual information and perform genuinely useful tasks.
I'm looking forward to FOSS AI solutions having their breakthrough, but for now they can't compete with proprietary software. Except maybe Stable Diffusion.
Worth noting that if you want a local LLM on Android, MLCChat can run Vicuna-7B, RedPajama, and several other models from Hugging Face on fairly average hardware. The interface is still basic, but it's functional.
I'll probably stick to oobabooga for now (mixed up my tools; I meant that, not automatic1111), seeing as they both seem to run the same models. Certainly neat to see a more general, user-friendly app though.
Been playing around with it for a couple of weeks, and its local server option made it really easy to use with LangChain (rough hook-up sketch below). Plus Orca Mini is amazingly fast, though it seems to need proper prompts; I still need to work that out :D
Oh, and it even lets you see the server-side chat, which is reaaaaally useful when you chain prompts with LangChain.
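Roughly, hooking LangChain up to the local server looks like this. Treat it as a sketch: the port (4891 is the usual default) and the model name depend on your GPT4All server settings, so adjust both to whatever your install exposes.

```python
# Sketch: pointing LangChain's OpenAI-compatible chat client at GPT4All's
# local API server. Assumes the server is enabled in the GPT4All app and
# listening on its default port (4891); the model name is a placeholder for
# whatever model the app has loaded.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:4891/v1",  # GPT4All's OpenAI-compatible endpoint
    api_key="not-needed",                 # the local server doesn't check the key
    model="orca-mini-3b",                 # placeholder model name
    temperature=0.2,
)

print(llm.invoke("Summarize what GPT4All is in one sentence.").content)
```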
How does it compare to commercially available options, namely for code generation, text summarization, and answering programming questions? I'm curious if they trained it on code.
I'm really not able to answer this precisely, since I've only used the commercial alternatives for playing around... What I can say is that the "Nous - Vicuna" model didn't feel worse than GPT-3.5 overall (and there are a dozen other models available), just a bit slower (which depends on your computer). The GPT4All team also curates their list of models, which is really convenient considering the million new models appearing every day. And the app keeps getting new features.
We also chose this system because self-hosting is safer, keeps you in control, and is free. Plus we try to only use the LLM where needed in our small project, so I'll be able to give more insight about that later, I think, but overall it's more than usable.