I'm using local models. Why pay somebody else or hand them my data?
Sometimes you need to search for something and SEO spam makes it impossible, however you word the query. An LLM won't necessarily give you a useful answer, but it will at least take your query at face value, and it usually provides enough context around the question to make a follow-up web search easier, should you decide to look further.
Sometimes you need to troubleshoot something non-obvious, and asking a local LLM is the most straightforward option.
Using an LLM in scripts adds a semantic layer to whatever you're trying to automate: you can process a large number of small files in a way that's hard to script by hand, because the right action depends on what's inside each file.
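For instance, sorting notes into folders by topic. This is a minimal sketch that assumes a llama.cpp-style server on localhost:8080; the prompt, URL, and folder layout are all made up for illustration:

```python
import json
import pathlib
import urllib.request

PROMPT = ("Reply with a single lowercase word naming the topic of this note "
          "(e.g. finance, recipes, code).\n\n")

def slugify(reply: str) -> str:
    # Make the model's reply safe to use as a directory name.
    words = reply.strip().lower().split()
    word = "".join(c for c in words[0] if c.isalnum()) if words else ""
    return word or "misc"

def ask(text: str, url: str = "http://localhost:8080/v1/chat/completions") -> str:
    # Send one chat-completion request to the local server.
    body = json.dumps({"messages": [{"role": "user", "content": PROMPT + text}]}).encode()
    req = urllib.request.Request(url, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

def sort_notes(folder: str) -> None:
    # Move each .txt file into a subfolder named after its topic.
    for path in pathlib.Path(folder).glob("*.txt"):
        dest = path.parent / slugify(ask(path.read_text()))
        dest.mkdir(exist_ok=True)
        path.rename(dest / path.name)
```

The `slugify` step matters in practice: models love to answer "Finance." or "It's about recipes" when you asked for one word.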
Some put together an LLM, a speech-to-text model, a text-to-speech model, and function calling to make an assistant that can act on voice commands without you touching your computer. It sounds like plenty of work to wire together, but I may try it later.
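The function-calling glue in such a setup can be as simple as parsing a JSON tool call out of the model's reply and dispatching it to a registered function. A sketch — the tool, its name, and the JSON shape are invented for illustration, not any particular library's API:

```python
import json

TOOLS = {}

def tool(fn):
    # Register a function so the model can call it by name.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def set_timer(minutes: int) -> str:
    return f"timer set for {minutes} min"

def dispatch(llm_reply: str) -> str:
    # Expect the model to emit JSON like:
    #   {"name": "set_timer", "args": {"minutes": 5}}
    call = json.loads(llm_reply)
    return TOOLS[call["name"]](**call.get("args", {}))
```

The reply from `dispatch` would then go back through the LLM and out the text-to-speech model.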
Some use RAG to query large amounts of information. I think it's a hopeless struggle, and the real solution is an architecture other than a variation of the Transformer/SSM: one that properly addresses real-time learning, long-term memory, and agency.
Some use LLMs as editor-integrated coding assistants. I've never tried anything like that yet (I do ask coding questions sometimes, though), but I'm going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
Thank you. The "jonfairbanks" GitHub repo is exactly what I was looking for, because FUCK sending any of my data to an AI company via their APIs for them to ingest and sell off to others.
As far as I understand, all of them can be made to work locally, with varying degrees of effort, especially if your local model is served via an OpenAI-compatible API (e.g. llama.cpp's server binary).
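Concretely, tools built for the hosted API usually just need the base URL pointed at your local server. A hedged sketch using only the standard library; the port and model name are assumptions, adjust for your setup:

```python
import json
import urllib.request

def build_payload(messages, model="local-model"):
    # Same JSON shape the hosted OpenAI chat API expects.
    return {"model": model, "messages": messages}

def chat(messages, base_url="http://localhost:8080/v1"):
    # POST a chat completion to an OpenAI-compatible endpoint,
    # e.g. llama.cpp's server listening on port 8080.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(messages)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Usage would be something like `chat([{"role": "user", "content": "hi"}])` — same request shape as the paid APIs, no data leaving your machine.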
Not comment-OP, but you could start here: !fosai@lemmy.world. The latest post links to a RAG tutorial, and there are various other resources in the sidebar.