I don't consider myself very technical. I've never taken a computer science course and don't know Python. I've learned some things, like Linux, the command line, Docker, and networking/pfSense, because I value my privacy. My point is that anyone can do this, even if you aren't technical.
I tried both LM Studio and Ollama, and I prefer Ollama. You then download models and use them to have your own private, personal GPT. I access it on my local machine through the command line, but I also installed Open WebUI in a Docker container so I can reach it from any device on my local network (I don't expose services to the internet).
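For anyone curious what that looks like, here's a minimal docker-compose.yml sketch. The image name and internal port (8080) follow Open WebUI's upstream docs as I remember them; the host port and volume name are my own choices, so adjust to taste:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main   # upstream image
    ports:
      - "3000:8080"                 # LAN-only; nothing exposed to the internet
    volumes:
      - open-webui:/app/backend/data   # persist chats and settings
    restart: unless-stopped

volumes:
  open-webui:
```

After `docker compose up -d`, any device on the LAN can hit http://your-host:3000.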
Having a private AI/GPT is pretty cool. You can download and test new models, and it's private. Yes, there are ethical concerns about how the models were trained, and I'm not minimizing them. But if you want your own AI/GPT assistant, give it a try. I set it up in a couple of hours, and as I said... I'm not even that technical.
Open WebUI now has a Docker environment variable that lets you turn off the login page entirely. You just declare it when you're spinning up the container and you're good to go.
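If memory serves, the variable is WEBUI_AUTH, set to False at container start (only sane on a trusted LAN, obviously). A sketch of the relevant compose stanza, assuming the upstream image and port:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - WEBUI_AUTH=False   # skips the login page; trusted-LAN use only
    ports:
      - "3000:8080"
```

Check the Open WebUI docs before relying on this, since env var names can change between releases.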
I was just talking to a member of my DevOps team about this exact thing, and they said, "I didn't know you could attach a GPU to a container." So, yup, just stay on top of this stuff at home and you'll do fine.
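For the record, GPU-in-a-container mostly needs the NVIDIA Container Toolkit installed on the host plus a device reservation in your compose file. The deploy stanza below is standard Compose syntax; the Ollama service around it is just an illustrative example:

```yaml
services:
  ollama:
    image: ollama/ollama            # upstream Ollama image
    volumes:
      - ollama:/root/.ollama        # keep downloaded models between restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # hand every GPU to the container
              capabilities: [gpu]

volumes:
  ollama:
```

With plain `docker run`, the equivalent is the `--gpus all` flag.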
I'm sorry if I offended. I can't code or understand existing code and have always felt that technical people code. I guess I should expand my definition. Again, sorry that my words felt like a punch in the gut... wasn't my intention at all.
Have you found much practical use for small models yet? I love the idea that even the 1.1B TinyLlama model can run on my phone, but I haven't found much real-world use for it yet. Llama 3 8B feels better, but not by much; even for emails it's a bit dumb.
I use my phone all the time, but I just use a WireGuard VPN to tunnel into the Open WebUI container at home. Then I can interact with my desktop machine, which has an NVIDIA GPU. I'm currently testing mistral-nemo. It's pretty great, but it gets a bit verbose sometimes.
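The phone side of a setup like that is just a WireGuard peer pointing home. A sketch with placeholder keys and addresses (every value here is made up; pinning AllowedIPs to the home subnets keeps it split-tunnel instead of routing all traffic):

```ini
# Phone-side wg0.conf — placeholder keys and a hypothetical endpoint
[Interface]
PrivateKey = <phone-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.net:51820          # hypothetical DDNS name
AllowedIPs = 192.168.1.0/24, 10.8.0.0/24   # only route home-network traffic
PersistentKeepalive = 25                    # helps behind mobile NAT
```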
I'm also using Open WebUI. Most LLMs are too verbose for me, so I created a model in Open WebUI with the system prompt "Do not repeat the questions. Avoid giving lists as answers. Do not summarize the answer at the end. If asked a follow-up question, respond with only new information, do not repeat previously stated information." and named it No Nonsense.
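The same trick works outside Open WebUI, too: Ollama lets you bake a system prompt into a named model via a Modelfile. A sketch, assuming a llama3 base (use whatever model you already run):

```
# Modelfile — build with: ollama create no-nonsense -f Modelfile
FROM llama3
SYSTEM """Do not repeat the questions. Avoid giving lists as answers. Do not summarize the answer at the end. If asked a follow-up question, respond with only new information, do not repeat previously stated information."""
```

Then `ollama run no-nonsense` starts a chat with that prompt always applied.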
Yeah, I like it too. My only issue is Ollama's lack of Intel support; I've been watching issue 1590 on their GitHub. For now I have a 1050 Ti in a cardboard-box PC whose other hardware is 10+ years old, with a mixed set of RAM totalling 12 GB. It also has a 100 Mbit NIC, so I can't take advantage of my full internet speed when downloading models. The worst part is they could support Intel, but haven't merged the solution because of an issue with the Windows Intel drivers. Linux is fine, but I can't have it. I wasn't planning to rant, but I already typed it, so... enjoy?
Yeah, I have an NVIDIA GPU and it is magic. The best part: while you're using Ollama, open a second terminal window and enter the command watch -n 0.5 nvidia-smi, and you can see your GPU usage go up and down in real time as you ask the GPT questions. Pretty cool.
Hopefully they get the ARC folks up and running soon.
I access it both on my local machine through the command line
You really don't have to - there's GPT4All, designed for normal users, with a very simple GUI.
Also, with minimal command-line knowledge you can install InvokeAI - probably the best UX for image-generating AI on the market. It works on both Linux and Windows.
It's great that there's so much ongoing development of these kinds of tools. I'm currently using Open WebUI as my GUI, but I'll give your suggestion a try next week. I haven't figured out a use case for Stable Diffusion except creating new content for the shitposting community on Lemmy, lol. But if you have any ideas, please let me know... I'd love to test it out if I have a good use case.
If you like to write, I find that storyboarding with Stable Diffusion is a definite improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
Both my avatar and channel cover are made with AI models - so this is a good start.
IMO the biggest potential is indie game dev - AI image generation is amazing for static backgrounds and character design, and with certain LoRAs it absolutely shreds pixel art. I even saw entire workflows for building pixel-art animations (I think that was for ComfyUI, though).
Also local image models are uncensored so... porn XD