I'd have to say I'm very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it's slow, the results are quite impressive.
Let's talk about our experiences working with different models, whether well-known or lesser-known.
Which locally run language models have you tried? Share your insights, challenges, or anything interesting you came across while working with them.
I figured I'd post this. It's a great way to get an LLM set up on your computer, and it's extremely easy for folks who don't have much technical knowledge!
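If you'd rather script GPT4All than use the desktop app, here's a minimal sketch using its Python bindings. The model filename below is just a placeholder assumption; swap in whatever GGUF file you've downloaded or pick one from GPT4All's model browser.

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is a placeholder; replace it with a model from
# GPT4All's model list or a local GGUF file you've already downloaded.
from gpt4all import GPT4All

model = GPT4All("wizardlm-13b-v1.2.Q4_0.gguf")  # fetches the model on first run

# chat_session() keeps conversation history between generate() calls
with model.chat_session():
    reply = model.generate("Explain what a local LLM is in one paragraph.", max_tokens=200)
    print(reply)
```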