Llama 2 / WizardLM Megathread
Starting another model megathread to aggregate resources for any newcomers.
It's been a while since I've had a chance to chat with some of these models, so let me know some of your favorites in the comments below.
There are many to choose from - sharing your experience could help someone else decide which to download for their use-case.
Thread Models:
Quantized Base Llama-2 Chat Models
- Llama-2-7b-Chat
  - GPTQ
  - GGUF
  - AWQ
- Llama-2-13B-chat
  - GPTQ
  - GGUF
  - AWQ
- Llama-2-70B-chat
  - GPTQ
  - GGUF
  - AWQ

Quantized WizardLM Models
- WizardLM-7B-V1.0+
  - GPTQ
  - GGUF
  - AWQ
- WizardLM-13B-V1.0+
  - GPTQ
  - GGUF
  - AWQ
- WizardLM-30B-V1.0+
  - GPTQ
    - WizardLM-30B-uncensored-GPTQ
    - WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ
    - WizardLM-33B-V1.0-Uncensored-GPTQ
  - GGUF
    - WizardLM-30B-GGUF
    - WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GGUF
    - WizardLM-33B-V1.0-Uncensored-GGUF
  - AWQ
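If you're new to these quantized formats: GPTQ and AWQ builds are aimed at GPU inference, while GGUF is the llama.cpp format that runs well on CPU with optional GPU offload. Here's a minimal sketch of loading one of the GGUF chat builds with llama-cpp-python; the file name is an assumption, so point it at whichever quantization level you actually downloaded.

```python
# Minimal sketch: run a GGUF quant of Llama-2-7B-Chat with llama-cpp-python.
# The model_path is a placeholder -- substitute your downloaded .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,       # Llama 2 context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me three good uses for a 7B chat model."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```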
Llama 2 Resources
Llama 2 is a large language model developed by Meta and the successor to LLaMA 1. It is free for research and commercial use and is available through providers like AWS, Hugging Face, and others. The Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of LLaMA 1. Its fine-tuned chat models have been trained on over 1 million human annotations.
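If you'd rather run the official (unquantized) weights, here's a minimal sketch using Hugging Face transformers. It assumes you've accepted Meta's license for the gated meta-llama/Llama-2-7b-chat-hf repo and have accelerate installed for device_map.

```python
# Minimal sketch: load the official Llama-2-7B chat checkpoint from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # needs the accelerate package
    torch_dtype="auto",
)

# Llama 2 chat expects the [INST] ... [/INST] prompt format.
prompt = "[INST] What should I know before fine-tuning a 7B model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```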
Llama 2 Benchmarks
Llama 2 shows strong improvements over prior LLMs across diverse NLP benchmarks, especially as model size increases. On well-rounded language tests like MMLU and AGIEval, Llama-2-70B scores 68.9% and 54.2% respectively - far above MPT-7B, Falcon-7B, and even the 65B LLaMA 1 model.
Llama 2 Tutorials
Tutorials by James Briggs (also linked above) are quick, hands-on ways to experiment with Llama 2 workflows. See also the poor man's guide to fine-tuning Llama 2. Check out Replicate if you want to host Llama 2 behind an easy-to-use API.
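For the hosted route, the Replicate Python client keeps things simple. A minimal sketch, assuming the meta/llama-2-70b-chat model slug and these input parameter names (both are assumptions; check the model page on replicate.com before relying on them):

```python
# Minimal sketch of calling a hosted Llama 2 chat model via Replicate.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN environment variable.
import replicate

# Model slug and input keys below are assumptions -- verify them on the model's page.
output = replicate.run(
    "meta/llama-2-70b-chat",
    input={
        "prompt": "Explain the difference between GPTQ and GGUF in two sentences.",
        "max_new_tokens": 200,
    },
)

# Hosted chat models stream tokens, so the result is an iterator of string chunks.
print("".join(output))
```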
Did I miss any models? What are some of your favorites? Which family/foundation/fine-tuning should we cover next?