WizardLM-70B-V1.0 Released on HF
These are the full weights; quants from TheBloke are already incoming. I'll update this post when they're fully uploaded.
From the author(s):
WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.
This model is license friendly and follows the same license as Meta's Llama-2.
The next version is in training and will be released together with our new paper soon.
For more details, please refer to:
Model weight: https://huggingface.co/WizardLM/WizardLM-70B-V1.0
Demo and Github: https://github.com/nlpxucan/WizardLM
Twitter: https://twitter.com/WizardLM_AI
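If you want to try the full-precision weights linked above, here's a minimal loading sketch (not from the post), assuming you have the hardware for a 70B model and recent transformers + accelerate installed. The prompt shown is the Vicuna-style template I believe the model card describes; double-check it there before relying on it.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "WizardLM/WizardLM-70B-V1.0"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # spread layers across available GPUs / CPU RAM
        torch_dtype="auto",  # load in the checkpoint's native dtype
    )

    prompt = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        "USER: Write a haiku about open-source LLMs. ASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))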
GGML quant posted: https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML
GPTQ quant repo posted, but still empty (GPTQ quants take a lot longer to make): https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
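Once one of the GGML files is downloaded, a rough sketch of running it locally with llama-cpp-python looks like this. The filename and settings are assumptions on my part, not from the post; pick whichever quant level fits your RAM from TheBloke's GGML repo above.

    from llama_cpp import Llama

    llm = Llama(
        model_path="./wizardlm-70b-v1.0.ggmlv3.q2_K.bin",  # hypothetical filename; check the repo for the real ones
        n_ctx=4096,        # Llama-2 context window
        n_gpu_layers=0,    # raise this to offload some layers to a GPU if you have one
        # note: 70B GGML models of that era also needed a grouped-query-attention
        # setting (gqa = 8 in llama.cpp); check your llama-cpp-python version's docs
    )

    out = llm(
        "USER: Write a haiku about open-source LLMs. ASSISTANT:",
        max_tokens=128,
    )
    print(out["choices"][0]["text"])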
Me a few months ago when upgrading my computer: pff, who needs 64GB of RAM? Seems like a total waste
Me after realising you can run LLMs at home: cries
Tried the q2 GGML and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.
Agreed, it seems quite capable. I haven't tested all the way down to q2 to verify, but I'm not surprised.