A Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML)

The main goal of minigpt4.cpp is to run MiniGPT4 using 4-bit quantization with the ggml library.

https://github.com/Maknee/minigpt4.cpp

TechNews @radiation.party (bot)
[HN] Minigpt4 Inference on CPU