ngxson/Vintern-1B-v3_5-GGUF

GGUF quantization of 5CD-AI/Vintern-1B-v3_5. License: MIT.

Original model: https://huggingface.co/5CD-AI/Vintern-1B-v3_5

How to use this model

Install llama.cpp
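
If llama.cpp is not installed yet, one quick option (assuming Homebrew on macOS or Linux) is to install it from the Homebrew formula; prebuilt binaries are also available from the llama.cpp releases page.

brew install llama.cpp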

Then:

llama-server -hf ngxson/Vintern-1B-v3_5-GGUF --chat-template vicuna
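
This downloads the GGUF from the Hub and starts a local HTTP server (by default on port 8080) exposing an OpenAI-compatible API. A minimal test request, assuming the default host and port, could look like:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Xin chào!"}]}'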