# mlx-community/gpt-oss-20b-MXFP4-Q8

This model, `mlx-community/gpt-oss-20b-MXFP4-Q8`, was converted to MLX format from `openai/gpt-oss-20b` using mlx-lm version 0.27.0.
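
For reference, a conversion like this one can be reproduced with mlx-lm's `convert` utility. Below is a minimal sketch, assuming default settings; the exact quantization options used for this checkpoint are not documented here, so the arguments are illustrative:

```python
# Minimal sketch of an mlx-lm conversion, not the exact recipe for this
# checkpoint. The output path and quantization settings are assumptions;
# see `mlx_lm.convert --help` for the full option list.
from mlx_lm import convert

convert(
    hf_path="openai/gpt-oss-20b",     # source Hugging Face repo
    mlx_path="gpt-oss-20b-MXFP4-Q8",  # illustrative output directory
    quantize=True,                    # quantize weights during conversion
)
```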

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gpt-oss-20b-MXFP4-Q8")

prompt = "hello"

# Apply the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
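
If you want tokens as they are produced rather than one final string, mlx-lm also provides a streaming generator. Here is a minimal sketch, assuming the `stream_generate` API from recent mlx-lm releases, where each yielded response exposes the newly decoded text as `.text`:

```python
# Hedged sketch: streaming generation with the same model. Assumes the
# stream_generate API from recent mlx-lm releases; the max_tokens value
# is arbitrary.
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/gpt-oss-20b-MXFP4-Q8")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```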