SDXL - VAE

How to use with 🧨 diffusers

You can integrate this fine-tuned VAE decoder into your existing diffusers workflows by including a vae argument in the StableDiffusionPipeline:

from diffusers.models import AutoencoderKL
from diffusers import StableDiffusionPipeline

# Load the fine-tuned SDXL VAE, then pass it to the pipeline so it
# replaces the model's default autoencoder.
model = "stabilityai/your-stable-diffusion-model"
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")
pipe = StableDiffusionPipeline.from_pretrained(model, vae=vae)
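Once constructed, the pipeline is used like any other StableDiffusionPipeline. A minimal sketch (the prompt and output filename are illustrative, and a CUDA GPU is assumed):

# Move the pipeline to the GPU and generate an image.
pipe = pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")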

Model

SDXL is a latent diffusion model, where the diffusion operates in a pretrained, learned (and fixed) latent space of an autoencoder. While the bulk of the semantic composition is done by the latent diffusion model, we can improve local, high-frequency details in generated images by improving the quality of the autoencoder. To this end, we train the same autoencoder architecture used for the original Stable Diffusion at a larger batch size (256 vs. 9) and additionally track the weights with an exponential moving average (EMA). The resulting autoencoder outperforms the original model in all evaluated reconstruction metrics; see the table below.
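For readers unfamiliar with EMA weight tracking: the idea is to maintain a second, smoothed copy of the weights that is blended toward the live weights after each optimizer step, and to release the smoothed copy. A minimal PyTorch sketch; the decay value and the stand-in model are illustrative assumptions, not the actual SDXL-VAE training code:

import copy
import torch

def ema_update(ema_model, model, decay=0.9999):
    # Blend each live parameter into its smoothed counterpart:
    # ema = decay * ema + (1 - decay) * current
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1 - decay)

# Keep a frozen copy alongside the trained model (here a toy stand-in
# for the autoencoder) and update it after every optimizer step.
model = torch.nn.Linear(4, 4)
ema_model = copy.deepcopy(model)
# ... inside the training loop, after optimizer.step():
ema_update(ema_model, model)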

Evaluation

SDXL-VAE vs original kl-f8 VAE vs f8-ft-MSE

COCO 2017 (256x256, val, 5000 images)

| Model    | rFID | PSNR         | SSIM          | PSIM          | Link                                                                                                 | Comments                                                                                                  |
|----------|------|--------------|---------------|---------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|
| SDXL-VAE | 4.42 | 24.7 +/- 3.9 | 0.73 +/- 0.13 | 0.88 +/- 0.27 | https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors                            | as used in SDXL                                                                                             |
| original | 4.99 | 23.4 +/- 3.8 | 0.69 +/- 0.14 | 1.01 +/- 0.28 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip                                                | as used in SD                                                                                               |
| ft-MSE   | 4.70 | 24.5 +/- 3.7 | 0.71 +/- 0.13 | 0.92 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. loss = MSE + 0.1 * LPIPS), smoother outputs             |
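As a point of reference for the table, PSNR measures per-pixel reconstruction fidelity in decibels (higher is better). A minimal sketch of computing it for a VAE round trip; the tensor shape and value range are assumptions, not part of the evaluation protocol above:

import torch

def psnr(original, reconstruction, max_val=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE), where MAX is the peak signal value.
    mse = torch.mean((original - reconstruction) ** 2)
    return (10 * torch.log10(max_val ** 2 / mse)).item()

# Illustrative round trip through the VAE, with `vae` an AutoencoderKL
# and `img` a [1, 3, H, W] tensor scaled to [-1, 1]:
#   latents = vae.encode(img).latent_dist.sample()
#   rec = vae.decode(latents).sample
#   score = psnr(img, rec, max_val=2.0)  # peak-to-peak range of [-1, 1] is 2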