pip install mlx-whisper
import mlx_whisper

# Path to the audio file to transcribe (replace with your own file)
speech_file = "audio.mp3"

result = mlx_whisper.transcribe(
    speech_file,
    path_or_hf_repo="mlx-community/whisper-large-v3-mlx",
)
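The returned value follows the standard Whisper output format, so the full transcription and per-segment timestamps can be read from the result dictionary. A minimal sketch, assuming the usual "text" and "segments" keys:

# Full transcription text
print(result["text"])

# Per-segment start/end times and text
for segment in result["segments"]:
    print(f'[{segment["start"]:.2f}s -> {segment["end"]:.2f}s] {segment["text"]}')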