> **Note**
> This project wouldn't exist without the inspiration and generous support of the incredible community at linux.do.
Local speech synthesis, editing, and transcription on Apple Silicon, running pure MLX. No cloud, no PyTorch.
| Alias | Type | Description |
|---|---|---|
| `fish-s2-pro` | TTS | Fish S2 Pro — dual-AR TTS, voice cloning, emotion tags |
| `vibevoice` | TTS | VibeVoice Large — hybrid LLM+diffusion TTS, voice cloning |
| `longcat` | TTS | LongCat AudioDiT — flow-matching diffusion TTS |
| `moss-local` | TTS | OpenMOSS TTS Local — local-attention multi-VQ TTS |
| `moss-ttsd` | TTS | OpenMOSS TTS Delay — delay-pattern dialogue TTS |
| `moss-sound-effect` | TTS | OpenMOSS Sound Effect — text-to-sound-effect generation |
| `step-audio` | TTS | Step-Audio-EditX — voice cloning + audio editing |
| `cohere-asr` | ASR | Cohere Transcribe — multilingual ASR |
- Apple Silicon Mac (M1 or later)
- Python 3.13+
```bash
pip install mlx-speech
```

Models download automatically from Hugging Face on first use.
Python API:

```python
import mlx_speech

# Text-to-speech
model = mlx_speech.tts.load("fish-s2-pro")
result = model.generate("Hello from mlx-speech!")
# result.waveform: mx.array, result.sample_rate: int

# Voice cloning with emotion tags
result = model.generate(
    "[excited] This is amazing!",
    reference_audio="reference.wav",
    reference_text="Transcript of the reference audio.",
)

# Speech-to-text
asr = mlx_speech.asr.load("cohere-asr")
result = asr.generate("audio.wav")
print(result.text)

# List available models
mlx_speech.tts.list_models()
mlx_speech.asr.list_models()
```

CLI:
```bash
# Generate speech
mlx-speech tts --model fish-s2-pro --text "Hello!" -o output.wav

# Voice cloning with emotion tags
mlx-speech tts --model fish-s2-pro \
  --text "[whisper] Just between us..." \
  --reference-audio ref.wav \
  --reference-text "Transcript of reference." \
  -o cloned.wav

# Step Audio emotion editing
mlx-speech tts --model step-audio \
  --reference-audio input.wav \
  --reference-text "Transcript." \
  --edit-type emotion --edit-info happy \
  -o happy.wav

# Sound effect generation
mlx-speech tts --model moss-sound-effect \
  --text "rolling thunder with rainfall" \
  --duration-seconds 8 \
  -o thunder.wav

# Transcribe audio
mlx-speech asr --model cohere-asr --audio speech.wav

# Discover models
mlx-speech tts --list-models
mlx-speech asr --list-models
mlx-speech --help
```

Local model paths work too:

```bash
mlx-speech tts --model models/fish_s2_pro/mlx-int8 --text "Hello!" -o output.wav
```

Note: The `mlx-speech` CLI covers the common path — basic generation, voice cloning, and editing. For advanced controls (sampling temperature, top-p/k, diffusion steps, batch JSONL, duration tuning, etc.), use the scripts in `scripts/` directly. Each model family has a corresponding script with the full inference surface documented in `docs/`.
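For batch runs, the scripts consume JSONL. The exact schema varies per script (check `docs/`); as a sketch, assuming one job per line with hypothetical `text` and `output` keys:

```python
import json

def iter_jobs(jsonl: str):
    """Yield (text, output_path) pairs from JSONL text, skipping blank lines."""
    for line in jsonl.splitlines():
        if line.strip():
            job = json.loads(line)
            yield job["text"], job["output"]

# Hypothetical two-job batch; the real field names may differ per script.
batch = '{"text": "Hello!", "output": "hello.wav"}\n{"text": "[excited] Great news!", "output": "news.wav"}'
jobs = list(iter_jobs(batch))
```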
Pre-converted MLX weights are on Hugging Face under `appautomaton`.
Use `mlx_speech.tts.load("alias")` or `mlx_speech.tts.load("appautomaton/repo-name")` to load them.
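Accepting an alias, a full repo id, or a local path implies a small resolution step. A sketch of how that lookup could work (the precedence order is an assumption, and the alias table is abridged):

```python
from pathlib import Path

# Abridged alias table from the README.
ALIASES = {
    "fish-s2-pro": "appautomaton/fishaudio-s2-pro-8bit-mlx",
    "vibevoice": "appautomaton/vibevoice-mlx",
    "cohere-asr": "appautomaton/cohere-asr-mlx",
}

def resolve_model(name: str) -> str:
    """Map an alias to its HF repo; pass through repo ids and local paths."""
    if name in ALIASES:
        return ALIASES[name]
    if "/" in name or Path(name).exists():
        return name
    raise ValueError(f"unknown model alias: {name}")
```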
| Alias | HF Repo | Quant |
|---|---|---|
| `fish-s2-pro` | `fishaudio-s2-pro-8bit-mlx` | int8 |
| `vibevoice` | `vibevoice-mlx` | int8 |
| `longcat` | `longcat-audiodit-3.5b-8bit-mlx` | int8 |
| `moss-local` | `openmoss-tts-local-mlx` | int8 |
| `moss-ttsd` | `openmoss-ttsd-mlx` | int8 |
| `moss-sound-effect` | `openmoss-sound-effect-mlx` | int4 |
| `step-audio` | `step-audio-editx-8bit-mlx` | int8 |
| `cohere-asr` | `cohere-asr-mlx` | int8 |
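The Quant column describes how the weights are stored: low-bit integers plus a floating-point scale, expanded back to floats at inference time. As a rough illustration of symmetric int8 quantization (not MLX's actual grouped scheme):

```python
def quantize_int8(weights):
    """Scale so the largest |w| maps to 127, then round each weight to an int."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w; max error is about scale / 2
```

Each int8 weight takes one byte instead of four, roughly quartering memory; int4 halves that again at the cost of coarser rounding.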
Convert from upstream source weights:

```bash
python scripts/convert/fish_s2_pro.py
python scripts/convert/longcat_audiodit.py
python scripts/convert/vibevoice.py
python scripts/convert/moss_local.py
python scripts/convert/moss_ttsd.py
python scripts/convert/cohere_asr.py
```

Each model family has a doc covering behavior, flags, and known limitations:
- Fish S2 Pro
- LongCat AudioDiT
- MossTTSLocal
- MOSS-TTSD
- MOSS-SoundEffect
- VibeVoice
- Step-Audio-EditX
- CohereASR
```bash
git clone https://github.com/appautomaton/mlx-speech.git
cd mlx-speech
uv sync
uv run pytest tests/unit/
uv run ruff check .
```

```
mlx-speech/
  src/mlx_speech/   library code
  scripts/          conversion, generation, eval, and audit entry points
  models/           local checkpoints (not in git)
  tests/            unit, checkpoint, runtime, integration tests
  docs/             model-family behavior guides
```
MIT — see LICENSE