---
title: MOSS-Speech Demo
sdk: gradio
sdk_version: 5.47.2
app_file: app.py
python_version: '3.12'
pinned: true
short_description: True Speech-to-Speech Language Model
---
# MOSS-Speech: Towards True Speech-to-Speech Models Without Text Guidance
Read the Chinese version.
## Introduction
Spoken dialogue systems often rely on cascaded pipelines that transcribe, process, and resynthesize speech, which limits expressivity and discards paralinguistic cues. MOSS-Speech directly understands and generates speech without relying on text intermediates, enabling end-to-end speech interaction while preserving tone, prosody, and emotion.
Our approach combines a modality-based layer-splitting architecture with a frozen pre-training strategy, leveraging pretrained text LLMs while extending them with native speech capabilities. Experiments show state-of-the-art results in spoken question answering and competitive speech-to-speech performance compared with text-guided systems.
Check out the system's demonstration video.
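
For a concrete picture of the layer-splitting idea described above, here is a minimal, illustrative PyTorch sketch. It is not the MOSS-Speech implementation; every class, attribute, and layer size in it is an assumption. A shared pretrained text backbone stays frozen, while small modality-specific layer stacks handle speech and text separately.

```python
# Illustrative sketch only (NOT the official MOSS-Speech code): a frozen,
# pretrained text backbone is shared across modalities, while separate
# modality-specific layers process text and speech hidden states.
import torch
import torch.nn as nn


class ModalitySplitLM(nn.Module):
    def __init__(self, backbone_layers, text_layers, speech_layers):
        super().__init__()
        self.backbone = nn.ModuleList(backbone_layers)      # pretrained text LLM blocks
        self.text_branch = nn.ModuleList(text_layers)       # text-specific blocks
        self.speech_branch = nn.ModuleList(speech_layers)   # speech-specific blocks
        # "Frozen pre-training": keep the text backbone fixed so its reasoning
        # ability is preserved while the speech branch is trained.
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, hidden_states, modality="speech"):
        for block in self.backbone:
            hidden_states = block(hidden_states)
        branch = self.speech_branch if modality == "speech" else self.text_branch
        for block in branch:
            hidden_states = block(hidden_states)
        return hidden_states


# Toy instantiation: plain linear layers stand in for transformer blocks.
model = ModalitySplitLM(
    backbone_layers=[nn.Linear(16, 16) for _ in range(2)],
    text_layers=[nn.Linear(16, 16)],
    speech_layers=[nn.Linear(16, 16)],
)
out = model(torch.randn(1, 4, 16), modality="speech")
```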
## Key Features
- True Speech-to-Speech Modeling: No text guidance required.
- Layer-Splitting Architecture: Integrates modality-specific layers on top of pretrained text LLM backbones.
- Frozen Pre-Training Strategy: Preserves LLM reasoning while enhancing speech understanding and generation.
- State-of-the-Art Performance: Excels in spoken question answering and speech-to-speech tasks.
- Expressive & Efficient: Maintains paralinguistic cues often lost in cascaded pipelines, such as tone, emotion, and prosody.
## Repository Contents
- `gradio_demo.py`: Gradio-based web demo script for quickly experiencing the speech-to-speech functionality.
- `generation.py`: Core generation script for producing output speech from input speech, suitable for inference and batch processing.
## Installation
```bash
# Clone the repository
git clone https://github.com/OpenMOSS/MOSS-Speech
cd MOSS-Speech

# Install dependencies
pip install -r requirements.txt
```
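
If you want to fetch the released weights ahead of time (for example, on a machine that will later run offline), `huggingface_hub` can download the model repository referenced in the citation below. This is an optional convenience sketch; whether the demo scripts expect a local path or download the weights themselves is not specified here, and the `local_dir` name is an arbitrary choice.

```python
# Optional: pre-download the released weights from the Hugging Face Hub.
# The repo id comes from the citation below; local_dir is an arbitrary choice.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="fnlp/MOSS-Speech", local_dir="./MOSS-Speech-weights")
print(f"Model files downloaded to: {local_path}")
```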
## Usage
Launch the web demo:

```bash
python3 gradio_demo.py
```
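
Gradio serves the demo locally, by default at http://127.0.0.1:7860. If you prefer to script against the running demo instead of using the browser UI, the `gradio_client` package that ships with Gradio can connect to it. The sketch below only connects and prints the available endpoints, since the exact endpoint names and argument layout defined in `app.py` are not documented here.

```python
# Sketch: connect to the locally running demo and inspect its API.
# Endpoint names and inputs depend on how app.py builds the interface,
# so inspect them with view_api() before calling client.predict(...).
from gradio_client import Client

client = Client("http://127.0.0.1:7860")  # Gradio's default local address
client.view_api()                         # prints the demo's callable endpoints
```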
## License
- The code in this repository is released under the Apache 2.0 license.
## Acknowledgements
- Qwen: We use Qwen3-8B-Instruct as the base model.
- We thank an anonymous colleague for Character Voice!
## Citation
If you use this repository or model in your research, please cite:
```bibtex
@article{moss_speech2025,
  title={MOSS-Speech: Towards True Speech-to-Speech Models Without Text Guidance},
  author={SLM Team},
  institution={Shanghai Innovation Institute, Fudan University, MOSI},
  year={2025},
  note={Official implementation available at https://huggingface.co/fnlp/MOSS-Speech}
}
```
or
```bibtex
@misc{moss_speech2025,
  author       = {SLM Team},
  title        = {MOSS-Speech: Towards True Speech-to-Speech Models Without Text Guidance},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/OpenMOSS/MOSS-Speech}},
}
```