MOSS Transcribe Diarize: Accurate Transcription with Speaker Diarization
Abstract
A unified multimodal large language model for end-to-end speaker-attributed, time-stamped transcription with extended context window and strong generalization across benchmarks.
Speaker-Attributed, Time-Stamped Transcription (SATS) aims to transcribe what is said and to precisely determine when each speaker speaks, which is particularly valuable for meeting transcription. Existing SATS systems rarely adopt an end-to-end formulation and are further constrained by limited context windows, weak long-range speaker memory, and the inability to output timestamps. To address these limitations, we present MOSS Transcribe Diarize, a unified multimodal large language model that jointly performs Speaker-Attributed, Time-Stamped Transcription in an end-to-end paradigm. Trained on extensive real, in-the-wild data and equipped with a 128k context window for inputs of up to 90 minutes, MOSS Transcribe Diarize scales well and generalizes robustly. Across comprehensive evaluations, it outperforms state-of-the-art commercial systems on multiple public and in-house benchmarks.
Community
MOSS Transcribe Diarize 🎙️
We introduce MOSS Transcribe Diarize, a unified multimodal model for Speaker-Attributed, Time-Stamped Transcription (SATS).
- End-to-end SATS in a single pass (transcription + speaker attribution + timestamps)
- 128k context window for up to ~90-minute audio without chunking (strong long-range speaker memory)
- Trained on extensive in-the-wild conversations + controllable simulated mixtures (robust to overlap/noise/domain shift)
- Strong results on AISHELL-4 / Podcast / Movies benchmarks (best cpCER / Δcp among evaluated systems)
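The cpCER metric cited above (concatenated minimum-permutation character error rate) scores speaker-attributed transcripts without assuming the hypothesis uses the same speaker labels as the reference: each speaker's text is concatenated, and the character error rate is minimized over all mappings of hypothesis speakers to reference speakers. A minimal toy sketch of the idea (not the paper's evaluation code; the brute-force permutation search is only practical for a handful of speakers):

```python
from itertools import permutations

def edit_distance(a: str, b: str) -> int:
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cp_cer(ref_by_speaker: dict, hyp_by_speaker: dict) -> float:
    """Concatenate each speaker's text, then take the speaker
    permutation that yields the fewest character errors."""
    refs = list(ref_by_speaker.values())
    hyps = list(hyp_by_speaker.values())
    # Pad with empty strings so both sides have the same speaker count.
    n = max(len(refs), len(hyps))
    refs += [""] * (n - len(refs))
    hyps += [""] * (n - len(hyps))
    total_ref = sum(len(r) for r in refs) or 1
    best = min(
        sum(edit_distance(r, h) for r, h in zip(refs, perm))
        for perm in permutations(hyps)
    )
    return best / total_ref

# Toy example: the hypothesis swaps the speaker labels but the text is right.
ref = {"S1": "hello there", "S2": "good morning"}
hyp = {"A": "good morning", "B": "hello there"}
print(cp_cer(ref, hyp))  # 0.0: the best permutation realigns the labels
```

This label-invariance is what makes cpCER suitable for SATS: a system is not penalized for naming speakers differently, only for attributing text to the wrong person or transcribing it incorrectly.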
Paper: [2601.01554] MOSS Transcribe Diarize: Accurate Transcription with Speaker Diarization
Homepage: https://mosi.cn/models/moss-transcribe-diarize
Online Demo: https://moss-transcribe-diarize-demo.mosi.cn
Very cool & useful work! Is the model going to be open source?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Train Short, Infer Long: Speech-LLM Enables Zero-Shot Streamable Joint ASR and Diarization on Long Audio (2025)
- JoyVoice: Long-Context Conditioning for Anthropomorphic Multi-Speaker Conversational Synthesis (2025)
- Context-Aware Whisper for Arabic ASR Under Linguistic Varieties (2025)
- Toward Conversational Hungarian Speech Recognition: Introducing the BEA-Large and BEA-Dialogue Datasets (2025)
- VALLR-Pin: Uncertainty-Factorized Visual Speech Recognition for Mandarin with Pinyin Guidance (2025)
- VocalBench-zh: Decomposing and Benchmarking the Speech Conversational Abilities in Mandarin Context (2025)
- RosettaSpeech: Zero-Shot Speech-to-Speech Translation from Monolingual Data (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend