Robust Speech Recognition via Large-Scale Weak Supervision
Paper: arXiv:2212.04356
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
Whisper was proposed in the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford et al. from OpenAI. The original code repository can be found at https://github.com/openai/whisper.
Whisper large-v3 has the same architecture as the previous large models, except for the following minor differences: