# Whisper Tiny PT
This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.6077
- WER: 29.9844
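
The card does not yet include usage instructions; a minimal inference sketch using the `transformers` pipeline API is shown below. The hub id is a placeholder, not the actual repository name for this checkpoint:

```python
# Minimal inference sketch; "your-username/whisper-tiny-pt" is a placeholder hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-pt",  # placeholder
)

# The pipeline decodes the file and resamples it to the model's expected
# 16 kHz sampling rate (requires ffmpeg).
print(asr("sample.wav")["text"])
```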
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
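
The training script itself is not included in this card. As a rough, non-authoritative sketch, the values above map onto `transformers`' `Seq2SeqTrainingArguments` as follows (`output_dir` is a placeholder, and the use of `Seq2SeqTrainer` is an assumption):

```python
# Sketch only: assumes the standard Seq2SeqTrainer setup; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-pt",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```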
 
### Training results

| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 0.4143 | 1.04 | 500 | 0.5325 | 32.7399 | 
| 0.2693 | 3.03 | 1000 | 0.4718 | 29.4867 | 
| 0.1724 | 5.01 | 1500 | 0.4758 | 28.7218 | 
| 0.0849 | 7.0 | 2000 | 0.5070 | 29.2211 | 
| 0.0659 | 8.04 | 2500 | 0.5223 | 29.3169 | 
| 0.0539 | 10.03 | 3000 | 0.5402 | 30.1458 | 
| 0.0376 | 12.02 | 3500 | 0.5755 | 29.9995 | 
| 0.0217 | 14.0 | 4000 | 0.6067 | 29.6565 | 
| 0.0168 | 15.04 | 4500 | 0.6082 | 29.8162 | 
| 0.0205 | 17.03 | 5000 | 0.6077 | 29.9844 | 
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
 
## Evaluation results

| Dataset | Split | WER (self-reported) |
|---|---|---|
| Common Voice 11.0 | test | 29.110 |
| google/fleurs | test | 26.360 |
| mozilla-foundation/common_voice_9_0 | test | 28.680 |
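
These WER figures are self-reported. A rough sketch for re-scoring the model on a few Common Voice samples is below; the hub id and the `pt` language subset are assumptions (the card does not state the language), the Common Voice datasets are gated on the Hub, and no text normalization is applied, so the result will not match the reported numbers exactly:

```python
# Sketch only: the hub id and "pt" subset are assumptions; Common Voice is gated,
# so you must accept its terms on the Hub and be logged in.
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="your-username/whisper-tiny-pt")
wer = evaluate.load("wer")

ds = load_dataset(
    "mozilla-foundation/common_voice_11_0", "pt", split="test", streaming=True
)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in ds.take(16):  # a handful of samples, for illustration only
    predictions.append(asr(sample["audio"])["text"])
    references.append(sample["sentence"])

# wer.compute returns a fraction; multiply by 100 for a percentage.
print(f"WER: {100 * wer.compute(predictions=predictions, references=references):.2f}")
```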