This model was built to shorten text before it is injected into LLM prompts, reducing API costs. It achieves very high compression (7x+), meaning the compressed text sent to your LLM provider is roughly 7 times smaller than the original. Recommended generation kwargs (see the usage sketch below the list):
- num_beams=2 or 3
- no_repeat_ngram_size=2
- min_length=20
- max_new_tokens=500, or as high as you can tolerate; at roughly 4 characters per token, 500 tokens is about 2000 characters, and anything beyond that is clipped
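
A minimal usage sketch with the transformers library, applying the kwargs above. The model name is taken from this card; the input text is a placeholder, and loading the model through the standard summarization pipeline is an assumption about how you would call it, not something prescribed by this card.

```python
from transformers import pipeline

# Load the model as a standard summarization pipeline.
compressor = pipeline(
    "summarization",
    model="shorecode/t5-efficient-tiny-summarizer-general-purpose-v3",
)

# Placeholder: the text you would otherwise inject verbatim into your LLM prompt.
long_prompt_context = "..."

# Recommended generation kwargs from this card.
result = compressor(
    long_prompt_context,
    num_beams=2,
    no_repeat_ngram_size=2,
    min_length=20,
    max_new_tokens=500,
)

compressed = result[0]["summary_text"]
print(compressed)
```

The compressed string can then be substituted for the original context in your prompt; the generation kwargs are forwarded to the underlying generate call.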

Model tree for shorecode/t5-efficient-tiny-summarizer-general-purpose-v3
- Base model: google/t5-efficient-tiny-nh8
- Dataset used to train shorecode/t5-efficient-tiny-summarizer-general-purpose-v3: shorecode/summary-collection-60k-rows
- Spaces using shorecode/t5-efficient-tiny-summarizer-general-purpose-v3: 1
Evaluation results

All metrics are self-reported on shorecode/summary-collection-60k-rows.

| Metric | Value |
|---|---|
| F1 score | 0.290 |
| Faithfulness (facebook/bart-large-cnn) | 1.710 |
| Summarization compression | 7.520 |
| Summarization coverage | 0.960 |
| Summarization density | 8.680 |
| ROUGE-L precision | 0.590 |
| ROUGE-L recall | 0.310 |
| ROUGE-L f-measure | 0.410 |
| ROUGE-1 precision | 0.630 |
| ROUGE-1 recall | 0.330 |
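
For context on the compression figure above (which matches the 7x+ claim), a common definition of summarization compression is source length divided by summary length, measured in tokens. A minimal sketch under that assumed definition; the card does not specify the tokenizer, so using the model's own tokenizer here is an assumption.

```python
from transformers import AutoTokenizer

# Assumption: compression = (source token count) / (summary token count),
# measured with the model's own tokenizer (not specified by this card).
tokenizer = AutoTokenizer.from_pretrained(
    "shorecode/t5-efficient-tiny-summarizer-general-purpose-v3"
)

def compression_ratio(source: str, summary: str) -> float:
    src_tokens = tokenizer.encode(source)
    sum_tokens = tokenizer.encode(summary)
    return len(src_tokens) / len(sum_tokens)
```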