This model was built to compress text before it is injected into LLM prompts, reducing API token costs.

It achieves very high compression (7x and above): the compressed text sent to your LLM provider is roughly one seventh the size of the original, cutting input-token costs proportionally. Recommended generation kwargs:

  • num_beams=2 or 3
  • no_repeat_ngram_size=2
  • min_length=20
  • max_new_tokens=500, or as high as you can tolerate; at roughly 4 characters per token, 500 tokens is about 2000 characters of output, and anything beyond that is clipped. A usage sketch follows this list.
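
A minimal usage sketch with the Hugging Face transformers library, applying the kwargs above. The `summarize:` task prefix and the placeholder input text are assumptions carried over from standard T5 usage, not taken from this card:

```python
# pip install transformers sentencepiece torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "shorecode/t5-efficient-tiny-summarizer-general-purpose-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The text you would otherwise inject into an LLM prompt (placeholder).
long_context = "...your long context here..."

# "summarize: " is the standard T5 task prefix; this checkpoint may not require it.
inputs = tokenizer("summarize: " + long_context, return_tensors="pt", truncation=True)

# Recommended kwargs from the list above.
summary_ids = model.generate(
    **inputs,
    num_beams=2,             # or 3
    no_repeat_ngram_size=2,
    min_length=20,
    max_new_tokens=500,      # ~4 chars/token -> output capped near 2000 characters
)
compressed = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# Rough check of the advertised compression ratio (7x+ on typical inputs).
print(f"{len(long_context) / max(len(compressed), 1):.1f}x smaller")
print(compressed)
```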
Model training report: https://api.wandb.ai/links/shorecode-shorecode-llc/6udfudmr

Model size: 15.6M params · Tensor type: F32 · Format: Safetensors

Model tree for shorecode/t5-efficient-tiny-summarizer-general-purpose-v3

  • Quantized versions of this model: 2

  • Dataset used to train this model: shorecode/summary-collection-60k-rows

  • Spaces using this model: 1

Evaluation results

All metrics are self-reported on shorecode/summary-collection-60k-rows; a sketch for reproducing the ROUGE scores follows this list.

  • f1 score: 0.290
  • Faithfulness (facebook/bart-large-cnn): 1.710
  • Summarization compression: 7.520
  • Summarization coverage: 0.960
  • Summarization density: 8.680
  • rougeL precision: 0.590
  • rougeL recall: 0.310
  • rougeL fmeasure: 0.410
  • rouge1 precision: 0.630
  • rouge1 recall: 0.330
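
A minimal sketch of how per-metric ROUGE precision, recall, and f-measure can be computed with the rouge-score package; the reference and prediction strings are placeholders, not data from this card:

```python
# pip install rouge-score
from rouge_score import rouge_scorer

# Placeholders; in practice these come from shorecode/summary-collection-60k-rows
# references and the model's generated summaries.
reference = "the full reference summary"
prediction = "the model generated summary"

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)

for name, s in scores.items():
    # Each Score carries precision, recall, and f-measure, matching the
    # three values reported per ROUGE metric above.
    print(f"{name}: P={s.precision:.2f} R={s.recall:.2f} F={s.fmeasure:.2f}")
```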