---
license: mit
datasets:
- shorecode/summary-collection-200k-rows
language:
- en
base_model:
- google/t5-efficient-tiny-nh8
library_name: transformers
tags:
- summary
- summarizer
widget:
- text: Model training
  output:
    url: Screenshot_20251104_204645.png
metrics:
- f1
- rouge
- extractiveness
model-index:
- name: t5-efficient-tiny-summarizer-general-purpose-v2
  results:
  - task:
      type: Summarization
    dataset:
      name: shorecode/summary-collection-60k-rows
      type: shorecode/summary-collection-60k-rows
    metrics:
    - name: f1 Score
      type: f1
      value: 0.29
    - name: Faithfulness (facebook/bart-large-cnn)
      type: facebook/bart-large-cnn
      value: 1.71
    - name: Summarization Compression
      type: Lighteval extractiveness
      value: 7.52
    - name: Summarization Coverage
      type: Lighteval extractiveness
      value: 0.96
    - name: Summarization Density
      type: Lighteval extractiveness
      value: 8.68
    - name: rougeL precision
      type: Lighteval
      value: 0.59
    - name: rougeL recall
      type: Lighteval
      value: 0.31
    - name: rougeL fmeasure
      type: Lighteval
      value: 0.41
    - name: rouge1 precision
      type: Lighteval
      value: 0.63
    - name: rouge1 recall
      type: Lighteval
      value: 0.33
    - name: rouge1 fmeasure
      type: Lighteval
      value: 0.44
---
# This model shortens text injected into LLM prompts to reduce API costs
Very high compression (7x+): the text sent to your LLM provider is roughly one-seventh the size of the original.
Recommended generation kwargs (see the usage sketch below):
- num_beams=2 or 3
- no_repeat_ngram_size=2
- min_length=20
- max_new_tokens=500, or as high as you can tolerate (4 × 500 ≈ 2,000 characters); anything after 2,000 characters is clipped
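A minimal usage sketch with these kwargs, assuming the standard transformers seq2seq API; the repo id is taken from the model-index name above and may need adjusting:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id, based on the model-index name above.
model_id = "shorecode/t5-efficient-tiny-summarizer-general-purpose-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt text to compress before sending it to your LLM provider.
# Depending on how the model was fine-tuned, a "summarize: " prefix may help.
text = "..."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    num_beams=2,              # 2 or 3, per the recommendation above
    no_repeat_ngram_size=2,
    min_length=20,
    max_new_tokens=500,       # roughly 2,000 characters of output
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```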
<Gallery />
https://api.wandb.ai/links/shorecode-shorecode-llc/6udfudmr