"""
Evaluate compression ratio of the tokenizer.
"""
from nanochat.tokenizer import get_tokenizer, RustBPETokenizer
from nanochat.dataset import parquets_iter_batched
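# The "compression ratio" reported below is UTF-8 bytes per token: each sample text is
# encoded, checked to round-trip exactly through decode(), and scored as
# len(text_bytes) / len(tokens). Higher is better (each token covers more raw bytes);
# e.g. a hypothetical 1,000-byte document encoded into 250 tokens scores 4.0 bytes/token.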
# Random text I got from a random website this morning
news_text = r"""
(Washington, D.C., July 9, 2025)- Yesterday, Mexico’s National Service of Agro-Alimentary Health, Safety, and Quality (SENASICA) reported a new case of New World Screwworm (NWS) in Ixhuatlan de Madero, Veracruz in Mexico, which is approximately 160 miles northward of the current sterile fly dispersal grid, on the eastern side of the country and 370 miles south of the U.S./Mexico border. This new northward detection comes approximately two months after northern detections were reported in Oaxaca and Veracruz, less than 700 miles away from the U.S. border, which triggered the closure of our ports to Mexican cattle, bison, and horses on May 11, 2025.
While USDA announced a risk-based phased port re-opening strategy for cattle, bison, and equine from Mexico beginning as early as July 7, 2025, this newly reported NWS case raises significant concern about the previously reported information shared by Mexican officials and severely compromises the outlined port reopening schedule of five ports from July 7-September 15. Therefore, in order to protect American livestock and our nation’s food supply, Secretary Rollins has ordered the closure of livestock trade through southern ports of entry effective immediately.
“The United States has promised to be vigilant — and after detecting this new NWS case, we are pausing the planned port reopening’s to further quarantine and target this deadly pest in Mexico. We must see additional progress combatting NWS in Veracruz and other nearby Mexican states in order to reopen livestock ports along the Southern border,” said U.S. Secretary of Agriculture Brooke L. Rollins. “Thanks to the aggressive monitoring by USDA staff in the U.S. and in Mexico, we have been able to take quick and decisive action to respond to the spread of this deadly pest.”
""".strip()
# Random Korean text (to test non-English compression)
korean_text = r"""
정직한 사실 위에, 공정한 시선을 더하다
Herald Korea Times
헤럴드코리아타임즈는 정치, 경제, 사회, 문화 등 한국 사회 전반의 주요 이슈를 심도 있게 다루는 종합 온라인 신문사입니다.
우리는 단순히 뉴스를 전달하는 것이 아니라, 사실(Fact)에 기반한 예측적 시각을 균형 있게 조명하며, 독자 여러분이 스스로 판단할 수 있는 ‘정보의 균형’을 제공합니다.
한국 언론의 오랜 문제로 지적되어 온 정치적 편향, 이념적 왜곡에서 벗어나
오직 정직함과 공정함을 원칙으로 삼는 언론을 지향합니다.
어느 한쪽의 주장만을 확대하거나 감추지 않고,
**모든 쟁점에 대해 ‘무엇이 쟁점인지’, ‘누가 무엇을 주장하는지’, ‘사실은 무엇인지’**를 명확히 전달하는 데 집중합니다.
""".strip()
# Random piece of code
code_text = r"""
class BasicTokenizer(Tokenizer):
    def __init__(self):
        super().__init__()
    def train(self, text, vocab_size, verbose=False):
        assert vocab_size >= 256
        num_merges = vocab_size - 256
        # input text preprocessing
        text_bytes = text.encode("utf-8") # raw bytes
        ids = list(text_bytes) # list of integers in range 0..255
        # iteratively merge the most common pairs to create new tokens
        merges = {} # (int, int) -> int
        vocab = {idx: bytes([idx]) for idx in range(256)} # int -> bytes
        for i in range(num_merges):
            # count up the number of times every consecutive pair appears
            stats = get_stats(ids)
            # find the pair with the highest count
            pair = max(stats, key=stats.get)
            # mint a new token: assign it the next available id
            idx = 256 + i
            # replace all occurrences of pair in ids with idx
            ids = merge(ids, pair, idx)
            # save the merge
            merges[pair] = idx
            vocab[idx] = vocab[pair[0]] + vocab[pair[1]]
            # prints
            if verbose:
                print(f"merge {i+1}/{num_merges}: {pair} -> {idx} ({vocab[idx]}) had {stats[pair]} occurrences")
""".strip()
math_text = r"""
\documentclass[12pt]{article}
\usepackage{amsmath,amsthm,amssymb}
\usepackage[margin=1in]{geometry}
\newtheorem{theorem}{Theorem}
\newtheorem*{remark}{Remark}
\begin{document}
\begin{center}
{\Large A Cute Identity: The Sum of Cubes is a Square}
\end{center}
\begin{theorem}
For every integer $n \ge 1$,
\[
\sum_{k=1}^{n} k^{3} \;=\; \left(\frac{n(n+1)}{2}\right)^{2}.
\]
\end{theorem}
\begin{proof}[Proof 1 (Induction)]
Let $S(n) = \sum_{k=1}^{n} k^3$. For $n=1$, $S(1)=1=(1\cdot 2/2)^2$, so the base case holds.
Assume $S(n)=\big(\tfrac{n(n+1)}{2}\big)^2$ for some $n\ge 1$.
Then
\[
S(n+1)
= S(n) + (n+1)^3
= \left(\frac{n(n+1)}{2}\right)^2 + (n+1)^3.
\]
Factor out $(n+1)^2$:
\[
S(n+1)
= (n+1)^2\left( \frac{n^2}{4} + (n+1) \right)
= (n+1)^2\left( \frac{n^2 + 4n + 4}{4} \right)
= (n+1)^2\left( \frac{(n+2)^2}{4} \right).
\]
Thus
\[
S(n+1)=\left(\frac{(n+1)(n+2)}{2}\right)^2,
\]
which matches the claimed formula with $n$ replaced by $n+1$. By induction, the identity holds for all $n\ge 1$.
\end{proof}
\begin{proof}[Proof 2 (Algebraic telescoping)]
Recall the binomial identity
\[
(k+1)^4 - k^4 = 4k^3 + 6k^2 + 4k + 1.
\]
Summing both sides from $k=0$ to $n$ telescopes:
\[
(n+1)^4 - 0^4
= \sum_{k=0}^{n}\big(4k^3 + 6k^2 + 4k + 1\big)
= 4\sum_{k=1}^{n}k^3 + 6\sum_{k=1}^{n}k^2 + 4\sum_{k=1}^{n}k + (n+1).
\]
Using the standard sums
\[
\sum_{k=1}^{n}k = \frac{n(n+1)}{2}
\quad\text{and}\quad
\sum_{k=1}^{n}k^2 = \frac{n(n+1)(2n+1)}{6},
\]
solve for $\sum_{k=1}^{n}k^3$ to get
\[
\sum_{k=1}^{n}k^3 = \left(\frac{n(n+1)}{2}\right)^2.
\]
\end{proof}
\begin{remark}
Geometrically, the identity says: ``adding up $1^3,2^3,\dots,n^3$ builds a perfect square''---namely the square of the $n$th triangular number. This is why one sometimes calls it the \emph{sum-of-cubes is a square} phenomenon.
\end{remark}
\end{document}
""".strip()
science_text = r"""
Photosynthesis is a photochemical energy transduction process in which light-harvesting pigment–protein complexes within the thylakoid membranes of oxygenic phototrophs absorb photons and initiate charge separation at the reaction center, driving the linear electron transport chain from water to NADP⁺ via photosystem II, the cytochrome b₆f complex, and photosystem I, concomitantly generating a trans-thylakoid proton motive force utilized by chloroplastic ATP synthase. The light-dependent reactions produce ATP and NADPH, which fuel the Calvin–Benson–Bassham cycle in the stroma, wherein ribulose-1,5-bisphosphate is carboxylated by ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) to form 3-phosphoglycerate, subsequently reduced and regenerated through a series of enzymatic steps, enabling net assimilation of CO₂ into triose phosphates and ultimately carbohydrates. This process is tightly regulated by photoprotective mechanisms, redox feedback, and metabolite flux, representing a central biochemical pathway coupling solar energy capture to the biosphere’s primary productivity.
""".strip()
# The tokenizer was trained on data from earlier shards, so it has seen this data
train_docs = next(parquets_iter_batched(split="train"))
train_text = "\n".join(train_docs)
val_docs = next(parquets_iter_batched(split="val"))
val_text = "\n".join(val_docs)
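# next(...) takes only the first batch of documents from each split, so this is a quick
# spot-check on a slice of the data rather than a full-corpus measurement.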
all_text = [
("news", news_text),
("korean", korean_text),
("code", code_text),
("math", math_text),
("science", science_text),
("fwe-train", train_text),
]
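# The "fwe-train"/"fwe-val" entries are the pretraining shards loaded above
# ("fwe" presumably being shorthand for the FineWeb-EDU dataset).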
if val_text:
    all_text.append(("fwe-val", val_text))
# Try out current default compared to GPT-2 and GPT-4 tokenizers
tokenizer_results = {}
vocab_sizes = {}
for tokenizer_name in ["gpt2", "gpt4", "ours"]:
    if tokenizer_name == "gpt2":
        tokenizer = RustBPETokenizer.from_pretrained("gpt2") # gpt-2 base model tokenizer
    elif tokenizer_name == "gpt4":
        tokenizer = RustBPETokenizer.from_pretrained("cl100k_base") # gpt-4 base model tokenizer
    else:
        tokenizer = get_tokenizer()
    vocab_sizes[tokenizer_name] = tokenizer.get_vocab_size()
    tokenizer_results[tokenizer_name] = {}
    for name, text in all_text:
        encoded = tokenizer.encode(text)
        decoded = tokenizer.decode(encoded)
        assert decoded == text
        encoded_bytes = text.encode('utf-8')
        ratio = len(encoded_bytes) / len(encoded)
        tokenizer_results[tokenizer_name][name] = {
            'bytes': len(encoded_bytes),
            'tokens': len(encoded),
            'ratio': ratio
        }
# ANSI color codes
GREEN = '\033[92m'
RED = '\033[91m'
RESET = '\033[0m'
# Print vocab sizes
print(f"\nVocab sizes:")
print(f"GPT-2: {vocab_sizes['gpt2']}")
print(f"GPT-4: {vocab_sizes['gpt4']}")
print(f"Ours: {vocab_sizes['ours']}")
def print_comparison(baseline_name, baseline_results, ours_results, all_text):
"""Print comparison table between baseline tokenizer and ours."""
print(f"\nComparison with {baseline_name}:")
print("=" * 95)
print(f"{'Text Type':<10} {'Bytes':<8} {baseline_name:<15} {'Ours':<15} {'Relative':<12} {'Better':<10}")
print(f"{'':10} {'':8} {'Tokens':<7} {'Ratio':<7} {'Tokens':<7} {'Ratio':<7} {'Diff %':<12}")
print("-" * 95)
for name, text in all_text:
baseline_data = baseline_results[name]
ours_data = ours_results[name]
# Calculate relative difference (positive means ours is better, negative means worse)
# Using tokens: fewer tokens is better, so we calculate (baseline_tokens - ours_tokens) / baseline_tokens
relative_diff = ((baseline_data['tokens'] - ours_data['tokens']) / baseline_data['tokens']) * 100
# Determine which has better compression (higher ratio = better)
if baseline_data['ratio'] > ours_data['ratio']:
baseline_color, ours_color = GREEN, RED
better = baseline_name
diff_color = RED
elif ours_data['ratio'] > baseline_data['ratio']:
baseline_color, ours_color = RED, GREEN
better = "Ours"
diff_color = GREEN
else:
baseline_color, ours_color = "", ""
better = "Tie"
diff_color = ""
print(f"{name:<10} {baseline_data['bytes']:<8} "
f"{baseline_color}{baseline_data['tokens']:<7}{RESET} "
f"{baseline_color}{baseline_data['ratio']:<7.2f}{RESET} "
f"{ours_color}{ours_data['tokens']:<7}{RESET} "
f"{ours_color}{ours_data['ratio']:<7.2f}{RESET} "
f"{diff_color}{relative_diff:+7.1f}%{RESET} "
f"{better:<10}")
# Print comparisons
print_comparison("GPT-2", tokenizer_results['gpt2'], tokenizer_results['ours'], all_text)
print_comparison("GPT-4", tokenizer_results['gpt4'], tokenizer_results['ours'], all_text)
# Log to report
from nanochat.report import get_report
lines = []
for baseline_name in ["GPT-2", "GPT-4"]:
    baseline_key = baseline_name.lower().replace('-', '')
    baseline_results = tokenizer_results[baseline_key]
    ours_results = tokenizer_results['ours']
    lines.append(f"### Comparison with {baseline_name}")
    lines.append("")
    lines.append("| Text Type | Bytes | " + baseline_name + " Tokens | " + baseline_name + " Ratio | Ours Tokens | Ours Ratio | Relative Diff % |")
    lines.append("|-----------|-------|--------------|--------------|-------------|------------|-----------------|")
    for name, text in all_text:
        baseline_data = baseline_results[name]
        ours_data = ours_results[name]
        relative_diff = ((baseline_data['tokens'] - ours_data['tokens']) / baseline_data['tokens']) * 100
        lines.append(f"| {name} | {baseline_data['bytes']} | {baseline_data['tokens']} | {baseline_data['ratio']:.2f} | {ours_data['tokens']} | {ours_data['ratio']:.2f} | {relative_diff:+.1f}% |")
    lines.append("")
report_markdown = "\n".join(lines)
get_report().log(section="Tokenizer evaluation", data=[
    report_markdown,
])
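# Usage sketch (the module path is an assumption, not stated in this file): run from the repo
# root, e.g. `python -m scripts.tok_eval`, after the tokenizer has been trained so that
# get_tokenizer() can load it.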