MonTok: A Suite of Monolingual Tokenizers
This is a set of monolingual tokenizers for 98 languages. For each language, there are Unigram, BPE, and SuperBPE tokenizers, ranging in vocabulary size from around 6k to over 200k.
Training Details
Training Data
All tokenizers are trained on samples of the data used to train the Goldfish language models.
The tokenizers were trained on either scaled or unscaled data, i.e., on data that either is or is not byte-premium-scaled.
We used the Byte Premium Tool to calculate byte premiums.
The base dataset size is 300MB. Unscaled tokenizers were trained on 300MB of data, while scaled tokenizers were trained on 300 times the byte premium MB of data.
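As a minimal illustration of this scaling (the byte premium value below is hypothetical; actual values come from the Byte Premium Tool):

# Hypothetical byte premium for one language
byte_premium = 1.4
unscaled_mb = 300                  # unscaled tokenizers: 300MB of text
scaled_mb = 300 * byte_premium     # scaled tokenizers: 300 * 1.4 = 420MB of text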
Training
The BPE and Unigram tokenizers were trained using the Hugging Face implementations and the trainers in the tokenizers package.
BPE tokenizers were trained with vocabulary sizes 6144, 8192, 16384, 32768, 49152, 65536, 81920, 98304, 114688, and 262144.
Unigram tokenizers were trained with vocabulary sizes 8192, 16384, 32768, 49152, 65536, 81920, 98304, and 114688; some of the smaller vocabulary sizes were too small for training to succeed, so they may be missing for some languages.
SuperBPE tokenizers were trained with vocabulary sizes 65536, 81920, 98304, 114688, 131072, and 262144. They were also trained with a range of transition points, i.e., the proportion of the original vocabulary that is retained during the second phase of training. We use transition points 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9; some transition points did not converge for some languages.
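As a rough sketch, a single BPE tokenizer at one of these vocabulary sizes could be trained with the tokenizers package as follows; the training file, pre-tokenizer, and special tokens here are placeholder assumptions, not the exact MonTok configuration.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer

# Hypothetical 300MB monolingual training file
files = ["afr_latn_300mb.txt"]

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = ByteLevel()

trainer = BpeTrainer(vocab_size=32768, special_tokens=["[UNK]"])
tokenizer.train(files, trainer)

tokenizer.save("bpe_afr_latn_32768_300mb_unscaled.json")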
After training the SuperBPE tokenizers, we manually fixed the decoder by setting the attribute to:
tokenizer['decoder'] = {
    "type": "ByteLevel",
    "add_prefix_space": True,
    "trim_offsets": True,
    "use_regex": True
}
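As an illustration, this fix can be applied to a saved tokenizer.json roughly as follows (the path is one example file from this repository):

import json

path = "superbpe_tokenizers/superbpe_afr_latn_114688_300mb_unscaled_0_8/tokenizer.json"

with open(path, "r", encoding="utf-8") as f:
    tokenizer = json.load(f)

# Replace the decoder with a ByteLevel decoder
tokenizer["decoder"] = {
    "type": "ByteLevel",
    "add_prefix_space": True,
    "trim_offsets": True,
    "use_regex": True,
}

with open(path, "w", encoding="utf-8") as f:
    json.dump(tokenizer, f, ensure_ascii=False)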
We found that without this fix, languages that use South and Southeast Asian scripts were tokenized into mostly UNKs. Thank you to Alisa Liu for helping us address this.
There are also 'optimal vocabulary' BPE tokenizers, for which we estimated the vocabulary size that would achieve a target compression rate. See the forthcoming preprint for more details.
How to Use
We recommend using the bpe_unscaled_tokenizers, unigram_unscaled_tokenizers, and superbpe_tokenizers directories.
For Subword Tokenizers
To download and load individual tokenizers:
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer
file_path = hf_hub_download(
    repo_id="catherinearnett/montok",
    filename="bpe_unscaled_tokenizers/bpe_afr_latn_114688_300mb_unscaled.json",
    repo_type="dataset"
)
tokenizer = Tokenizer.from_file(file_path)
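The loaded tokenizer can then be used to encode text, for example (the sentence below is only an illustration):

encoding = tokenizer.encode("Dit is 'n voorbeeldsin.")
print(encoding.tokens)
print(encoding.ids)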
The tokenizer names are in the format '{tokenizer type}_{lang code}_{vocab size}_{dataset size}_{bp scaling}.json'.
Tokenizer type is bpe, unigram, or superbpe. Lang code is the ISO 639-3 language code, followed by an underscore, followed by the ISO 15924 script code.
Vocab size is one of the sizes listed above. Dataset size is 300mb. BP scaling is unscaled or scaled. SuperBPE tokenizers are stored as directories whose names additionally include the transition point (e.g., _0_8 for a transition point of 0.8).
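For example, a small helper (hypothetical, not part of this release) that splits a BPE or Unigram tokenizer filename into these components:

def parse_montok_name(filename):
    # e.g. "bpe_afr_latn_114688_300mb_unscaled.json"
    stem = filename.removesuffix(".json")
    tok_type, lang, script, vocab, size, scaling = stem.split("_")
    return {
        "tokenizer_type": tok_type,       # bpe or unigram
        "lang_code": f"{lang}_{script}",  # ISO 639-3 + ISO 15924
        "vocab_size": int(vocab),
        "dataset_size": size,             # 300mb
        "bp_scaling": scaling,            # scaled or unscaled
    }

print(parse_montok_name("bpe_afr_latn_114688_300mb_unscaled.json"))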
For Superword Tokenizers
To download and load the tokenizer:
from huggingface_hub import hf_hub_download
from transformers import PreTrainedTokenizerFast
file_path = hf_hub_download(
    repo_id="catherinearnett/montok",
    filename="superbpe_tokenizers/superbpe_afr_latn_114688_300mb_unscaled_0_8/tokenizer.json",
    repo_type="dataset"
)
tokenizer = PreTrainedTokenizerFast(tokenizer_file=file_path)
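The loaded tokenizer can then be used like any other transformers tokenizer, for example (the sentence below is only an illustration):

ids = tokenizer("Dit is 'n voorbeeldsin.")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))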
How to Cite
@inproceedings{arnett2025crosslingual,
  author    = {Arnett, Catherine and Chang, Tyler A. and Biderman, Stella and Bergen, Benjamin K.},
  title     = {Explaining and Mitigating Crosslingual Tokenizer Inequities},
  booktitle = {Proceedings of the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}