Datasets:
- tokenizer_robustness_completion_stem_canonical
- tokenizer_robustness_completion_stem_character_deletion
- tokenizer_robustness_completion_stem_colloquial
- tokenizer_robustness_completion_stem_compounds
- tokenizer_robustness_completion_stem_diacriticized_styling
- tokenizer_robustness_completion_stem_double_struck
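A minimal sketch of how one of the listed configurations might be loaded with the Hugging Face `datasets` library. The repository ID below is a placeholder (an assumption, since the card does not name the hosting repo), and the split name may differ from what the dataset actually provides.

```python
# Sketch: load one tokenizer-robustness configuration with the `datasets` library.
# "ORG/tokenizer-robustness" is a placeholder repo ID, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset(
    "ORG/tokenizer-robustness",                          # placeholder repo ID (assumption)
    "tokenizer_robustness_completion_stem_canonical",    # one configuration from the list above
)

print(ds)            # inspect which splits are available
first_split = next(iter(ds))
print(ds[first_split][0])   # peek at the first example of the first split
```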