---
task_categories:
- text-generation
language:
- en
tags:
- pretrain
size_categories:
- 10B<n<100B
---
# Top 30B Token SlimPajama Subset Selected by the Cleanliness Rater

This repository contains the dataset described in the paper *Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models*.

Code: https://github.com/opendatalab/Meta-rater
## Dataset Description

This dataset contains the top 30B tokens from the SlimPajama-627B corpus, selected using the Cleanliness dimension of the PRRC (Professionalism, Readability, Reasoning, Cleanliness) framework. Each document in this subset was scored by a ModernBERT-based rater fine-tuned to assess the formatting, completeness, and absence of noise or irrelevant content in the text, and the subset was filtered by these scores.
- Source: SlimPajama-627B Annotated Dataset
- Selection: Top 30B tokens by PRRC-Cleanliness score
- Quality metric: Cleanliness (0–5 scale, see below)
- Annotation coverage: 100% of selected subset
## Dataset Statistics
- Total tokens: 30B (subset of SlimPajama-627B)
- Selection method: Top-ranked by PRRC-Cleanliness ModernBERT rater
- Domains: Same as SlimPajama (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
- Annotation: Each document has a cleanliness score (0–5)
## Cleanliness Quality Metric
Cleanliness evaluates the formatting, completeness, and absence of noise or irrelevant content in the text. Higher scores indicate well-formatted, complete, and clean data, while lower scores reflect noisy, incomplete, or poorly formatted content.
- 0–1: Serious or obvious issues affecting fluency or completeness
- 2–3: Some problems, but not seriously affecting reading
- 4–5: Minor or no problems; text is clean and well-formatted
Scores are assigned by a ModernBERT model fine-tuned on Llama-3.3-70B-Instruct annotations, as described in the Meta-rater paper.
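As a minimal sketch of how the score bands above could be applied when consuming the dataset, the snippet below keeps only documents in the "clean" band (score ≥ 4). The field name `cleanliness_score` is an assumption for illustration, not a documented schema:

```python
# Hypothetical filtering sketch; "cleanliness_score" is an assumed field name.

def filter_clean(documents, threshold=4):
    """Return only documents whose cleanliness score meets the threshold."""
    return [d for d in documents if d["cleanliness_score"] >= threshold]

docs = [
    {"text": "Well-formatted article ...", "cleanliness_score": 5},
    {"text": "Boilerplate and nav links ...", "cleanliness_score": 2},
    {"text": "Mostly clean page ...", "cleanliness_score": 4},
]

clean = filter_clean(docs)
print(len(clean))  # 2 of the 3 sample documents pass
```

The same predicate could be passed to a streaming `filter` if the data is loaded lazily, avoiding materializing the full subset in memory.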
## Annotation Process

1. Initial annotation: Llama-3.3-70B-Instruct rated 500k+ SlimPajama samples for cleanliness
2. Model training: ModernBERT was fine-tuned on these annotations
3. Scoring: all SlimPajama documents were scored by ModernBERT, and the top 30B tokens were selected
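The selection step above can be sketched as a greedy top-k over rater scores: take the highest-scored documents until a token budget is filled. This is an illustrative reconstruction, not the paper's exact implementation; the field names and toy budget are assumptions:

```python
# Illustrative sketch of score-ranked selection under a token budget.
# Field names ("score", "num_tokens") are assumptions for this example.

def select_top_tokens(documents, token_budget):
    """Greedily keep the highest-scored documents until the budget is filled."""
    selected = []
    total = 0
    # Rank by rater score, best first
    for doc in sorted(documents, key=lambda d: d["score"], reverse=True):
        if total + doc["num_tokens"] > token_budget:
            break
        selected.append(doc)
        total += doc["num_tokens"]
    return selected

corpus = [
    {"id": "a", "score": 4.8, "num_tokens": 1200},
    {"id": "b", "score": 2.1, "num_tokens": 900},
    {"id": "c", "score": 4.2, "num_tokens": 700},
]
top = select_top_tokens(corpus, token_budget=2000)
print([d["id"] for d in top])  # ['a', 'c']
```

For the real 627B-token corpus this would be run over sharded score files rather than an in-memory list, but the ranking-then-budget logic is the same.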
## Citation

If you use this dataset, please cite:

```bibtex
@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}
```
## License
This dataset is released under the same license as the original SlimPajama dataset. See the original SlimPajama repository for details.
## Contact
- Project Lead: Ren Ma (maren@pjlab.org.cn)
- Corresponding Author: Conghui He (heconghui@pjlab.org.cn)
- Issues: [GitHub Issues](https://github.com/opendatalab/Meta-rater/issues)
Made with ❤️ by the OpenDataLab team