v1_7 update

README.md (changed)
To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our [GitHub project page](https://github.com/allenai/dolma/tree/main/docs)!

**2024-04-17: Dolma v1.7 Release.** We have released an updated version of Dolma that we used to train our latest [OLMo 7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7) model.

**2024-04-15: License Change.** We have updated the license of Dolma to [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). Please see this [blog post](https://blog.allenai.org/making-a-switch-dolma-moves-to-odc-by-8f0e73852f44) for more information.
## Versions

At the moment, there are six versions of Dolma available:

| **Version** | **Default?** | **Release Date** | **Size** (gzip) | **Description** |
|--|:--:|--|--|--|
| `v1_7` | ✅ | 2024-04-15 | X.X TB | Used to train [OLMo-7B-v1.7](https://huggingface.co/allenai/OLMo-7b-v1.7). |
| `v1_6` | | 2024-01-31 | 5.4 TB | An update to v1.5 with some bug fixes. |
| `v1_6-sample` | | 2024-01-31 | 16.4 GB | A smaller sample of Dolma, with roughly 10 billion tokens. Useful for data exploration. |
| `v1_5` | | 2023-10-31 | 6.4 TB | The version of Dolma used to train [OLMo-1B](https://huggingface.co/allenai/OLMo-1B). Roughly 3 trillion tokens. |
| `v1_5-sample` | | 2023-10-31 | 2.9 TB | A sample of roughly 1.9 trillion tokens used to train [OLMo-7B](https://huggingface.co/allenai/OLMo-7B). |
| `v1` | | 2023-08-18 | 6.0 TB | The first version of Dolma. |
## Summary Statistics (v1.7)

| **Source** | **Provenance** | **New?** | **Documents** (millions) | **OLMo tokens** (billions) | **Sample Proportion** | **Cutoff Date** | **Processing** |
|--|--|--|--|--|--|--|--|
| Dolma's CC | [Common Crawl](https://commoncrawl.org/) via Dolma v1.6 | Updated | | 1,195.5 | 50% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
| Refined Web | [Refined Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Yes | | 456.4 | 100% | Feb 2023 | |
| StarCoder | [StarCoder](https://huggingface.co/blog/starcoder) | Yes | | 263.8 | 100% | May 2023 | No further processing |
| C4 | [C4](https://huggingface.co/datasets/c4) via Dolma v1.6 | Updated | | 138.4 | 50% | Apr 2019 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
| Reddit | [PushShift API](https://github.com/pushshift/api) | Updated | | 79.9 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
| Semantic Scholar | [S2AG/S2ORC](https://www.semanticscholar.org/product/api)/[peS2o](https://huggingface.co/datasets/allenai/peS2o) via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
| Project Gutenberg | [Project Gutenberg](https://www.gutenberg.org/) | No | 0.056 | 6.0 | 100% | Mar 2023 | Same as Dolma v1.6 |
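For a rough sense of relative composition, the per-source OLMo token counts listed above can be expressed as shares of their sum. This is illustrative arithmetic only: the table does not spell out how these counts interact with the sample proportions, so the shares should not be read as the exact makeup of the final training mix.

```python
# Per-source OLMo token counts (billions), exactly as listed in the v1.7 table.
listed_tokens = {
    "Dolma's CC": 1195.5,
    "Refined Web": 456.4,
    "StarCoder": 263.8,
    "C4": 138.4,
    "Reddit": 79.9,
    "Semantic Scholar": 57.2,
    "Project Gutenberg": 6.0,
}

total = sum(listed_tokens.values())
print(f"Sum of listed counts: {total:,.1f}B tokens")
for source, tokens in sorted(listed_tokens.items(), key=lambda kv: -kv[1]):
    print(f"{source:>18}: {tokens:>8.1f}B  ({tokens / total:.1%})")
```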
## Summary Statistics (v1.6)

| **Source** | **Doc Type** | **UTF-8 bytes** (GB) | **Documents** (millions) | **Unicode words** (billions) | **Llama tokens** (billions) |
|--|--|--|--|--|--|
| **Total** | | **11,519** | **4,367** | **2,318** | **3,059** |

(The size difference between `v1_6` and `v1_5` is due to a different set of metadata included in the files: we removed redundant metadata in `v1_6`.)

## Download
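Given the multi-terabyte size of the full releases, streaming through the `datasets` library is one convenient way to explore the data. The sketch below is a minimal example, assuming that the version names in the table above double as configuration names and that records expose `id`, `source`, and `text` fields; check the repository for the exact configurations it exposes.

```python
from datasets import load_dataset

# Streaming avoids downloading the full release up front. The configuration
# name "v1_7" is an assumption based on the version names listed above.
dolma = load_dataset("allenai/dolma", name="v1_7", split="train", streaming=True)

for i, doc in enumerate(dolma):
    # Field names ("id", "source", "text") are assumptions about the record
    # schema; inspect one record to confirm before relying on them.
    print(doc["id"], doc["source"], len(doc["text"]))
    if i >= 4:
        break
```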
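The underlying shards are distributed as gzip-compressed JSON Lines files (the Versions table reports gzip sizes), so a locally downloaded shard can also be inspected with just the standard library. This is a rough sketch: the path is a placeholder, and the exact field set varies by source and version.

```python
import gzip
import json

# Hypothetical path to a single downloaded shard; real file names depend on
# the version and source being inspected.
shard_path = "dolma/v1_7/example-shard.json.gz"

with gzip.open(shard_path, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        # Typical top-level fields include "id", "text", "source", and a
        # per-source "metadata" object; the exact set varies across versions.
        print(doc.get("id"), doc.get("source"), len(doc.get("text", "")))
        if i >= 4:  # peek at the first few records only
            break
```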