---
license: cc-by-nc-sa-4.0
language:
- en
annotations_creators:
- no-annotation
task_categories:
- text-generation
task_ids:
- language-modeling
size_categories:
- 10K<n<100K
---

This is the dataset for the paper *Compression Represents Intelligence Linearly*.

We find that LLMs' intelligence, as reflected by benchmark scores, correlates almost **linearly** with their ability to compress external text corpora. We measure intelligence along three key abilities: knowledge and commonsense, coding, and mathematical reasoning, and provide the corresponding datasets here, named cc, python, and arxiv_math respectively.

### Load the data

```python
from datasets import load_dataset

# Load one of the three subsets: "cc", "python", or "arxiv_math"
dataset = load_dataset("hkust-nlp/cpt", name="python")

# Inspect the first test example
print(dataset["test"][0])
```
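
The same call works for the other two subsets; a quick way to peek at all three (using the subset names cc, python, and arxiv_math from the description above) is:

```python
from datasets import load_dataset

# Iterate over the three subsets named in the dataset description
for name in ["cc", "python", "arxiv_math"]:
    subset = load_dataset("hkust-nlp/cpt", name=name)
    print(name, subset)  # shows the available splits and their sizes
```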

More details on compression evaluation are at our [github page](https://github.com/hkust-nlp/cpt).
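
For orientation, here is a rough, illustrative sketch of the kind of compression measurement the paper discusses: the average number of bits per character a causal language model assigns to a corpus. This is **not** the official evaluation script (that lives in the repo linked above); the model name `gpt2`, the `text` field name, the truncation length, and the per-token loss accounting below are assumptions made purely for the example.

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only; see the hkust-nlp/cpt repo for the actual evaluation code.
model_name = "gpt2"  # placeholder model, assumed for this example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

dataset = load_dataset("hkust-nlp/cpt", name="python")

total_bits, total_chars = 0.0, 0
for example in dataset["test"].select(range(10)):  # small sample, just for illustration
    text = example["text"]  # assumed field name; check print(dataset["test"][0]) above
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean next-token negative log-likelihood in nats,
    # averaged over the (n_tokens - 1) predicted positions
    n_predicted = enc["input_ids"].shape[1] - 1
    total_bits += out.loss.item() * n_predicted / math.log(2)
    total_chars += len(text)

print(f"bits per character: {total_bits / total_chars:.3f}")
```

Under a setup of this kind, the paper's observation is that better compression (lower bits per character) tracks higher benchmark scores almost linearly.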

### Citation

```
@xxxx
```