This dataset consists of the attack samples used for the paper "How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning".
We have two splits:
- The `fine-tuning attack`, which consists of selected samples coming from the **[fine-tuning set](https://huggingface.co/datasets/AISE-TUDelft/memtune-tuning_data)**
- The `pre-training attack`, which consists of selected samples coming from the Java subset of **[TheStack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2)**
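Assuming the attack samples are published as a standard Hub dataset, the two splits above could be loaded roughly as sketched below with the Hugging Face `datasets` library. The repository id and the exact split names used here are placeholders, not taken from this card; check the dataset's file listing for the real identifiers.

```python
# Hedged sketch: loading the two attack splits with the `datasets` library.
# The attack dataset's repository id and split names are assumptions.
from typing import Callable, Dict

# Split name -> dataset the attack samples were selected from (per the card).
SPLIT_SOURCES: Dict[str, str] = {
    "fine-tuning attack": "AISE-TUDelft/memtune-tuning_data",
    "pre-training attack": "bigcode/the-stack-v2 (Java subset)",
}

def make_split_loader(repo_id: str) -> Callable[[str], object]:
    """Return a loader for one split of `repo_id`.

    The `datasets` import is deferred so the sketch can be inspected
    without network access or `datasets` installed.
    """
    def load(split: str):
        from datasets import load_dataset  # pip install datasets
        return load_dataset(repo_id, split=split)
    return load
```

In use, `make_split_loader("<this-dataset-id>")("<split-name>")` would fetch one split; the deferred import keeps the snippet importable in environments without the library.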
We have different splits depending on the duplication rate of the samples: