Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ This is the SFT checkpoint used for the project [RLHFlow/Online-RLHF](https://gi
* **Authors**: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
* **Code**: https://github.com/RLHFlow/Online-RLHF

-The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on [RLHFlow/RLHFlow-SFT-Dataset-ver2](https://huggingface.co/datasets/RLHFlow/RLHFlow-SFT-Dataset-ver2) for 1 epoch. We use a global batch size of 128 and a learning rate of 2e-
+The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on [RLHFlow/RLHFlow-SFT-Dataset-ver2](https://huggingface.co/datasets/RLHFlow/RLHFlow-SFT-Dataset-ver2) for 1 epoch. We use a global batch size of 128 and a learning rate of 2e-5, where we pack the samples and split them into chunks of 8192 tokens. See more training details at https://github.com/RLHFlow/Online-RLHF/blob/main/sft/llama3-8b-it.yaml.
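For context, a minimal sketch of what a training config with the hyperparameters named in the added line might look like. The field names and the batch-size decomposition below are illustrative assumptions, not the actual contents of `sft/llama3-8b-it.yaml`; only the values (base model, dataset, 1 epoch, global batch size 128, learning rate 2e-5, packed 8192-token chunks) come from the README itself:

```yaml
# Hypothetical sketch of an SFT config; see the real file at
# https://github.com/RLHFlow/Online-RLHF/blob/main/sft/llama3-8b-it.yaml
# Field names are illustrative; only the values are taken from the README.
base_model: meta-llama/Meta-Llama-3-8B
datasets:
  - path: RLHFlow/RLHFlow-SFT-Dataset-ver2
num_epochs: 1
learning_rate: 2.0e-5
# Global batch size of 128; the split below is an assumption:
# micro_batch_size * gradient_accumulation_steps * num_gpus = 2 * 8 * 8 = 128
micro_batch_size: 2
gradient_accumulation_steps: 8
sample_packing: true   # pack samples, then split into fixed-length chunks
sequence_len: 8192     # chunk size in tokens
```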