Update README.md
README.md (CHANGED)
@@ -1,6 +1,7 @@
 ---
 license: apache-2.0
 base_model: mistralai/Mistral-7B-v0.1
+datasets: NeuralNovel/Neural-DPO
 tags:
 - generated_from_trainer
 model-index:
@@ -31,7 +32,8 @@ datasets:
     split: train
     type: chatml.intel
     format: "[INST] {instruction} [/INST]"
-    no_input_format: "[INST] {instruction} [/INST]"
+    no_input_format: "[INST] {instruction} [/INST]"
+
 dataset_prepared_path:
 val_set_size: 0.05
 output_dir: ./out
@@ -44,12 +46,12 @@ eval_sample_packing: false
 wandb_project:
 wandb_entity:
 wandb_watch:
-wandb_name:
+wandb_name:
 wandb_log_model:
 
 gradient_accumulation_steps: 4
 micro_batch_size: 2
-num_epochs:
+num_epochs: 1
 optimizer: adamw_bnb_8bit
 lr_scheduler: cosine
 learning_rate: 0.000005
@@ -72,7 +74,7 @@ warmup_steps: 10
 evals_per_epoch: 4
 eval_table_size:
 eval_max_new_tokens: 128
-saves_per_epoch:
+saves_per_epoch: 1
 debug:
 deepspeed:
 weight_decay: 0.0
@@ -89,7 +91,7 @@ special_tokens:
 
 # out
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the Neural-DPO dataset with laserRMT applied.
 
 ## Model description
 
@@ -117,7 +119,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
-- training_steps:
+- training_steps: 134
 
 ### Training results
 
@@ -127,5 +129,5 @@ The following hyperparameters were used during training:
 
 - Transformers 4.38.0.dev0
 - Pytorch 2.2.0+cu121
-- Datasets 2.17.
+- Datasets 2.17.0
 - Tokenizers 0.15.0
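The dataset added to the YAML front matter, NeuralNovel/Neural-DPO, is consumed from the `train` split named in the axolotl config. A minimal sketch of loading it outside of axolotl, using only the ids shown in the diff and the Datasets release pinned above (2.17.0); the held-out split here only approximates what `val_set_size: 0.05` produces, and the seed is arbitrary:

```python
from datasets import load_dataset

# Load the DPO dataset named in the config; "train" is the split listed
# under the axolotl `datasets:` entry.
dataset = load_dataset("NeuralNovel/Neural-DPO", split="train")

# Rough stand-in for the 5% evaluation split implied by val_set_size: 0.05
# (seed chosen arbitrarily; the card does not state one).
splits = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]

print(len(train_ds), len(eval_ds))
```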
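Both `format` and `no_input_format` wrap the instruction in Mistral's `[INST] ... [/INST]` tags, so inference prompts should follow the same template. A hypothetical usage sketch; the repo id below is a placeholder, since only the base model (mistralai/Mistral-7B-v0.1) is named in this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: substitute the actual fine-tuned model.
model_id = "your-namespace/your-neural-dpo-finetune"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Same template as the axolotl `format` / `no_input_format` fields.
prompt = "[INST] {instruction} [/INST]".format(
    instruction="Summarise the Neural-DPO dataset in one sentence."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# eval_max_new_tokens: 128 in the config; reused here as a reasonable cap.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```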
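The run pairs 8-bit AdamW (`adamw_bnb_8bit`) with a cosine schedule, 10 warmup steps, and a 5e-6 peak learning rate. A sketch of the equivalent PyTorch setup, assuming bitsandbytes and transformers supply the optimizer and schedule (axolotl wires this up internally, so this is illustrative only):

```python
import bitsandbytes as bnb
from transformers import get_cosine_schedule_with_warmup

# 8-bit AdamW from bitsandbytes, matching optimizer: adamw_bnb_8bit and the
# Adam betas/epsilon listed under the training hyperparameters.
optimizer = bnb.optim.AdamW8bit(
    model.parameters(),   # `model` as loaded in the snippet above
    lr=5e-6,              # learning_rate: 0.000005
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,     # weight_decay: 0.0
)

# Cosine decay with warmup, matching lr_scheduler: cosine,
# warmup_steps: 10 and training_steps: 134.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10,
    num_training_steps=134,
)
```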
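The filled-in `training_steps: 134` corresponds to one epoch at an effective batch size of `micro_batch_size * gradient_accumulation_steps` per device (2 * 4 = 8). A back-of-the-envelope check, assuming a single GPU since the card does not state the world size:

```python
# Values taken from the config above.
micro_batch_size = 2
gradient_accumulation_steps = 4
num_epochs = 1
training_steps = 134

world_size = 1  # assumption: single GPU; not stated in the card
effective_batch_size = micro_batch_size * gradient_accumulation_steps * world_size  # = 8

# steps = ceil(num_samples / effective_batch_size) * num_epochs, so 134 steps
# implies at most this many training samples per epoch.
implied_samples_upper_bound = training_steps * effective_batch_size  # = 1072
print(effective_batch_size, implied_samples_upper_bound)
```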