🚨 Try the new Chocolatine-2 14B, available here 🚨

# Chocolatine-14B-Instruct-DPO-v1.2
DPO fine-tuning of microsoft/Phi-3-medium-4k-instruct (14B params) using the jpacifico/french-orca-dpo-pairs-revised RLHF dataset.
Training in French also improves the model in English, surpassing the performance of its base model.

Context window: 4k tokens
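For readers unfamiliar with DPO: it trains the policy to prefer the "chosen" answer over the "rejected" one relative to a frozen reference model. Below is a minimal pure-Python sketch of the DPO objective (Rafailov et al.) for a single preference pair — toy numbers, not the actual training code used for this model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    logits = beta * ((policy - ref) log-ratio of chosen
                     minus (policy - ref) log-ratio of rejected)
    loss   = -log(sigmoid(logits))
    """
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Toy sequence log-probabilities: the policy favors the chosen answer
# more than the reference does, so the loss is below log(2).
loss = dpo_loss(-12.0, -15.0, -13.0, -14.0, beta=0.1)
print(round(loss, 4))
```

When the policy and reference agree exactly, the logits are 0 and the loss is log(2); widening the chosen-vs-rejected margin drives the loss toward 0.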
- 4-bit quantized version available here: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2-Q4_K_M-GGUF
- Update 2024/12/15: also available on Ollama: jpacifico/chocolatine-14b

```shell
ollama run jpacifico/chocolatine-14b
```
### OpenLLM Leaderboard

Chocolatine is the best-performing model in the 13B size category on the OpenLLM Leaderboard (last update: 2024/10/18).
| Metric | Value |
|---|---|
| Avg. | 33.30 |
| IFEval (0-shot) | 68.52 |
| BBH (3-shot) | 49.85 |
| MATH Lvl 5 (4-shot) | 17.98 |
| GPQA (0-shot) | 10.07 |
| MuSR (0-shot) | 12.35 |
| MMLU-PRO (5-shot) | 41.07 |
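The reported average is (up to rounding of the published per-task scores) the mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores reported above for this model
scores = {
    "IFEval": 68.52,
    "BBH": 49.85,
    "MATH Lvl 5": 17.98,
    "GPQA": 10.07,
    "MuSR": 12.35,
    "MMLU-PRO": 41.07,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # ~33.3; tiny deviations come from per-task rounding
```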
### MT-Bench-French

Chocolatine-14B-Instruct-DPO-v1.2 outperforms its previous versions and its base model Phi-3-medium-4k-instruct on MT-Bench-French, evaluated with multilingual-mt-bench and GPT-4-Turbo as the LLM judge.

[Update 2025/02/27] Chocolatine-2 v2.0.3 added
```
########## First turn ##########
                                             score
model                                 turn
gpt-4o-mini                           1     9.287500
Chocolatine-2-14B-Instruct-v2.0.3     1     9.112500
Qwen2.5-14B-Instruct                  1     8.887500
Chocolatine-14B-Instruct-4k-DPO       1     8.637500
Chocolatine-14B-Instruct-DPO-v1.2     1     8.612500
Phi-3.5-mini-instruct                 1     8.525000
Chocolatine-3B-Instruct-DPO-v1.2      1     8.375000
DeepSeek-R1-Distill-Qwen-14B          1     8.375000
phi-4                                 1     8.300000
Phi-3-medium-4k-instruct              1     8.225000
gpt-3.5-turbo                         1     8.137500
Chocolatine-3B-Instruct-DPO-Revised   1     7.987500
Daredevil-8B                          1     7.887500
Meta-Llama-3.1-8B-Instruct            1     7.050000
vigostral-7b-chat                     1     6.787500
Mistral-7B-Instruct-v0.3              1     6.750000
gemma-2-2b-it                         1     6.450000

########## Second turn ##########
                                             score
model                                 turn
Chocolatine-2-14B-Instruct-v2.0.3     2     9.050000
gpt-4o-mini                           2     8.912500
Qwen2.5-14B-Instruct                  2     8.912500
Chocolatine-14B-Instruct-DPO-v1.2     2     8.337500
DeepSeek-R1-Distill-Qwen-14B          2     8.200000
phi-4                                 2     8.131250
Chocolatine-3B-Instruct-DPO-Revised   2     7.937500
Chocolatine-3B-Instruct-DPO-v1.2      2     7.862500
Phi-3-medium-4k-instruct              2     7.750000
Chocolatine-14B-Instruct-4k-DPO       2     7.737500
gpt-3.5-turbo                         2     7.679167
Phi-3.5-mini-instruct                 2     7.575000
Daredevil-8B                          2     7.087500
Meta-Llama-3.1-8B-Instruct            2     6.787500
Mistral-7B-Instruct-v0.3              2     6.500000
vigostral-7b-chat                     2     6.162500
gemma-2-2b-it                         2     6.100000

########## Average ##########
                                          score
model
gpt-4o-mini                            9.100000
Chocolatine-2-14B-Instruct-v2.0.3      9.081250
Qwen2.5-14B-Instruct                   8.900000
Chocolatine-14B-Instruct-DPO-v1.2      8.475000
DeepSeek-R1-Distill-Qwen-14B           8.287500
phi-4                                  8.215625
Chocolatine-14B-Instruct-4k-DPO        8.187500
Chocolatine-3B-Instruct-DPO-v1.2       8.118750
Phi-3.5-mini-instruct                  8.050000
Phi-3-medium-4k-instruct               7.987500
Chocolatine-3B-Instruct-DPO-Revised    7.962500
gpt-3.5-turbo                          7.908333
Daredevil-8B                           7.487500
Meta-Llama-3.1-8B-Instruct             6.918750
Mistral-7B-Instruct-v0.3               6.625000
vigostral-7b-chat                      6.475000
gemma-2-2b-it                          6.275000
```
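The Average table above is simply the per-model mean of the two turn scores, which is easy to verify for this model:

```python
# Turn scores for Chocolatine-14B-Instruct-DPO-v1.2 from the tables above
turn_1 = 8.6125
turn_2 = 8.3375
average = (turn_1 + turn_2) / 2
print(average)  # 8.475, matching the Average table
```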
### Usage

You can run this model using my Colab notebook.

You can also run Chocolatine with the following code:
```python
import transformers
from transformers import AutoTokenizer

model_name = "jpacifico/Chocolatine-14B-Instruct-DPO-v1.2"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
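For intuition about what `apply_chat_template` returns here, Phi-3-family tokenizers typically render messages in the shape sketched below. This is a hypothetical, hand-written illustration (`phi3_prompt` is not a library function — check the tokenizer's actual chat template):

```python
def phi3_prompt(messages):
    """Hypothetical sketch of a Phi-3-style chat template."""
    parts = []
    for m in messages:
        # Each message becomes <|role|>\n<content><|end|>\n
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    # add_generation_prompt=True appends the assistant header
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = phi3_prompt([
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
])
print(prompt)
```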
### Limitations

The Chocolatine model series is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance. It does not include any moderation mechanism.
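Because no moderation is built in, applications that need filtering must add their own layer around the model's output. A toy, hypothetical sketch (illustration only, not a real safety mechanism — the blocklist entries are placeholders):

```python
# Hypothetical blocklist, for illustration only -- not production-grade safety.
BLOCKLIST = {"forbidden_term"}

def is_allowed(text: str) -> bool:
    """Return False if the text contains any blocklisted term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(is_allowed("What is a Large Language Model?"))  # True
```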
- Developed by: Jonathan Pacifico, 2024
- Model type: LLM
- Language(s) (NLP): French, English
- License: MIT
Made with ❤️ in France