pandora-s committed
Commit fe18043 · verified · 1 Parent(s): cdb3642

[AUTO] CVST Tokenizer Badger


A scripted PR to update the status of the `transformers` tokenizer.

> [!CAUTION]
> ⚠️
> The `transformers` tokenizer might give incorrect results as it has not been tested by the Mistral team. To make sure that your encoding and decoding are correct, please use mistral-common as shown below:
>
> ```py
> from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
> from mistral_common.protocol.instruct.messages import UserMessage
> from mistral_common.protocol.instruct.request import ChatCompletionRequest
>
> mistral_models_path = "MISTRAL_MODELS_PATH"
>
> tokenizer = MistralTokenizer.v1()
>
> completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
>
> tokens = tokenizer.encode_chat_completion(completion_request).tokens
>
> ## Inference with `mistral_inference`
>
> from mistral_inference.model import Transformer
> from mistral_inference.generate import generate
>
> model = Transformer.from_folder(mistral_models_path)
> out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
> result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
>
> print(result)
>
> ## Inference with Hugging Face `transformers`
>
> import torch
> from transformers import AutoModelForCausalLM
>
> device = "cuda"
> model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
> model.to(device)
>
> # mistral-common returns a plain list of token ids; wrap it in a tensor for `generate`
> input_ids = torch.tensor([tokens]).to(device)
> generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)
>
> # decode with the mistral-common tokenizer, as recommended above
> result = tokenizer.instruct_tokenizer.tokenizer.decode(generated_ids[0].tolist())
> print(result)
> ```
>
> PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral-common reference implementation are very welcome!
>

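The caution above invites PRs that bring the `transformers` tokenizer to 1-to-1 parity with the mistral-common reference. As a starting point, here is a minimal comparison sketch. It is not part of this commit, and it assumes `mistral-common` and `transformers` are installed and that the `mistralai/Mixtral-8x7B-Instruct-v0.1` repository ships a chat template; it simply checks whether the two tokenizers produce the same token ids for one user message.

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from transformers import AutoTokenizer

prompt = "Explain Machine Learning to me in a nutshell."

# Reference encoding with mistral-common (v1 tokenizer, as in the snippet above).
ref_tokenizer = MistralTokenizer.v1()
ref_ids = ref_tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content=prompt)])
).tokens

# Candidate encoding with the transformers tokenizer and its chat template.
hf_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
hf_ids = hf_tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}], tokenize=True
)

# If the tokenizers agree, the two id sequences match element for element.
print("identical:", ref_ids == hf_ids)
print("mistral-common:", ref_ids)
print("transformers:  ", hf_ids)
```

If the printed sequences differ, the tokenizer files and chat template in this repository are the natural place to patch.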

Files changed (1)
  1. README.md +53 -0

README.md CHANGED
@@ -15,6 +15,59 @@ widget:
  content: What is your favorite condiment?
---
# Model Card for Mixtral-8x7B
+
+ ###
+
+ > [!CAUTION]
+ > ⚠️
+ > The `transformers` tokenizer might give incorrect results as it has not been tested by the Mistral team. To make sure that your encoding and decoding are correct, please use mistral-common as shown below:
+ >
+ > ```py
+ > from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
+ > from mistral_common.protocol.instruct.messages import UserMessage
+ > from mistral_common.protocol.instruct.request import ChatCompletionRequest
+ >
+ > mistral_models_path = "MISTRAL_MODELS_PATH"
+ >
+ > tokenizer = MistralTokenizer.v1()
+ >
+ > completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
+ >
+ > tokens = tokenizer.encode_chat_completion(completion_request).tokens
+ >
+ > ## Inference with `mistral_inference`
+ >
+ > from mistral_inference.model import Transformer
+ > from mistral_inference.generate import generate
+ >
+ > model = Transformer.from_folder(mistral_models_path)
+ > out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
+ > result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
+ >
+ > print(result)
+ >
+ > ## Inference with Hugging Face `transformers`
+ >
+ > import torch
+ > from transformers import AutoModelForCausalLM
+ >
+ > device = "cuda"
+ > model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
+ > model.to(device)
+ >
+ > # mistral-common returns a plain list of token ids; wrap it in a tensor for `generate`
+ > input_ids = torch.tensor([tokens]).to(device)
+ > generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)
+ >
+ > # decode with the mistral-common tokenizer, as recommended above
+ > result = tokenizer.instruct_tokenizer.tokenizer.decode(generated_ids[0].tolist())
+ > print(result)
+ > ```
+ >
+ > PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral-common reference implementation are very welcome!
+ >
+
+ ---
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).