EnlistedGhost committed
Commit d30a624 · verified · 1 parent: b335724

Added: Modelfile and gguf for Q5_K_M

.gitattributes CHANGED
@@ -41,3 +41,4 @@ Pixtral-12B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
  Pixtral-12B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Pixtral-12B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
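
For anyone reproducing this setup in their own repo: an attribute line like the one added above is exactly what `git lfs track` writes into `.gitattributes`, so the same entry could have been produced with (a sketch, assuming Git LFS is installed and initialized):

```shell
# Tell Git LFS to manage the new quant file; this appends the
# matching filter/diff/merge line to .gitattributes
git lfs track "Pixtral-12B-Q5_K_M.gguf"

# Commit the updated .gitattributes alongside the tracked file
git add .gitattributes Pixtral-12B-Q5_K_M.gguf
```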
Modelfile-Pixtral-12B-Q5_K_M.md ADDED
@@ -0,0 +1,72 @@
+ # Pixtral-12B-GGUF Modelfile (Q5_K_M)
+ # ---------------------------------
+ #
+ # Tested with: Ollama v0.11.X through v0.12.6 (latest)
+ # Quantization: Q5_K_M (quant created by mradermacher)
+ # Quality: Very good (updated 2025/10/28)
+ # Real-world usability: Recommended!
+ # ----------------------------------------------------
+ #
+ # Vision notes:
+ # Some users may need to raise the context value ("num_ctx")
+ # to roughly 9K-19K.
+ # Personally tested with: num_ctx=9982 and num_ctx=19982
+ # -----------------------------------------------------------
+ #
+ # Created by:
+ # EnlistedGhost (aka Jon Zaretsky)
+ # Original GGUF by: https://huggingface.co/mradermacher
+ # Original GGUF type: static quantization (non-iMatrix)
+ # ----------------------------------------------------------
+ # | Warning! - iMatrix quantization seems to suffer in     |
+ # | vision quality, but is still made available            |
+ # ----------------------------------------------------------
+ #
+ # Goal:
+ # To provide the FIRST actually functional and usable
+ # GGUF version of Mistral's Pixtral-12B for direct use
+ # with Ollama!
+ # Currently, there are NO other usable or working versions
+ # of this model that work with Ollama...
+ # ---------------------------------------------------
+ #
+ # Big/Giant/Huge thank you:
+ # (ggml-org, bartowski, mradermacher, and the Ollama team)
+ # ggml-org: working mmproj-pixtral vision projector!
+ # bartowski: working iMatrix quants that pair with the ggml-org vision projector!
+ # mradermacher: working static quants that pair with the ggml-org vision projector!
+ # Ollama team: because without them, none of this would be possible in the first place!
+ # ------------------------------------------------------------------------------------
+ #
+ # Import our GGUF quant files:
+ # (Assuming: Linux operating system)
+ # (Assuming: downloaded files are stored in the "Downloads" directory)
+ FROM ~/Downloads/mmproj-pixtral-12b-f16.gguf
+ FROM ~/Downloads/Pixtral-12B-Q5_K_M.gguf
+ # ------------------------------------------------------------------------
+ #
+ # Set the default system message/prompt:
+ SYSTEM """
+ #
+ # !!!-WARNING-!!!
+ # (Do not modify for the "recommended" configuration and behavior)
+ #
+ # !!!-OPTIONAL-!!!
+ # Pixtral-12B does NOT ship with a system prompt by default; however, you can add one in this section of the Ollama Modelfile. Be aware that a system prompt can damage the link between Pixtral and its vision projector; BE CAREFUL!
+ """
+ # -------------------------------------------------------------------
+ #
+ # Define the model chat template (thank you to @rick-github for this mic-drop)
+ # Link to @rick-github's post: https://github.com/ollama/ollama/issues/6748#issuecomment-3368146231
+ TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""
+ #
+ # Stop parameters below (required for proper assistant/user multi-turn chat)
+ PARAMETER stop [INST]
+ PARAMETER stop [/INST]
+ #
+ # Enjoy Pixtral-12B-GGUF for the ppl!
+ # Erm, or at least for Ollama users...
+ # <3 (^.^) <3
+ #
+ # Notice: please read the "Instructions.md" on HuggingFace or the Ollama website
+ # for a how-to guide on using this Modelfile!
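
Once both GGUF files referenced by the `FROM` lines are downloaded, the Modelfile above is registered with Ollama via `ollama create`. A minimal usage sketch (the model name `pixtral-12b` and the image path are arbitrary examples, not from this repo):

```shell
# Build the Ollama model from the Modelfile; paths inside the
# Modelfile must point at the downloaded GGUF files
ollama create pixtral-12b -f Modelfile-Pixtral-12B-Q5_K_M.md

# Run it; for vision models the Ollama CLI picks up an image
# when its file path appears in the prompt
ollama run pixtral-12b "Describe this image: /path/to/photo.jpg"
```

If vision output degrades on large images, raising the context window (e.g. `PARAMETER num_ctx 9982` in the Modelfile, per the notes above) is the first thing to try.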
Pixtral-12B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47dbf57e4eb5e745ee8308d0fc83c98a729ead483642a0ede689ac93a60303fc
+ size 8727631936
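
The file above is a Git LFS pointer, not the 8.7 GB model itself; the real blob arrives via `git lfs pull` or a direct download. After downloading, the blob can be checked against the `oid` recorded in the pointer. A minimal sketch in Python (the local file path is a hypothetical example):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks so a
    multi-gigabyte GGUF never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# The expected hash is the oid from the LFS pointer above
EXPECTED = "47dbf57e4eb5e745ee8308d0fc83c98a729ead483642a0ede689ac93a60303fc"

# Example check after downloading (path is hypothetical):
# assert sha256_of("Pixtral-12B-Q5_K_M.gguf") == EXPECTED
```

A mismatch here usually means a truncated download; re-fetching the file is the fix.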