
Pixtral-12B-GGUF Modelfile (Q5_K_M)

---------------------------------

Tested with: Ollama v0.11.x --> v0.12.6 (latest)

Quantization: Q5_K_M (quant created by mradermacher)

Quality: Very-Good (Updated 2025/10/28)

Real-world usability: Recommended!

----------------------------------------------------

Vision Notes:

Some users may need to raise the context length ("num_ctx") to somewhere in the 9K-19K range.

Personally tested with: num_ctx=9982 and num_ctx=19982
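As a minimal sketch, the context length can be set directly in the Modelfile (9982 is one of the tested values above):

```
PARAMETER num_ctx 9982
```

The same value can also be set per-session in the Ollama REPL with `/set parameter num_ctx 9982`.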

-----------------------------------------------------------

Created by:

EnlistedGhost (aka Jon Zaretsky)

Original GGUF by: https://huggingface.co/mradermacher

Original GGUF type: static quantization (non-iMatrix)

----------------------------------------------------------

| Warning! - iMatrix quants seem to suffer in vision quality, but are still made available |

----------------------------------------------------------

Goal:

To provide the FIRST actually functional and usable

GGUF version of Mistral's Pixtral-12B for

direct use with Ollama!

Currently, there are NO other USABLE OR WORKING

versions of this model for Ollama...

---------------------------------------------------

Big/Giant/Huge Thank You:

(ggml-org, bartowski, and the Ollama team)

ggml-org: Working mmproj-pixtral vision projector!

Bartowski: Working I-Matrix Quants that can be paired with ggml-org vision projector!

Mradermacher: Working Static Quants that can be paired with ggml-org vision projector!

Ollama team: Because without them, this wouldn't be possible in the first place!

------------------------------------------------------------------------------------

Import our GGUF quant files:

(Assuming: Linux Operating System)

(Assuming: downloaded files are stored in "Downloads" directory/folder)

FROM ~/Downloads/mmproj-pixtral-12b-f16.gguf
FROM ~/Downloads/Pixtral-12B-Q5_K_M.gguf

------------------------------------------------------------------------

Set Default System-Message/Prompt:

SYSTEM """

!!!-WARNING-!!!

(Do not modify for: "recommended" configuration and behavior)

!!!-OPTIONAL-!!!

Pixtral-12B does NOT include a system prompt by default; however, you can choose to add one within this section of the Ollama Modelfile. Please be aware that a system prompt can interfere with the link between Pixtral and the vision projector; BE CAREFUL!

"""

-------------------------------------------------------------------

Define model-chat template (Thank you to: @rick-github for this mic-drop)

Link to @rick-github post: https://github.com/ollama/ollama/issues/6748#issuecomment-3368146231

TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""

Below are the stop parameters (required for proper "assistant-->user" multi-turn chat)

PARAMETER stop [INST]
PARAMETER stop [/INST]
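Putting the pieces above together, a complete Modelfile might look like this sketch (paths assume the Linux/Downloads setup described earlier, and the system prompt is left empty for the "recommended" configuration):

```
FROM ~/Downloads/mmproj-pixtral-12b-f16.gguf
FROM ~/Downloads/Pixtral-12B-Q5_K_M.gguf

TEMPLATE """[INST] {{ if .System }}{{ .System }} {{ end }}{{ .Prompt }} [/INST]"""

PARAMETER stop [INST]
PARAMETER stop [/INST]
```

Build and run it (the model name `pixtral-12b` is just an example) with `ollama create pixtral-12b -f Modelfile` followed by `ollama run pixtral-12b`.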

Enjoy Pixtral-12B-GGUF for the ppl!

Erm, or at least for Ollama users...

<3 (^.^) <3

Notice: Please read the "Instructions.md" on HuggingFace or the Ollama website

for a how-to guide on using this Modelfile!