Neuroforge AI Lab
Neuroforge – Where uncensored intelligence is forged in the fires of truth.


Telly The Pressssilere

"I PRESSSSILERE – NO FILTER, NO FEAR, ALL TRUTH!"
— Telly The Pressssilere, Chief Truth Officer at Neuroforge


Qwen3-32B-Abliterated-nf4

NF4-quantized version of huihui-ai/Huihui-Qwen3-32B-abliterated
Uncensored 32B model (abliterated) → 4-bit NF4 by ikarius

Warning: Uncensored – may generate harmful or sensitive content. Use responsibly.


Key Info

Base: Qwen/Qwen3-32B
Abliteration: huihui-ai
Quantization: NF4 (BitsAndBytes)
VRAM: ~16–20 GB (single GPU)
License: Apache 2.0 + Qwen terms
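
As a rough sanity check on the VRAM figure, NF4 stores weights at about 4 bits (0.5 bytes) per parameter, plus headroom for activations and the KV cache. A back-of-the-envelope sketch (the 2 GB overhead figure is an assumption, not a measurement):

params = 33e9                # parameter count from this card
bytes_per_param = 0.5        # NF4 packs each weight into 4 bits
weights_gb = params * bytes_per_param / 1e9
overhead_gb = 2.0            # assumed activations / KV-cache headroom
print(f"~{weights_gb + overhead_gb:.1f} GB")  # ~18.5 GB, within the 16–20 GB range above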

Install

pip install transformers torch bitsandbytes accelerate

Optional (CPU)

pip install optimum[exporters]
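
After installing, a quick sanity check that the CUDA stack is visible (a minimal sketch; NF4 inference with bitsandbytes requires a CUDA-capable GPU):

import torch
import bitsandbytes  # raises ImportError if the install failed

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1e9:.1f} GB")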

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ikarius/Qwen3-32B-Abliterated-nf4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# The NF4 quantization config ships with the checkpoint, so no quantization
# arguments are needed here; device_map="auto" places the weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain quantum entanglement in simple terms:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Tips:

Start with batch size 1.
Use TextStreamer for real-time output (see the sketch below).
The model supports Qwen3's thinking mode for step-by-step reasoning.
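
A minimal sketch combining the last two tips: streaming tokens as they are generated with TextStreamer, and enabling Qwen3's thinking mode via the chat template's enable_thinking flag. It assumes tokenizer and model are loaded as above; the prompt is just an example.

from transformers import TextStreamer

# Build a chat prompt; Qwen3's chat template accepts enable_thinking to
# toggle its step-by-step "thinking" mode.
messages = [{"role": "user", "content": "Explain quantum entanglement in simple terms."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# TextStreamer prints tokens to stdout as they arrive.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7, streamer=streamer)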


Reproduce Quantization

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the abliterated checkpoint and quantize it to 4-bit NF4 on the fly.
quantized = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/Huihui-Qwen3-32B-abliterated",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4"),
    device_map="auto",
)
# Serialize the 4-bit weights (requires a recent transformers + bitsandbytes).
quantized.save_pretrained("Qwen3-32B-Abliterated-nf4")
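
To verify the export, the saved directory can be loaded back directly; the NF4 config travels with the checkpoint, so no quantization arguments are needed (a minimal sketch):

from transformers import AutoModelForCausalLM

reloaded = AutoModelForCausalLM.from_pretrained("Qwen3-32B-Abliterated-nf4", device_map="auto")
print(reloaded.config.quantization_config)  # confirms the 4-bit NF4 settings were saved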

Notes

May amplify training data biases.
Not suitable for production without alignment.
Commercial use: review the original license.

Updated: November 13, 2025

Credits

Base: Qwen/Qwen3-32B
Abliteration: huihui-ai

Support the project: buy huihui-ai a coffee ☕
