Model Card for LLM Instruction‑Tuning for Text Classification (LoRA + QLoRA)

This repository provides code and configuration to fine‑tune a decoder‑only LLM (default: meta-llama/Llama-3.2-1B) for instruction‑style text classification using LoRA/QLoRA. Rather than training a task‑specific classifier head, the project formulates classification as a short instruction → answer generation task and evaluates by exact string match against the label. It includes simple training/inference scripts, a 5‑label arXiv‑style demo, and optional Amazon SageMaker entrypoints.

Model Details

Model Description

This project instruction‑tunes a base decoder‑only LLM with LoRA adapters, loading the base weights in 4‑bit NF4 precision for memory‑efficient training and inference. Supervised fine‑tuning is performed with TRL’s SFTTrainer. Prompts ask the model to “return the answer as the exact text label,” so predictions are decoded as plain text and compared to the gold label by exact string match.

  • Developed by: Amirhossein Yousefi (GitHub: amirhossein-yousefi)
  • Model type: Decoder‑only LLM fine‑tuned with LoRA for single‑label text classification via instruction‑following
  • Language(s) (NLP): English by default (demo dataset uses arXiv titles/abstracts); broader multilingual coverage depends on the chosen base model
  • License: The repository itself does not include an explicit OSS license; the base model meta-llama/Llama-3.2-1B is governed by the Llama 3.2 Community License. You must accept and comply with Meta’s license to access and use the weights.
  • Finetuned from model: meta-llama/Llama-3.2-1B (configurable)

Model Sources

  • Repository: https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification

Uses

Direct Use

  • Fine‑tune LoRA adapters on your own CSV dataset for single‑label text classification (e.g., topic/category detection) using the provided scripts/train.py.
  • Run inference/evaluation with scripts/predict.py to generate deterministic label strings and compute accuracy, micro/macro F1, a classification report, and a confusion matrix.
  • Optional Amazon SageMaker utilities let you run managed training and deploy a real‑time endpoint with the LoRA adapters attached at load time.

Downstream Use

  • Integrate the trained LoRA adapters into applications where explainable, instruction‑driven classification is helpful (e.g., routing, tagging, moderation).
  • Swap the base model (any compatible decoder‑only LLM on the Hugging Face Hub) and re‑train with the same prompt template.
  • Extend label sets without architectural changes—only prompt/label lists need to be updated.

Out-of-Scope Use

  • CPU‑only training/inference with this repo as‑is (4‑bit bitsandbytes path expects NVIDIA CUDA GPUs).
  • Multi‑label classification (comma‑separated outputs) is not implemented out of the box (listed as a roadmap idea).
  • Open‑domain generation or safety‑critical decision‑making; this project focuses on label selection with short inputs.

Bias, Risks, and Limitations

  • Outputs mirror biases in the training corpus you provide and in the base model. If your labels or examples are imbalanced or ambiguous, the model may propagate that bias.
  • Exact‑match decoding can be brittle to tokenization/typo effects—ensure labels are short, canonical strings and restrict the decoding space.
  • The base Llama 3.2 model has its own safety limitations and license‑based usage constraints (e.g., attribution and acceptable‑use provisions).
  • The demo dataset is limited to 5 arXiv‑style labels and relatively short academic texts; generalizing beyond this domain requires additional data.

Recommendations

  • Curate balanced datasets; consider stratified splits and per‑class metrics.
  • Keep temperature = 0.0 for deterministic label decoding; constrain generation length (e.g., max_new_tokens=8).
  • Validate robustness with label synonyms/aliases and adversarial cases; consider post‑processing that maps variants to canonical labels (a sketch follows this list).
  • Review and comply with the Llama 3.2 Community License (and any other upstream licenses) when distributing adapters/derivatives.
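
As a concrete example of the post‑processing recommendation above, a minimal sketch in Python; the alias table is illustrative, not part of the repository:

def canonicalize(generated: str, labels: list[str]) -> str | None:
    # Hypothetical alias table mapping generated variants to canonical labels.
    aliases = {"computation and language": "cs.CL", "computer vision": "cs.CV"}
    text = generated.strip()
    for label in labels:                      # exact, case-insensitive match first
        if text.lower() == label.lower():
            return label
    return aliases.get(text.lower())          # then known aliases; None counts as a miss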

How to Get Started with the Model

Install & train

python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install -r requirements.txt

# If the base model is gated, export an HF token
export HF_TOKEN=YOUR_HF_ACCESS_TOKEN

# One‑command training on CSVs
python scripts/train.py \
  --base_path dataset \
  --train_file train.csv \
  --val_file validation.csv \
  --test_file test.csv \
  --label_column label_name \
  --text_fields title abstract \
  --base_model_name meta-llama/Llama-3.2-1B \
  --output_dir llama-3.2-1b-arxiver-lora

Inference & evaluation

python scripts/predict.py \
  --base_path dataset \
  --test_file test.csv \
  --base_model_name meta-llama/Llama-3.2-1B \
  --output_dir llama-3.2-1b-arxiver-lora \
  --save_csv predictions.csv
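
For ad‑hoc use outside scripts/predict.py, a minimal sketch of loading the adapter for inference (assuming the adapter directory above; the prompt template must match training):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = "meta-llama/Llama-3.2-1B"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, "llama-3.2-1b-arxiver-lora")  # attach LoRA adapter

prompt = "...instruction and input text...\nlabel:"  # use the training template verbatim
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # deterministic decoding
label = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()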

SageMaker (optional)

# Train a managed job
python sagemaker/train_sm.py \
  --source_dir . \
  --dataset_dir dataset \
  --train_file train.csv \
  --val_file validation.csv \
  --test_file test.csv \
  --label_column label_name \
  --text_fields title abstract \
  --base_model_id meta-llama/Llama-3.2-1B \
  --instance_type ml.g5.2xlarge \
  --instance_count 1

# Deploy a real‑time endpoint
python sagemaker/deploy_sm.py \
  --training_job_name <your-job> \
  --base_model_id meta-llama/Llama-3.2-1B \
  --instance_type ml.g5.2xlarge \
  --default_labels_json '["cs.CL","cs.CV","cs.LG","hep-ph","quant-ph"]'
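
Once deployed, the endpoint can be invoked with boto3; the JSON payload shape below is an assumption, so check sagemaker/deploy_sm.py for the actual request/response schema:

import boto3, json

runtime = boto3.client("sagemaker-runtime")
payload = {"title": "An example paper title", "abstract": "An example abstract ..."}  # assumed schema
response = runtime.invoke_endpoint(
    EndpointName="<your-endpoint>",     # fill in the deployed endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(response["Body"].read().decode("utf-8"))  # expected: a label string such as "cs.CL"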

Training Details

Training Data

  • Expected input: three CSV files under a base path: train.csv, validation.csv, test.csv.
  • Required columns: a label column (default label_name) and one or more text fields (defaults: title, abstract). Missing/blank text fields are skipped; text fields are concatenated with punctuation. An example row layout is shown after this list.
  • The repository ships utilities to prepare a 5‑class arXiv‑style demo (labels: ['cs.CL','cs.CV','cs.LG','hep-ph','quant-ph']).
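
For illustration, a train.csv using the default column names might look like this (rows are invented):

label_name,title,abstract
cs.CL,"A Survey of Prompting Methods","We review prompting techniques for language models ..."
hep-ph,"Higgs Production at the LHC","We compute next-to-leading-order corrections ..."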

Training Procedure

Preprocessing

  • Prompts are constructed as short instruction → answer pairs (sketched in code below):
    • Train: includes the gold label after label:.
    • Inference: leaves label: empty and decodes the generated label.
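
A minimal sketch of this template; the exact wording lives in the repository's code, so the function below is illustrative:

def build_prompt(text: str, labels: list[str], gold: str | None = None) -> str:
    """Training prompts include the gold label after `label:`; inference leaves it empty."""
    prompt = (
        "Classify the text into one of the following categories and "
        f"return the answer as the exact text label: {', '.join(labels)}.\n"
        f"text: {text}\n"
        "label:"
    )
    return f"{prompt} {gold}" if gold is not None else prompt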

Training Hyperparameters

  • Training regime: mixed precision with fp16=True, tf32=True; 4‑bit NF4 quantization with bfloat16 compute (QLoRA‑style).
  • Selected defaults (single‑GPU; assembled in the sketch after this list):
    • num_train_epochs=1
    • per_device_train_batch_size=8, per_device_eval_batch_size=8
    • gradient_accumulation_steps=2 (effective batch size of 16 per device)
    • learning_rate=2e-4, weight_decay=1e-3, warmup_ratio=0.03
    • logging_steps=10, evaluation_strategy="epoch", save_strategy="epoch", save_total_limit=2
    • LoRA: r=2, alpha=2, dropout=0.0
    • Quantization: load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype="bfloat16", bnb_4bit_use_double_quant=True
    • Generation (eval): temperature=0.0, max_new_tokens=8, do_sample=False
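
A minimal sketch of how these defaults map onto the standard transformers/peft objects (not the repository's exact code; argument names follow the public APIs):

import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(            # 4-bit NF4 with bfloat16 compute (QLoRA-style)
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

lora_config = LoraConfig(r=2, lora_alpha=2, lora_dropout=0.0, task_type="CAUSAL_LM")

training_args = TrainingArguments(          # passed to TRL's SFTTrainer
    output_dir="llama-3.2-1b-arxiver-lora",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    weight_decay=1e-3,
    warmup_ratio=0.03,
    logging_steps=10,
    evaluation_strategy="epoch",            # renamed eval_strategy in recent transformers
    save_strategy="epoch",
    save_total_limit=2,
    fp16=True,
    tf32=True,
)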

Speeds, Sizes, Times

  • Example environment: Laptop RTX 3080 Ti (16 GB VRAM), CUDA 12.9, PyTorch 2.8.0+cu129.
  • Example run stats: ~6,314 seconds wall‑clock training, with TensorBoard logs under the run directory.
  • Total training FLOPs (example): ~3.69e16 (as reported by the training logs).

Evaluation

Testing Data, Factors & Metrics

Testing Data

  • The example evaluation uses the provided arXiv‑style 5‑label test split.

Factors

  • Per‑class metrics are reported for cs.CL, cs.CV, cs.LG, hep-ph, quant-ph.

Metrics

  • Accuracy, micro F1, macro F1, per‑class precision/recall/F1, and a confusion matrix (a computation sketch follows).
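
scripts/predict.py computes these internally; an equivalent sketch with scikit-learn:

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, f1_score

def report(y_true: list[str], y_pred: list[str]) -> None:
    print("accuracy:", accuracy_score(y_true, y_pred))
    print("micro F1:", f1_score(y_true, y_pred, average="micro"))
    print("macro F1:", f1_score(y_true, y_pred, average="macro"))
    print(classification_report(y_true, y_pred))       # per-class precision/recall/F1
    print(confusion_matrix(y_true, y_pred))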

Results

  • Overall: Accuracy 93.8%, Micro‑F1 0.938, Macro‑F1 0.950.
  • Per‑class (Precision / Recall / F1 / Support):
    • cs.CL: 0.914 / 0.963 / 0.938 / 432
    • cs.CV: 0.935 / 0.923 / 0.929 / 545
    • cs.LG: 0.917 / 0.890 / 0.903 / 536
    • hep-ph: 0.994 / 0.988 / 0.991 / 164
    • quant-ph: 0.986 / 0.990 / 0.988 / 293

Summary

The LoRA‑tuned Llama 3.2 1B model achieves strong performance on short academic texts while keeping training and inference affordable thanks to 4‑bit quantization. Performance is consistent across most classes, with particularly high scores for the physics categories.

Model Examination

  • The repo includes utilities for a classification report and confusion matrix. Inspect misclassifications to refine label definitions or add examples. Consider probing sensitivity to prompt wording.

Environmental Impact

(Approximate; depends on your hardware and run length.)
Use the MLCO2 Impact calculator with your GPU model, power draw, and wall‑clock runtime.

  • Hardware Type: Single NVIDIA GPU (example: RTX 3080 Ti Laptop 16 GB)
  • Hours used: ~1.75 hours (example)
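
A back‑of‑envelope sketch of the calculation; the power draw and grid intensity below are assumptions, not measurements:

power_kw = 0.150         # assumed average draw for a laptop RTX 3080 Ti
hours = 1.75             # example wall-clock runtime above
kwh = power_kw * hours   # ~0.26 kWh
kg_co2e = kwh * 0.4      # ~0.11 kgCO2e at an assumed 0.4 kgCO2e/kWh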

Technical Specifications

Model Architecture and Objective

  • Architecture: Decoder‑only Transformer (Llama 3.2 family when using the default base)
  • Objective: Supervised instruction‑tuning for single‑label classification via generative decoding with exact‑match evaluation
  • Context length: 512 tokens (config default; pass explicitly to trainer to ensure enforcement)

Compute Infrastructure

Hardware

  • NVIDIA CUDA GPU required for 4‑bit bitsandbytes training/inference
    (CPU‑only runs are not supported by the included scripts).

Software

  • Python ≥ 3.10, PyTorch, transformers, trl, peft, bitsandbytes, accelerate, and standard scientific Python packages.
  • Optional: Astral’s uv for faster, reproducible dependency management (the repo also ships requirements.txt).

Citation

If you use this repository, please cite the GitHub project and the base model as appropriate.

BibTeX (project):

@software{yousefi_2025_llm_instruction_tuning_text_classification,
  author    = {Yousefi, Amirhossein},
  title     = {LLM Instruction-Tuning for Text Classification (LoRA + QLoRA)},
  year      = {2025},
  publisher = {GitHub},
  url       = {https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification}
}

APA (project):
Yousefi, A. (2025). LLM Instruction‑Tuning for Text Classification (LoRA + QLoRA). GitHub. https://github.com/amirhossein-yousefi/LLM-Instruction-Tuning-Text-Classification

Base model: Meta AI. (2024). Llama 3.2‑1B [Computer software]. Meta. https://huggingface.co/meta-llama/Llama-3.2-1B

Glossary

  • LoRA: Low‑Rank Adapters for parameter‑efficient fine‑tuning.
  • QLoRA: LoRA training with quantized base weights (typically 4‑bit NF4) and higher‑precision compute.
  • SFT: Supervised Fine‑Tuning.
  • Exact‑match decoding: Evaluates whether the generated label text exactly matches the gold label string.

More Information

  • Amazon SageMaker scripts are included for managed training and deployment.
  • Roadmap ideas include multi‑label support and few‑shot exemplars in prompts.

Model Card Authors

  • Drafted by: ChatGPT (based on the repository’s README and code structure)
  • Repository author: Amirhossein Yousefi

Model Card Contact

  • Open an issue on the GitHub repository for questions or contributions.