QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models (arXiv:2509.17428, published Sep 22, 2025)
Spacer: Towards Engineered Scientific Inspiration (arXiv:2508.17661, published Aug 25, 2025)
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits (arXiv:2402.17764, published Feb 27, 2024)