Running on A10G
#191 Update app.py · opened 2 days ago by Novaciano
#190 splie not convert · opened 11 days ago by wekW
#189 tencent/Hunyuan-MT-7B · opened 2 months ago by wqerrewetw
#188 Problem GGUF'ing Steelskull/L3.3-Shakudo-70b · opened 2 months ago by RiggityWrckd
#187 GGUF My Repo re-design · opened 3 months ago by olegshulyakov
#186 AttributeError: module 'torch' has no attribute 'uint64' when trying to convert fine-tuned model to GGUF format. · opened 3 months ago by jshargo
#184 login pls? · opened 3 months ago by King-Cane
#183 microsoft/Phi-4-multimodal-instruct Not working · opened 3 months ago by jivaniyash
#182 Microsoft/Phi-4-mini-flash-reasoning Not Supported · opened 4 months ago by cob05
#181 Space not working - black screen · opened 4 months ago by Aleteian
#180 Moondream is not supported · opened 4 months ago by seedmanc
#179 Model pruning · opened 4 months ago by Clausss
#170 Imatrix quantization option is giving txt format error · opened 6 months ago by kaetemi
#167 [Errno 2] No such file: `llama-quantize` · opened 7 months ago by AlirezaF138
#166 Failed to convert my repo to gguf · opened 7 months ago by jonACE
#165 The model downloaded from civiti only has safetytenser but no config. What should I do? · opened 7 months ago by ly131022
#164 getting "You must be logged in to use GGUF-my-repo" but I am logged in? · opened 7 months ago by diegoasua
#163 Problem logging into gguf-my-repo although being logged in to hugging face · opened 7 months ago by ulizilles
#160 I have just converted Mistral-Small-3.1-24B-Instruct-2503 but it does not contain the vision adapter? · opened 8 months ago by Blakus
#158 Invalid file type Error · opened 8 months ago by Yuma42
#156 SmolVLM2 Support · opened 8 months ago by PlayAI
#153 ggml-org/gguf-my-repo fails to build · opened 8 months ago by lefromage
#151 More storage · opened 8 months ago by noNyve
#150 Cant convert Linq-Embed-Mistral · opened 8 months ago by Hoshino-Yumetsuki
#149 Add IQ2_(some-letter) quantization · opened 8 months ago by noNyve
#148 No support for making GGUF of HuggingFaceTB/SmolVLM-500M-Instruct · opened 9 months ago by TimexPeachtree
#147 Unable to convert Senqiao/LISA_Plus_7b · opened 9 months ago by PlayAI
#146 Unable to convert ostris/Flex.1-alpha · opened 9 months ago by fullsoftwares
#145 Crashes on watt-ai/watt-tool-70B · opened 10 months ago by ejschwartz
#144 Update app.py · opened 10 months ago by gghfez
#143 Unable to convert Phi-3 Vision · opened 10 months ago by venkatsriram
#141 Accessing own private repos · opened 11 months ago by themex1380
#139 Why can't i login? · opened 11 months ago by safe049
#137 If generating model cards readmes, consider adding support for these extra authorship parameters · opened 11 months ago by mofosyne
#129 Add F16 and BF16 quantization · opened about 1 year ago by andito
#128 update readme for card generation · opened about 1 year ago by ariG23498
#126 [bug] asymmetric t5 models fail to quantize · opened about 1 year ago by pszemraj
#125 [Bug] Extra files with related name were uploaded to the resulting repository · opened about 1 year ago by Felladrin
#124 Issue converting PEFT LoRA fine tuned model to GGUF · opened about 1 year ago by AdnanRiaz107
#123 Issue converting nvidia/NV-Embed-v2 to GGUF · opened about 1 year ago by redshiva
#122 Issue converting FLUX.1-dev model to GGUF format · opened about 1 year ago by cbrescia
#121 Add Llama 3.1 license · opened about 1 year ago by jxtngx
#120 Add an option to put all quantization variants in the same repo · opened about 1 year ago by A2va
#117 Phi-3.5-MoE-instruct · opened about 1 year ago by goodasdgood
#116 Fails to quntize T5 (xl and xxl) models · opened about 1 year ago by girishponkiya
#113 Arm optimized quants · opened about 1 year ago by SaisExperiments
#112 DeepseekForCausalLM is not supported · opened about 1 year ago by nanowell
#111 Please, update converting script. Llama.cpp added support for Nemotron and Minitron architectures. · opened about 1 year ago by NikolayKozloff
#110 Enable the created name repo to be without the quantization type · opened about 1 year ago by A2va