# sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits

This model was converted to MLX format from [`mlabonne/NeuralHermes-2.5-Mistral-7B`](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B).
Refer to the [original model card](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
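NeuralHermes-2.5 was trained with the ChatML conversation format, so raw prompts like `"hello"` may underperform. A minimal sketch, assuming the converted tokenizer carries over the original model's chat template (the message content below is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("sugatoray/mlx-neuralhermes-2.5-mistral-7b-q4bits")

# Format the conversation with the tokenizer's chat template (ChatML for
# NeuralHermes), appending the assistant header so generation continues
# as the assistant's reply.
messages = [{"role": "user", "content": "What is MLX?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```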
## Model tree

- Base model: mistralai/Mistral-7B-v0.1
- Finetuned from: teknium/OpenHermes-2.5-Mistral-7B