Training took 15m 23s. The license is Apache 2.0.

Uploaded model

  • Developed by: Sakalti
  • License: Apache 2.0
  • Finetuned from model: Sakalti/Saba1.5-1.5B

This Qwen model was trained 2x faster with Unsloth and Hugging Face's TRL library.
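
For illustration only, a minimal sketch of an Unsloth + TRL supervised fine-tuning run of the kind described above. The dataset name, LoRA settings, and trainer arguments are placeholders and assumptions, not the configuration actually used for Saba1.5-Pro, and the exact SFTTrainer signature varies across TRL versions.

```python
# Hypothetical Unsloth + TRL fine-tuning sketch; all values below are illustrative.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model named in the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Sakalti/Saba1.5-1.5B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions, not the actual recipe).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("your_sft_dataset", split="train")  # placeholder dataset name

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes a plain-text "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```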

Safetensors · Model size: 2B params · Tensor type: F16
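
A minimal usage sketch with the Transformers library; the prompt and generation settings are illustrative and assume the tokenizer ships a chat template.

```python
# Load Saba1.5-Pro and generate a short reply (requires transformers, torch, accelerate).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sakalti/Saba1.5-Pro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "自己紹介してください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```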
