mlabonne committed on
Commit 27b199a · verified · 1 Parent(s): f211bb9

Update README.md

Files changed (1)
  1. README.md +10 -0
README.md CHANGED
@@ -108,6 +108,16 @@ As we are excited about edge deployment, our goal is to limit memory consumption
  - llama.cpp: [LFM2-350M-Math-GGUF](https://huggingface.co/LiquidAI/LFM2-350M-Math-GGUF)
  - LEAP: [LEAP model library](https://leap.liquid.ai/models?model=lfm2-350M-math)
 
+ You can use the following Colab notebooks for easy inference and fine-tuning:
+
+ | Notebook | Description | Link |
+ |-------|------|------|
+ | Inference | Run the model with Hugging Face's transformers library. | <a href="https://colab.research.google.com/drive/1TfLUH1vpIiJE6TdZTlMxhbp95f3BNKaD?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+ | SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+
  ## 📬 Contact
 
  If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
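
The notebooks added in this commit cover inference and LoRA-based fine-tuning. As a rough illustration of the inference path, here is a minimal sketch using Hugging Face's transformers library; the repo ID `LiquidAI/LFM2-350M-Math`, the dtype/device choices, and the use of a chat template are assumptions inferred from the links above, not taken from the notebook itself.

```python
# Minimal inference sketch (assumptions: the base checkpoint is published as
# "LiquidAI/LFM2-350M-Math" and ships a chat template; see the Inference
# notebook linked above for the exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-350M-Math"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a chat-formatted prompt and generate deterministically.
messages = [{"role": "user", "content": "Compute 17 * 24 and explain the steps."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In the spirit of the SFT (TRL) notebook, a hedged sketch of supervised fine-tuning with a LoRA adapter might look like the following; the dataset, hyperparameters, and output directory are placeholders rather than the notebook's actual settings.

```python
# Hedged SFT-with-LoRA sketch using TRL and PEFT (dataset, hyperparameters,
# and model ID are placeholders; the SFT (TRL) notebook above has the real ones).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

model_id = "LiquidAI/LFM2-350M-Math"                        # assumed repo ID
train_ds = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

peft_config = LoraConfig(       # LoRA adapter configuration
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="lfm2-350m-math-sft-lora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model_id,             # TRL loads the model from the Hub by name
    args=args,
    train_dataset=train_ds,
    peft_config=peft_config,
)
trainer.train()
```

The DPO (TRL), Axolotl, and Unsloth notebooks follow the same overall pattern with their respective trainers and configuration files.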