mlabonne committed
Commit 0487a41 · verified · 1 parent: 3f86a20

Update README.md

Files changed (1):
  1. README.md +10 -0
README.md CHANGED
@@ -136,6 +136,16 @@ LFM2-1.2B-Extract can output complex objects in different languages on a level h
 - llama.cpp: [LFM2-1.2B-Extract-GGUF](https://huggingface.co/LiquidAI/LFM2-1.2B-Extract-GGUF)
 - LEAP: [LEAP model library](https://leap.liquid.ai/models?model=lfm2-1.2b-extract)
 
+You can use the following Colab notebooks for easy inference and fine-tuning:
+
+| Notebook | Description | Link |
+|-------|------|------|
+| Inference | Run the model with Hugging Face's transformers library. | <a href="https://colab.research.google.com/drive/1zmiCLSG3WoyoqvNBXKf2M3gAB3XlV1Uu?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+| SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+| DPO (TRL) | Preference alignment with Direct Preference Optimization (DPO) using TRL. | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+| SFT (Axolotl) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Axolotl. | <a href="https://colab.research.google.com/drive/155lr5-uYsOJmZfO6_QZPjbs8hA_v8S7t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+| SFT (Unsloth) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using Unsloth. | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
+
 ## 📬 Contact
 
 If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
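The "Inference" notebook added above runs the model through Hugging Face's transformers library. As a minimal local sketch of the same flow — the model ID `LiquidAI/LFM2-1.2B-Extract` is taken from the README links, while the dtype, `max_new_tokens`, and the plain user-message prompt format are assumptions, not settings confirmed by this commit:

```python
def build_messages(document: str) -> list[dict]:
    """Wrap a raw input document as a single user turn in the standard
    chat-message format consumed by tokenizer.apply_chat_template()."""
    return [{"role": "user", "content": document}]


def run_extraction(document: str, max_new_tokens: int = 512) -> str:
    """Load LFM2-1.2B-Extract and generate structured output for `document`.
    Heavy path: downloads the model and needs enough RAM/VRAM to run it.
    Hyperparameters here are illustrative assumptions, not official settings."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "LiquidAI/LFM2-1.2B-Extract"  # from the README's deployment links
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")

    # Render the chat template and generate, then decode only the new tokens.
    input_ids = tokenizer.apply_chat_template(
        build_messages(document), add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

The Colab notebook remains the authoritative reference for the intended prompt format and generation settings; this sketch only mirrors the generic transformers chat-template workflow.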