Available in three sizes with full models, LoRA adapters, and quantized GGUF variants.
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0) | [LoRA](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0-LoRA) 📍 | [GGUF](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0-GGUF) |
| **9B** | [Tucan-9B-v1.0](https://huggingface.co/llm-bg/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/llm-bg/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/llm-bg/Tucan-9B-v1.0-GGUF) |
| **27B** | [Tucan-27B-v1.0](https://huggingface.co/llm-bg/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/llm-bg/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/llm-bg/Tucan-27B-v1.0-GGUF) |

*GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
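Every artifact above follows the same `llm-bg/Tucan-<size>-v1.0[-<variant>]` naming pattern, so repo IDs can be derived programmatically. A minimal sketch, assuming that pattern holds (the `tucan_repo` helper is hypothetical, not part of any published package; the commented-out `hf_hub_download` call additionally assumes `huggingface_hub` is installed and that the exact `.gguf` filename is taken from the repo's file listing):

```python
# Hypothetical helper: build a Hugging Face repo ID for a Tucan artifact.
# Sizes and variant suffixes are taken from the table above.
def tucan_repo(size: str, variant: str = "full") -> str:
    """size: '2.6B', '9B', or '27B'; variant: 'full', 'lora', or 'gguf'."""
    suffix = {"full": "", "lora": "-LoRA", "gguf": "-GGUF"}[variant]
    return f"llm-bg/Tucan-{size}-v1.0{suffix}"

print(tucan_repo("9B", "gguf"))  # llm-bg/Tucan-9B-v1.0-GGUF

# To actually fetch a quantized file (requires huggingface_hub; check the
# repo's file listing for the real filename before filling in <file>):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id=tucan_repo("2.6B", "gguf"), filename="<file>.gguf")
```

This keeps one source of truth for the version string, so a future v1.1 release only needs the helper updated in one place.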