s-emanuilov committed
Commit 82b9d94 · verified · 1 Parent(s): b60ec34

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -37,9 +37,9 @@ Available in three sizes with full models, LoRA adapters, and quantized GGUF var
 
 | Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
 |------------|------------|--------------|------------------|
-| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0) 📍| [LoRA](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-2.6B-v1.0-GGUF) |
-| **9B** | [Tucan-9B-v1.0](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-9B-v1.0-GGUF) |
-| **27B** | [Tucan-27B-v1.0](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/s-emanuilov/Tucan-27B-v1.0-GGUF) |
+| **2.6B** | [Tucan-2.6B-v1.0](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0) 📍| [LoRA](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0-LoRA) | [GGUF](https://huggingface.co/llm-bg/Tucan-2.6B-v1.0-GGUF) |
+| **9B** | [Tucan-9B-v1.0](https://huggingface.co/llm-bg/Tucan-9B-v1.0) | [LoRA](https://huggingface.co/llm-bg/Tucan-9B-v1.0-LoRA) | [GGUF](https://huggingface.co/llm-bg/Tucan-9B-v1.0-GGUF) |
+| **27B** | [Tucan-27B-v1.0](https://huggingface.co/llm-bg/Tucan-27B-v1.0) | [LoRA](https://huggingface.co/llm-bg/Tucan-27B-v1.0-LoRA) | [GGUF](https://huggingface.co/llm-bg/Tucan-27B-v1.0-GGUF) |
 
 *GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations*
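
For anyone landing on this commit: the change only moves the repos from the `s-emanuilov` namespace to the `llm-bg` org; the files themselves are unchanged. Below is a minimal sketch of pulling one of the models listed in the updated table. The use of the generic `transformers` auto classes and the exact GGUF filename are assumptions on my part, not something stated in the diff; check the model card and repo file list before relying on them.

```python
# Minimal sketch; repo ids are taken from the updated table (llm-bg org).
# Assumptions: the full-model repo loads with the generic transformers auto
# classes, and the GGUF filename below is hypothetical.
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM, AutoTokenizer

# Full model from the table.
model_id = "llm-bg/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Quick generation smoke test.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Quantized GGUF variant (q4_k_m is one of the listed quantizations).
gguf_path = hf_hub_download(
    repo_id="llm-bg/Tucan-2.6B-v1.0-GGUF",
    filename="tucan-2.6b-v1.0-q4_k_m.gguf",  # hypothetical filename
)
print(gguf_path)
```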