---
license: apache-2.0
datasets:
- Phonepadith/laos-long-content
language:
- lo
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
new_version: Phonepadith/Laollm
pipeline_tag: text-generation
library_name: fastai
---

# 🧠 AIDC LLM for the Lao Language and Content Summarization — Fine-tuned on AIDC-5K (Q8 GGUF)

This is a **Lao language chat model** based on [`Qwen/Qwen1.5-7B`](https://huggingface.co/Qwen/Qwen1.5-7B), fine-tuned on the **AIDC 5K Lao prompt-completion dataset**, and exported to **GGUF format** with **Q8_0 quantization** for fast inference using `llama.cpp`, `LM Studio`, or `Ollama`.
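
Because everything ships as a single Q8_0 GGUF file, the model can also be loaded programmatically. The snippet below is a minimal sketch using the `llama-cpp-python` bindings for `llama.cpp` (one of several compatible runtimes); the file name follows the Model Summary table below, so adjust the path to wherever you saved your download.

```python
# Minimal sketch: chat with the GGUF model via llama-cpp-python
# (pip install llama-cpp-python). The file name is taken from the Model
# Summary table; change model_path to the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./seallm-7b-lao-finetuned-lora-q8.gguf",
    n_ctx=4096,  # context window; lower this if memory is tight
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "ສະບາຍດີ"}],  # "Hello" in Lao
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```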

---

## 🧾 Model Summary

| Feature              | Details                                      |
|----------------------|----------------------------------------------|
| Base Model           | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) |
| Fine-tuned By        | [Phonepadith Phoummavong](https://huggingface.co/Phonepadith) |
| Language             | Lao (lo)                                     |
| Dataset              | AIDC 5K Lao Prompt-Completion Dataset        |
| Quantization         | Q8_0 (8-bit, GGUF)                           |
| Format               | GGUF                                         |
| File Name            | `seallm-7b-lao-finetuned-lora-q8.gguf`       |
| Size (est.)          | ~3–5 GB                                      |
| License              | apache-2.0                                   |

---

## 💡 Use Cases

- Lao chatbots or digital assistants
- Cultural and educational tools in Lao
- Research on low-resource language modeling
- Lao-native prompt generation and dialogue completion

---

## 📥 How to Use


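### 🔸 Download the GGUF file

You can grab the `.gguf` file from this repository's files, or fetch it programmatically. The sketch below uses `huggingface_hub`; the repository ID is a placeholder (replace it with this model's actual repo ID), and the file name follows the Model Summary table.

```python
# Hypothetical download sketch using huggingface_hub
# (pip install huggingface_hub). Replace the repo_id placeholder with
# this model's actual repository ID on the Hugging Face Hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Phonepadith/<this-repo>",                # placeholder
    filename="seallm-7b-lao-finetuned-lora-q8.gguf",  # from the Model Summary
)
print(local_path)  # local path of the downloaded GGUF file
```
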
### 🔸 LM Studio

1. Download the `.gguf` model file.
2. Open **LM Studio**.
3. Click **"Add Local Model"**.
4. Load `aidc-laollm-5k-based-qwen-Q8.gguf`.

### 🔸 Ollama

1. Place the downloaded `.gguf` file in a working directory, next to the `Modelfile` you create in the next step.
2. Create a `Modelfile` that points `FROM` at the local GGUF file (the file is already Q8_0-quantized, so no extra quantization parameter is needed):
   ```Dockerfile
   # Reference the local GGUF file by name (use the file you downloaded)
   FROM ./seallm-7b-lao-finetuned-lora-q8.gguf
   ```
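
3. Register the model with Ollama and start chatting (the model name `laollm-lao-q8` here is just an example): run `ollama create laollm-lao-q8 -f Modelfile`, then `ollama run laollm-lao-q8`.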