---
base_model: Qwen/Qwen1.5-7B
new_version: Phonepadith/Laollm
pipeline_tag: text-generation
library_name: fastai
---

# 🧠 SEALLM-7B Lao Chat — Fine-tuned on AIDC-5K (Q8 GGUF)

This is a **Lao language chat model** based on [`Qwen/Qwen1.5-7B`](https://huggingface.co/Qwen/Qwen1.5-7B), fine-tuned on the **AIDC 5K Lao prompt-completion dataset**, and exported to **GGUF format** with **Q8_0 quantization** for fast inference using `llama.cpp`, `LM Studio`, or `Ollama`.
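
For a quick local test with the `llama.cpp` ecosystem, here is a minimal sketch using the third-party `llama-cpp-python` bindings. The file path and generation settings are assumptions; adjust them to wherever you saved the model.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumed local path: point this at your downloaded GGUF file.
llm = Llama(
    model_path="./seallm-7b-lao-finetuned-lora-q8.gguf",
    n_ctx=2048,  # context window; raise it if you have the RAM
)

# One Lao chat turn: "Hello! Who are you?"
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "ສະບາຍດີ! ເຈົ້າແມ່ນໃຜ?"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```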

---

## 🧾 Model Summary

| Feature       | Details |
|---------------|---------|
| Base Model    | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) |
| Fine-tuned By | [Phonepadith Phoummavong](https://huggingface.co/Phonepadith) |
| Language      | Lao (`lo`) |
| Dataset       | AIDC 5K Lao Prompt-Completion Dataset |
| Quantization  | Q8_0 (8-bit, GGUF) |
| Format        | GGUF |
| File Name     | `seallm-7b-lao-finetuned-lora-q8.gguf` |
| Size (est.)   | ~7–8 GB |
| License       | apache-2.0 |

Q8_0 stores roughly one byte per weight, so a 7B-parameter model produces a file of about 7–8 GB.

---

## 💡 Use Cases

- Lao chatbots or digital assistants
- Cultural and educational tools in Lao
- Research on low-resource language modeling
- Lao-native prompt generation and dialogue completion

---

## 📥 How to Use

### 🔸 LM Studio

1. Download the `.gguf` model file.
2. Open **LM Studio**.
3. Click **"Add Local Model"**.
4. Load `seallm-7b-lao-finetuned-lora-q8.gguf`.

### 🔸 Ollama

1. Place the `.gguf` file in the same directory as your `Modelfile`.
2. Create a `Modelfile` that points at it:

```Dockerfile
# Point at the local GGUF file; Q8_0 quantization is already baked in.
FROM ./seallm-7b-lao-finetuned-lora-q8.gguf
```
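
3. Register the model and start chatting (the model name `seallm-lao` is just an example): run `ollama create seallm-lao -f Modelfile`, then `ollama run seallm-lao`.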