Update README.md

README.md CHANGED
@@ -12,14 +12,18 @@ language:
 - en
 pipeline_tag: text-generation
 ---
-
-Image here
+
+[image]
 
 # MoLA-LM: Mixture of LoRA Adapters LLM
 
 MoLA-LM combines multiple LoRA adapters with an intelligent router to automatically select the best adapter for each input prompt. This approach enables specialized performance across different tasks while maintaining efficiency.
 
-
+[**Click for evals**](https://github.com/alkinun/MoLA/blob/main/README.md)
+
+**Important Note**: *v0.5 had issues in the LoRA-applying part of the custom LM class, and its router was a bit too small to generalize well.
+In v0.6 and future models, all of these issues are/will be resolved.*
+
+**TLDR:** *Don't use v0.5; use v0.6 and above.*
 
 ## Model Details
 
@@ -32,7 +36,6 @@ Evals are coming...
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-
 # Load the model (trust_remote_code=True is required for custom architecture)
 model = AutoModelForCausalLM.from_pretrained(
     "MoLA-LLM/MoLA-v0.6-9x4b",
@@ -40,7 +43,6 @@ model = AutoModelForCausalLM.from_pretrained(
     device_map="auto"
 )
 tokenizer = AutoTokenizer.from_pretrained("MoLA-LLM/MoLA-v0.6-9x4b", trust_remote_code=True)
-
 # Use like any other language model - adapter selection is automatic
 prompt = "Write a Python function to calculate fibonacci numbers"
 messages = [{"role": "user", "content": prompt}]
@@ -51,10 +53,8 @@ inputs = tokenizer.apply_chat_template(
     return_dict=True,
     return_tensors="pt",
 ).to(model.device)
-
 outputs = model.generate(**inputs, max_new_tokens=8192, temperature=.6, do_sample=True)
 response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
-
 print(f"Selected LoRA: {model.get_current_lora()}")
 print(response)
 ```
@@ -65,7 +65,7 @@ print(response)
 The MoLA-LM architecture consists of:
 
 1. **Base Model**: Qwen/Qwen3-4B-Thinking-2507
-2. **Router Network**: Frozen encoder as Sentence transformer + decoder as
+2. **Router Network**: Frozen encoder as Sentence transformer + decoder as MLP for adapter selection
 3. **LoRA Adapters**: 9 task-specific fine-tuned adapters
 4. **Dynamic Switching**: Automatic adapter application based on input
 
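The router described in items 2 and 4 of that list can be pictured with a minimal sketch: a frozen sentence-transformer encoder embeds the prompt, and a small MLP decoder scores the nine adapters. Everything here (encoder choice, hidden size, adapter names) is an assumption for illustration, not the model's actual implementation, which ships with the checkpoint via `trust_remote_code`:

```python
# Minimal sketch of the router, NOT MoLA-LM's actual code; the encoder choice,
# hidden size, and adapter names are all assumptions for illustration.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

ADAPTER_NAMES = [f"task_{i}" for i in range(9)]  # hypothetical names for the 9 adapters

class RouterSketch(nn.Module):
    def __init__(self, encoder_name="all-MiniLM-L6-v2", num_adapters=9):
        super().__init__()
        self.encoder = SentenceTransformer(encoder_name)  # frozen sentence-transformer encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        dim = self.encoder.get_sentence_embedding_dimension()
        # small MLP decoder: prompt embedding -> one logit per adapter
        self.mlp = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, num_adapters))

    @torch.no_grad()
    def select(self, prompt: str) -> str:
        emb = torch.from_numpy(self.encoder.encode(prompt))  # embed the prompt
        return ADAPTER_NAMES[int(self.mlp(emb).argmax())]    # highest-scoring adapter wins
```

Only the MLP decoder would be trained on prompt/best-adapter pairs; the encoder stays frozen, which keeps routing cheap relative to generation.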
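Item 4, dynamic switching, then amounts to applying the selected LoRA before each generation. Below is a sketch under the assumption that the adapters are ordinary PEFT LoRAs; the paths and names are hypothetical, and the released model hides this step inside its custom class, so end users only call `generate`:

```python
# Hypothetical switching loop using standard PEFT APIs; MoLA-LM performs this
# internally, so users never call set_adapter themselves.
from peft import PeftModel

# base_model: the loaded Qwen/Qwen3-4B-Thinking-2507; adapter paths are made up.
model = PeftModel.from_pretrained(base_model, "adapters/task_0", adapter_name="task_0")
for name in ADAPTER_NAMES[1:]:
    model.load_adapter(f"adapters/{name}", adapter_name=name)

router = RouterSketch()
model.set_adapter(router.select(prompt))  # route once per prompt, then generate as usual
outputs = model.generate(**inputs, max_new_tokens=8192, temperature=0.6, do_sample=True)
```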