Update README.md
README.md
CHANGED
@@ -9,6 +9,8 @@ language:



+Update 250114: There is now a new bellman with mistral-7b-instruct-v3: https://huggingface.co/neph1/bellman-mistral-7b-instruct-v0.3
+
 Updated 240413: Dataset: 14002 rows. Rank: 64/128. Increased diversity of the instruct dataset. 4k context length training
 A light DPO pass to 'align' the model and make it less prone to saying untrue things. Ref: https://huggingface.co/datasets/neph1/truthy-dpo-v0.1-swe

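For reference, a minimal sketch of trying the new checkpoint with transformers. The model id is taken from the link in the diff above; everything else (fp16, device placement, the Swedish test prompt, the token budget) is illustrative and assumes the repo is published as a standard causal LM with a chat template:

```python
# Minimal usage sketch, not the model card's official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neph1/bellman-mistral-7b-instruct-v0.3"  # from the link above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision is enough for inference
    device_map="auto",
)

# Swedish prompt, since bellman is a Swedish instruct model.
messages = [{"role": "user", "content": "Vad är huvudstaden i Sverige?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stay well within the 4k context length mentioned in the changelog.
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```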