Update README.md
README.md
CHANGED
@@ -45,7 +45,8 @@ LoRa Finetuning 1000 steps of MNTP on cleaned Danish Wikipedia https://huggingfa
 LoRa Finetuning ~1000 steps of Supervised Contrastive learning on this dataset: https://huggingface.co/datasets/jealk/supervised-da

 Credits for code-repo used to finetune this model https://github.com/McGill-NLP/llm2vec .
-
+
+Thanks to **Arrow Denmark** and **Nvidia** for sponsoring the compute used to train this model.

 Requires the llm2vec package to encode sentences. Credits to https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised for the below instructions:

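As a hedged illustration (not part of the diff above), encoding sentences with the llm2vec package typically looks like the sketch below. The checkpoint ids, loading arguments, and Danish example sentences are assumptions drawn from the linked McGill-NLP model card, not from this README; substitute this model's own repository ids when following the actual instructions.

```python
# Minimal sketch of encoding sentences with llm2vec.
# The repository ids below are placeholders taken from the linked
# McGill-NLP model card; replace them with this model's base MNTP
# checkpoint and supervised LoRA weights.
import torch
from llm2vec import LLM2Vec

l2v = LLM2Vec.from_pretrained(
    "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",  # base MNTP checkpoint (placeholder)
    peft_model_name_or_path="McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised",  # LoRA weights (placeholder)
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# Encode a couple of Danish sentences into dense embeddings.
sentences = [
    "Hvad er hovedstaden i Danmark?",
    "København er Danmarks hovedstad.",
]
embeddings = l2v.encode(sentences)
print(embeddings.shape)  # (2, hidden_size)
```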