jealk committed
Commit d0596a3 · verified · parent 35b89e4

Added NVIDIA & Arrow credits

Files changed (1):
README.md +2 -1
README.md CHANGED
@@ -44,7 +44,8 @@ Trained by using the approach outlined in the paper **LLM2Vec: Large Language Mo
 LoRa Finetuning 1000 steps of MNTP on cleaned Danish Wikipedia https://huggingface.co/datasets/jealk/wiki40b-da-clean
 LoRa Finetuning ~1000 steps of Supervised Contrastive learniing on this dataset: https://huggingface.co/datasets/jealk/supervised-da
 
-Credits for code-repo used to finetune this model https://github.com/McGill-NLP/llm2vec
+Credits for code-repo used to finetune this model https://github.com/McGill-NLP/llm2vec .
+Thanks to Arrow Denmark and Nvidia for sponsoring the compute used to train this model.
 
 Requires the llm2vec package to encode sentences. Credits to https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised for the below instructions: