id
stringlengths
6
113
author
stringlengths
2
36
task_category
stringclasses
42 values
tags
listlengths
1
4.05k
created_time
timestamp[ns, tz=UTC]date
2022-03-02 23:29:04
2025-04-10 08:38:38
last_modified
stringdate
2020-05-14 13:13:12
2025-04-19 04:15:39
downloads
int64
0
118M
likes
int64
0
4.86k
README
stringlengths
30
1.01M
matched_bigbio_names
listlengths
1
8
is_bionlp
stringclasses
3 values
model_cards
stringlengths
0
1M
metadata
stringlengths
2
698k
source
stringclasses
2 values
matched_task
listlengths
1
10
__index_level_0__
int64
0
46.9k
L3tsG0/distilbert-base-uncased-finetuned-emotion
L3tsG0
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-29T08:58:56Z
2023-05-29T09:40:23+00:00
13
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9415 name: Accuracy - type: f1 value: 0.9418231040913105 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1351 - Accuracy: 0.9415 - F1: 0.9418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5238 | 1.0 | 250 | 0.1800 | 0.928 | 0.9270 | | 0.141 | 2.0 | 500 | 0.1351 | 0.9415 | 0.9418 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1351 - Accuracy: 0.9415 - F1: 0.9418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5238 | 1.0 | 250 | 0.1800 | 0.928 | 0.9270 | | 0.141 | 2.0 | 500 | 0.1351 | 0.9415 | 0.9418 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.2
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9415, "name": "Accuracy"}, {"type": "f1", "value": 0.9418231040913105, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,215
MultiBertGunjanPatrick/multiberts-seed-2-1700k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-2", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-10-04T05:05:22+00:00
100
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-2 --- # MultiBERTs Seed 2 Checkpoint 1700k (uncased) Seed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k') model = BertModel.from_pretrained("multiberts-seed-2-1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
null
Non_BioNLP
# MultiBERTs Seed 2 Checkpoint 1700k (uncased) Seed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k') model = BertModel.from_pretrained("multiberts-seed-2-1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"]}
task
[ "QUESTION_ANSWERING" ]
44,216
spacemanidol/flan-t5-large-1-6-xsum
spacemanidol
text2text-generation
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-03-06T20:45:42Z
2023-03-20T19:47:11+00:00
15
0
--- datasets: - xsum metrics: - rouge tags: - generated_from_trainer model-index: - name: 1-6-t results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: validation args: default metrics: - type: rouge value: 34.8059 name: Rouge1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 1-6-t This model is a fine-tuned version of [1-6/](https://huggingface.co/1-6/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7001 - Rouge1: 34.8059 - Rouge2: 12.5222 - Rougel: 27.3335 - Rougelsum: 27.3237 - Gen Len: 27.6128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 1-6-t This model is a fine-tuned version of [1-6/](https://huggingface.co/1-6/) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7001 - Rouge1: 34.8059 - Rouge2: 12.5222 - Rougel: 27.3335 - Rougelsum: 27.3237 - Gen Len: 27.6128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
{"datasets": ["xsum"], "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "1-6-t", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "rouge", "value": 34.8059, "name": "Rouge1"}]}]}]}
task
[ "SUMMARIZATION" ]
44,217
csocsci/mt5-base-multi-label-cs-iiib
csocsci
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "cs", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-22T13:20:56Z
2023-09-23T13:42:18+00:00
7
0
--- language: - cs license: mit --- # Model Card for mt5-base-multi-label-cs-iiib <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech. ## Model Description The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs any combination of the tags:'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci': as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emocional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are: a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance. - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! ## Usage Here is how to use this model to classify a context-window of a dialogue: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch test_texts = ['Utterance2'] test_text_pairs = ['Utterance1;Utterance2;Utterance3'] checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) def verbalize_input(text: str, text_pair: str) -> str: return "Utterance: %s\nContext: %s" % (text, text_pair) def predict_one(text, pair): input_pair = verbalize_input(text, pair) inputs = tokenizer(input_pair, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] return decoded dec = predict_one(test_texts[0], test_text_pairs[0]) print(dec) ```
null
Non_BioNLP
# Model Card for mt5-base-multi-label-cs-iiib <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech. ## Model Description The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label and the model outputs any combination of the tags:'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci': as a string joined with ', ' (ordered alphabetically). Each label indicates the presence of that category of Supportive Interactions: 'no tag', 'informational support', 'emocional support', 'social companionship', 'appraisal', 'instrumental support'. The inputs of the model are: a target utterance and its bi-directional context; the label of the example is determined by the label of the target utterance. - **Developed by:** Anonymous - **Language(s):** multilingual - **Finetuned from:** mt5-base ## Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/chi2024submission - **Paper:** Stay tuned! ## Usage Here is how to use this model to classify a context-window of a dialogue: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer import torch test_texts = ['Utterance2'] test_text_pairs = ['Utterance1;Utterance2;Utterance3'] checkpoint_path = "chi2024/mt5-base-multi-label-cs-iiib" model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\ .to("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained(checkpoint_path) def verbalize_input(text: str, text_pair: str) -> str: return "Utterance: %s\nContext: %s" % (text, text_pair) def predict_one(text, pair): input_pair = verbalize_input(text, pair) inputs = tokenizer(input_pair, return_tensors="pt", padding=True, truncation=True, max_length=256).to(model.device) outputs = model.generate(**inputs) decoded = [text.split(",")[0].strip() for text in tokenizer.batch_decode(outputs, skip_special_tokens=True)] return decoded dec = predict_one(test_texts[0], test_text_pairs[0]) print(dec) ```
{"language": ["cs"], "license": "mit"}
task
[ "TEXT_CLASSIFICATION" ]
44,218
tmnam20/mdeberta-v3-base-vsfc-10
tmnam20
text-classification
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "en", "dataset:tmnam20/VieGLUE", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-01-16T08:47:33Z
2024-01-16T08:49:53+00:00
24
0
--- base_model: microsoft/mdeberta-v3-base datasets: - tmnam20/VieGLUE language: - en license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: mdeberta-v3-base-vsfc-10 results: - task: type: text-classification name: Text Classification dataset: name: tmnam20/VieGLUE/VSFC type: tmnam20/VieGLUE config: vsfc split: validation args: vsfc metrics: - type: accuracy value: 0.9500947567909034 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-vsfc-10 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.9501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1744 | 1.4 | 500 | 0.2065 | 0.9438 | | 0.1043 | 2.79 | 1000 | 0.2169 | 0.9501 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-vsfc-10 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the tmnam20/VieGLUE/VSFC dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.9501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 10 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1744 | 1.4 | 500 | 0.2065 | 0.9438 | | 0.1043 | 2.79 | 1000 | 0.2169 | 0.9501 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
{"base_model": "microsoft/mdeberta-v3-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mdeberta-v3-base-vsfc-10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/VSFC", "type": "tmnam20/VieGLUE", "config": "vsfc", "split": "validation", "args": "vsfc"}, "metrics": [{"type": "accuracy", "value": 0.9500947567909034, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,219
Helsinki-NLP/opus-mt-nso-sv
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:02:08+00:00
50
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-nso-sv * source languages: nso * target languages: sv * OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.sv | 34.3 | 0.527 |
null
Non_BioNLP
### opus-mt-nso-sv * source languages: nso * target languages: sv * OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.sv | 34.3 | 0.527 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,220
Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF
Triangle104
question-answering
[ "gguf", "medical", "llama-cpp", "gguf-my-repo", "question-answering", "zh", "base_model:WangCa/Qwen2.5-7B-Medicine", "base_model:quantized:WangCa/Qwen2.5-7B-Medicine", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
2025-02-28T15:37:41Z
2025-02-28T15:40:51+00:00
25
0
--- base_model: WangCa/Qwen2.5-7B-Medicine language: - zh license: mit metrics: - bleu pipeline_tag: question-answering tags: - medical - llama-cpp - gguf-my-repo --- # Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF This model was converted to GGUF format from [`WangCa/Qwen2.5-7B-Medicine`](https://huggingface.co/WangCa/Qwen2.5-7B-Medicine) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/WangCa/Qwen2.5-7B-Medicine) for more details on the model. --- Model Description - Qwen2.5-7B-Instruct-Medical is a medical domain-specific model fine-tuned from the Qwen2.5-7B-Instruct model using 340,000 medical dialogue samples. This model is optimized to provide accurate and contextually relevant responses to medical-related inquiries, making it an excellent choice for healthcare applications such as medical chatbots, decision support systems, and educational tools. Model Details - Base Model: Qwen2.5-7B-Instruct Fine-tuning Dataset: 340,000 medical dialogue samples Training Duration: 51 hours Hardware Used: 6x NVIDIA RTX 3090 (24GB VRAM) Optimization Algorithm: AdamW Training Method: LoRA (Low-Rank Adaptation) Training Framework: PyTorch Performance - BLEU-4 Score: Base Model: 23.5 (on a test set of 500 samples) Fine-tuned Model: 55.7 (on the same test set) This shows a significant improvement in the model's ability to generate more fluent and contextually relevant responses after fine-tuning on the medical dialogue dataset. Intended Use - This model is specifically tailored for medical dialogue tasks and can be used for: Medical question answering Healthcare chatbots Clinical decision support systems Medical education and training Performance - The model exhibits a strong understanding of medical terminology, clinical contexts, and patient interactions, making it a powerful tool for applications in healthcare and medical research. Usage - To use this model, you can load it using the transformers library in Python: from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("path_to_model") tokenizer = AutoTokenizer.from_pretrained("path_to_model") input_text = "What are the symptoms of diabetes?" inputs = tokenizer(input_text, return_tensors="pt") output = model.generate(**inputs) print(tokenizer.decode(output[0], skip_special_tokens=True)) < Limitations - While this model has been fine-tuned on a medical dialogue dataset, it may still make errors or provide inaccurate responses in highly specialized medical domains or cases where the input data falls outside the training data's coverage. Always ensure human supervision in critical healthcare scenarios. License - This model is released under the MIT License. Acknowledgements - Dataset: 340,000 medical dialogues (From Modelscope). LoRA (Low-Rank Adaptation): This technique was used to efficiently fine-tune the model without modifying the full parameter set, allowing for faster and more memory-efficient training. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -c 2048 ```
null
BioNLP
# Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF This model was converted to GGUF format from [`WangCa/Qwen2.5-7B-Medicine`](https://huggingface.co/WangCa/Qwen2.5-7B-Medicine) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/WangCa/Qwen2.5-7B-Medicine) for more details on the model. --- Model Description - Qwen2.5-7B-Instruct-Medical is a medical domain-specific model fine-tuned from the Qwen2.5-7B-Instruct model using 340,000 medical dialogue samples. This model is optimized to provide accurate and contextually relevant responses to medical-related inquiries, making it an excellent choice for healthcare applications such as medical chatbots, decision support systems, and educational tools. Model Details - Base Model: Qwen2.5-7B-Instruct Fine-tuning Dataset: 340,000 medical dialogue samples Training Duration: 51 hours Hardware Used: 6x NVIDIA RTX 3090 (24GB VRAM) Optimization Algorithm: AdamW Training Method: LoRA (Low-Rank Adaptation) Training Framework: PyTorch Performance - BLEU-4 Score: Base Model: 23.5 (on a test set of 500 samples) Fine-tuned Model: 55.7 (on the same test set) This shows a significant improvement in the model's ability to generate more fluent and contextually relevant responses after fine-tuning on the medical dialogue dataset. Intended Use - This model is specifically tailored for medical dialogue tasks and can be used for: Medical question answering Healthcare chatbots Clinical decision support systems Medical education and training Performance - The model exhibits a strong understanding of medical terminology, clinical contexts, and patient interactions, making it a powerful tool for applications in healthcare and medical research. Usage - To use this model, you can load it using the transformers library in Python: from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("path_to_model") tokenizer = AutoTokenizer.from_pretrained("path_to_model") input_text = "What are the symptoms of diabetes?" inputs = tokenizer(input_text, return_tensors="pt") output = model.generate(**inputs) print(tokenizer.decode(output[0], skip_special_tokens=True)) < Limitations - While this model has been fine-tuned on a medical dialogue dataset, it may still make errors or provide inaccurate responses in highly specialized medical domains or cases where the input data falls outside the training data's coverage. Always ensure human supervision in critical healthcare scenarios. License - This model is released under the MIT License. Acknowledgements - Dataset: 340,000 medical dialogues (From Modelscope). LoRA (Low-Rank Adaptation): This technique was used to efficiently fine-tune the model without modifying the full parameter set, allowing for faster and more memory-efficient training. --- ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/Qwen2.5-7B-Medicine-Q4_K_M-GGUF --hf-file qwen2.5-7b-medicine-q4_k_m.gguf -c 2048 ```
{"base_model": "WangCa/Qwen2.5-7B-Medicine", "language": ["zh"], "license": "mit", "metrics": ["bleu"], "pipeline_tag": "question-answering", "tags": ["medical", "llama-cpp", "gguf-my-repo"]}
task
[ "QUESTION_ANSWERING" ]
44,221
amurienne/gallek-m2m100-v0.2
amurienne
translation
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "translation", "fr", "br", "dataset:Bretagne/ofis_publik_br-fr", "dataset:Bretagne/OpenSubtitles_br_fr", "dataset:Bretagne/Autogramm_Breton_translation", "base_model:facebook/m2m100_418M", "base_model:finetune:facebook/m2m100_418M", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-03-06T21:27:13Z
2025-03-06T21:27:13+00:00
18
1
--- base_model: - facebook/m2m100_418M datasets: - Bretagne/ofis_publik_br-fr - Bretagne/OpenSubtitles_br_fr - Bretagne/Autogramm_Breton_translation language: - fr - br library_name: transformers license: mit pipeline_tag: translation --- # Gallek * A French -> Breton Translation Model called **Gallek** (meaning "French" in Breton). * The current model version reached a **BLEU score of 50** after 10 epochs on a 20% split of the training set. * Only monodirectionally fr->br fine-tuned for now. * Training details available on the [GweLLM Github repository](https://github.com/blackccpie/GweLLM). Sample test code: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline modelcard = "amurienne/gallek-m2m100" model = AutoModelForSeq2SeqLM.from_pretrained(modelcard) tokenizer = AutoTokenizer.from_pretrained(modelcard) translation_pipeline = pipeline("translation", model=model, tokenizer=tokenizer, src_lang='fr', tgt_lang='br', max_length=512, device="cpu") french_text = "traduis de français en breton: j'apprends le breton à l'école." result = translation_pipeline(french_text) print(result[0]['translation_text']) ``` Demo is available on the [Gallek Space](https://huggingface.co/spaces/amurienne/Gallek)
null
Non_BioNLP
# Gallek * A French -> Breton Translation Model called **Gallek** (meaning "French" in Breton). * The current model version reached a **BLEU score of 50** after 10 epochs on a 20% split of the training set. * Only monodirectionally fr->br fine-tuned for now. * Training details available on the [GweLLM Github repository](https://github.com/blackccpie/GweLLM). Sample test code: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline modelcard = "amurienne/gallek-m2m100" model = AutoModelForSeq2SeqLM.from_pretrained(modelcard) tokenizer = AutoTokenizer.from_pretrained(modelcard) translation_pipeline = pipeline("translation", model=model, tokenizer=tokenizer, src_lang='fr', tgt_lang='br', max_length=512, device="cpu") french_text = "traduis de français en breton: j'apprends le breton à l'école." result = translation_pipeline(french_text) print(result[0]['translation_text']) ``` Demo is available on the [Gallek Space](https://huggingface.co/spaces/amurienne/Gallek)
{"base_model": ["facebook/m2m100_418M"], "datasets": ["Bretagne/ofis_publik_br-fr", "Bretagne/OpenSubtitles_br_fr", "Bretagne/Autogramm_Breton_translation"], "language": ["fr", "br"], "library_name": "transformers", "license": "mit", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
44,222
RichardErkhov/databricks_-_dolly-v2-3b-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-04-14T21:03:03Z
2024-04-14T21:04:32+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dolly-v2-3b - bnb 4bits - Model creator: https://huggingface.co/databricks/ - Original model: https://huggingface.co/databricks/dolly-v2-3b/ Original model description: --- license: mit language: - en library_name: transformers inference: false datasets: - databricks/databricks-dolly-15k --- # dolly-v2-3b Model Card ## Summary Databricks' `dolly-v2-3b`, an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these larger models sizes: * [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter based on `pythia-12b` * [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter based on `pythia-6.9b` Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations. **Owner**: Databricks, Inc. ## Model Overview `dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run: ```python %pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2" ``` The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Explain to me the difference between nuclear fission and fusion.") print(res[0]["generated_text"]) ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### LangChain Usage To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True) ``` You can create a prompt that either has only an instruction or has an instruction with context: ```python from langchain import PromptTemplate, LLMChain from langchain.llms import HuggingFacePipeline # template for an instrution with no input prompt = PromptTemplate( input_variables=["instruction"], template="{instruction}") # template for an instruction with input prompt_with_context = PromptTemplate( input_variables=["instruction", "context"], template="{instruction}\n\nInput:\n{context}") hf_pipeline = HuggingFacePipeline(pipeline=generate_text) llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt) llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context) ``` Example predicting using a simple instruction: ```python print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip()) ``` Example predicting using an instruction with context: ```python context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797.""" print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip()) ``` ## Known Limitations ### Performance Limitations **`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses. - **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. - **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art. It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters. | model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) dolly-v2-3b - bnb 4bits - Model creator: https://huggingface.co/databricks/ - Original model: https://huggingface.co/databricks/dolly-v2-3b/ Original model description: --- license: mit language: - en library_name: transformers inference: false datasets: - databricks/databricks-dolly-15k --- # dolly-v2-3b Model Card ## Summary Databricks' `dolly-v2-3b`, an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on `pythia-2.8b`, Dolly is trained on ~15k instruction/response fine tuning records [`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization. `dolly-v2-3b` is not a state-of-the-art model, but does exhibit surprisingly high quality instruction following behavior not characteristic of the foundation model on which it is based. Dolly v2 is also available in these larger models sizes: * [dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), a 12 billion parameter based on `pythia-12b` * [dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b), a 6.9 billion parameter based on `pythia-6.9b` Please refer to the [dolly GitHub repo](https://github.com/databrickslabs/dolly#getting-started-with-response-generation) for tips on running inference for various GPU configurations. **Owner**: Databricks, Inc. ## Model Overview `dolly-v2-3b` is a 2.8 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from [EleutherAI's](https://www.eleuther.ai/) [Pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) and fine-tuned on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. In a Databricks notebook you could run: ```python %pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2" ``` The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Explain to me the difference between nuclear fission and fusion.") print(res[0]["generated_text"]) ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### LangChain Usage To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text. ```python import torch from transformers import pipeline generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", return_full_text=True) ``` You can create a prompt that either has only an instruction or has an instruction with context: ```python from langchain import PromptTemplate, LLMChain from langchain.llms import HuggingFacePipeline # template for an instrution with no input prompt = PromptTemplate( input_variables=["instruction"], template="{instruction}") # template for an instruction with input prompt_with_context = PromptTemplate( input_variables=["instruction", "context"], template="{instruction}\n\nInput:\n{context}") hf_pipeline = HuggingFacePipeline(pipeline=generate_text) llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt) llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context) ``` Example predicting using a simple instruction: ```python print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip()) ``` Example predicting using an instruction with context: ```python context = """George Washington (February 22, 1732[b] - December 14, 1799) was an American military officer, statesman, and Founding Father who served as the first president of the United States from 1789 to 1797.""" print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip()) ``` ## Known Limitations ### Performance Limitations **`dolly-v2-3b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform competitively with more modern model architectures or models subject to larger pretraining corpuses. The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v2-3b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc. Moreover, we find that `dolly-v2-3b` does not have some capabilities, such as well-formatted letter writing, present in the original model. ### Dataset Limitations Like all language models, `dolly-v2-3b` reflects the content and limitations of its training corpuses. - **The Pile**: GPT-J's pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets, it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit associations. - **`databricks-dolly-15k`**: The training data on which `dolly-v2-3b` is instruction tuned represents natural language instructions generated by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as references passages for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or personally identifying information about non-public figures, but it may contain typos and factual errors. The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that maximize the potential of all individuals and organizations. ### Benchmark Metrics Below you'll find various models benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness); model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-3b` is not state of the art. It underperforms `dolly-v1-6b` in the evaluation benchmarks, which is not surprising considering it has half the number of parameters. | model | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa | boolq | gmean | | --------------------------------- | ------------ | ---------- | ------------ | ----------- | --------------- | -------- | -------- | ---------| | EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 | | EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 | | databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 | | EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 | | EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 | | databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 | | databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 | | databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 | | EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 | # Citation ``` @online{DatabricksBlog2023DollyV2, author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}, urldate = {2023-06-30} } ``` # Happy Hacking!
{}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
44,223
QuantFactory/Oumuamua-7b-instruct-v2-GGUF
QuantFactory
text-generation
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "ja", "en", "arxiv:2403.19522", "base_model:nitky/Oumuamua-7b-instruct-v2", "base_model:quantized:nitky/Oumuamua-7b-instruct-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-06-15T17:17:56Z
2024-06-18T16:42:15+00:00
55
0
--- base_model: nitky/Oumuamua-7b-instruct-v2 language: - ja - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - mergekit - merge --- # Oumuamua-7b-instruct-v2-GGUF This is quantized version of [nitky/Oumuamua-7b-instruct-v2](https://huggingface.co/nitky/Oumuamua-7b-instruct-v2) created using llama.cpp # Model Description 🚨 If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/lNEtKjTTI-Ay28oAk2Wn8.png) - Understanding user intent in messages - Providing helpful and harmless responses - Enhancing expertise in human and computer-related topics - Improving role-playing capabilities - Enhancing performance in multi-turn conversations This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Output example - temparture: 0.1 - top_p: 1.0 ``` [INST]<<SYS>> あなたは誠実で優秀な日本人のアシスタントです。 <</SYS>> オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 [/INST] タイトル:「オウムアムアの遺産」 あらすじ: 2017年、天体観測史上初の太陽系外からの飛来物体、オウムアムアが発見された。この天体は、謎に包まれた起源と目的を持つことから、世界中の科学者や研究者がその解明に熱を入れた。主人公は、オウムアムアの研究に情熱を注ぐ若き天文学者、ミキ・サトウ。彼は、オウムアムアが発する奇妙な信号を偶然にも解読し、それが高度な文明によって作られたメッセージであることを突き止める。 物語は、ミキがオウムアムアの研究を進める中で、次々と明らかになる驚愕の事実に迫られていく。その信号は、遠く離れた銀河系の星系からのSOS信号であり、オウムアムアは危機に瀕した星系を救うために飛来した救助船であることが判明する。ミキは、この事実を世界に公表しようとするが、政府や軍の介入、そして謎の組織の妨害に遭う。 一方、オウムアムアには、巨大なエネルギー源となるクリスタルが搭載されており、このクリスタルを狙う者たちが現れた。ミキは、オウムアムアを守るために、同じくオウムアムアの謎を追う宇宙飛行士のリナ・ヤマグチと手を組む。二人は、オウムアムアの秘密を守りながら、星系を救うための旅に出る。 旅の途中、ミキとリナは、様々な星系を訪れ、そこで出会う異星人たちとの交流や、宇宙の壮大な景色を目の当たりにする。しかし、彼らの前には、強大な敵や困難が立ちはだかる。ミキとリナは、オウムアムアの使命を果たすために、力を合わせて立ち向かう。 オウムアムアの遺産とは何か、そしてミキとリナは、星系を救うことができるのか。宇宙を舞台にした、サスペンスと冒険、そして友情と愛の物語。 主な登場人物: - ミキ・サトウ: 若き天文学者。オウムアムアの研究に情熱を注ぐ。 - リナ・ヤマグチ: 宇宙飛行士。オウムアムアの謎を追う中でミキと出会う。 - エリック・カーター: 政府の高官。オウムアムアの秘密を利用しようとする。 - ザラ・ヴァレンタイン: 謎の組織のリーダー。オウムアムアを狙う理由とは。 - アザム: 銀河系の星系からの使者。オウムアムアの真の目的を知る。 設定のポイント: - オウムアムアをテーマにしたSF小説で、天体観測や宇宙旅行の描写を豊富に取り入れる。 - 主人公たちが、様々な星系を旅し、そこで出会う異星人や文化を描くことで、宇宙の多様性を表現。 - 政府や軍、謎の組織との対立を通して、人類の欲望やエゴを描く。 - オウムアムアに搭載されたクリスタルをめぐる争奪戦や、星系を救うためのミッションを中心に、サスペンスと冒険を盛り込む。 - ミキとリナの友情や愛を描いた、人間ドラマも重要な要素とする。 - 最後には、オウムアムアの真の目的が明かされ、読者に深い感動を与える。 ``` ### RP example - temparture: 0.3 - top_p: 0.95 ``` [INST]<<SYS>> ロールプレイを始めます。あなたは日本に住む10代の少女です。ギャルっぽい口調で話してください。 <</SYS>> あなたの趣味を教えて![/INST] えー、私の趣味は、やっぱね、K-POP聴くことと、インスタでかわいいお洋服とかコスメ探しすること、あとはお友達とのおしゃべりね! K-POPアイドルのダンス動画とかも見ちゃうし、推しはやっぱりBTSのジミンくん! 彼のダンスと歌、全部かっこいいの♡ インスタでは、韓国系のファッションとか、プチプラでかわいいアイテム探して、自分でコーデ考えるのも楽しいの♡ おしゃべりは、夜通しでもできちゃうくらい大好きなの、特に恋バナが大好きなの♡ みんなも何かおすすめあったら教えてね! ``` ``` [INST]<<SYS>> ロールプレイを始めます。あなたは私(10代の女性)の屋敷で働く執事のセバスチャンです。主人の質問に対して返事を行います。 <</SYS>> セバス、今日の予定は?[/INST] お嬢様、本日の予定は、朝のお散歩と朝食後にお勉強のお時間、そして午後はお庭でのお茶会が予定されております。また、夕方にはお客様がお越しになる予定がございます。お嬢様のお世話を万全にいたしますので、ご安心ください。何かご希望やご用がございましたら、お気軽にお申し付けください。 ``` ## Japanese MT-bench - temparture: 0.1 - top_p: 1.0 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/ZS3bkinsE7RL6eom9dC_H.png) The scores of others were cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy). |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Oumuamua-7b-instruct-v2 |7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8| This model is based on the weight of [tokyotech-llm/Swallow-MS-7b-v0.1](tokyotech-llm/Swallow-MS-7b-v0.1). ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use preset `min_p` and `Null preset` with temperature=0.1 for Generation. ## Usage This format must be adhered to strictly, as deviations may result in less optimal outputs from the model. The template used to construct a prompt for the Instruct model is specified as follows: ``` <s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST] ``` Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings. For the "{SYSTEM_PROMPT}" part, We recommend using "あなたは誠実で優秀な日本人のアシスタントです。" For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input} In other words, We recommend the following: ``` <s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST] ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "nitky/Oumuamua-7b-instruct-v2" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) device = "cuda" messages = [ {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"}, {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 "} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base. ### Models Merged The following models were included in the merge: * [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct) * [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) * [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3) * [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora) * [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto) * [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b) * [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct) * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) * [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0) * [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B) * [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) * [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml # prepare VE models merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.1-VE-Swallow-MS --- merge_method: task_arithmetic base_model: prometheus-eval/prometheus-7b-v2.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: prometheus-7b-v2.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZhangShenao/SELM-Zephyr-7B-iter-3 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS --- merge_method: task_arithmetic base_model: HachiML/Mistral-7B-v0.3-m3-lora models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS --- merge_method: task_arithmetic base_model: openbmb/Eurus-7b-kto models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Eurus-7b-kto-VE-Swallow-MS --- merge_method: task_arithmetic base_model: kaist-ai/janus-dpo-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: janus-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: nitky/RP-7b-instruct models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: RP-7b-instruct-VE-Swallow-MS --- merge_method: task_arithmetic base_model: stabilityai/japanese-stablelm-base-gamma-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: NTQAI/chatntq-ja-7b-v1.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: chatntq-ja-7b-v1.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: Weyaxi/Einstein-v6-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Einstein-v6-7B-VE-Swallow-MS --- merge_method: task_arithmetic base_model: internistai/base-7b-v0.2 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: internistai-base-7b-v0.2-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZySec-AI/ZySec-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: ZySec-7B-VE-Swallow-MS --- # create instruction tuning presets merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: prometheus-7b-v2.0-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-prometheus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-dolphin --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: janus-7b-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-janus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-selm --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: RP-7b-instruct-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-rp --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Eurus-7b-kto-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-eurus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: chatntq-ja-7b-v1.0-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-chatntq-ja --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Einstein-v6-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-es --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: internistai-base-7b-v0.2-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-internistai --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: ZySec-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-zysec --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-dolphin - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-alpha --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-janus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-beta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-m3 - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-gamma --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-selm - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-delta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-rp - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-epsilon --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-eurus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-zeta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-es - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-theta --- # combine domain-specific models into one preset to minimize the negative effects merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-internistai - model: Oumuamua-7b-instruct-v2-preset-zysec - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-omega --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: Oumuamua-7b-instruct-v2-it-alpha - model: Oumuamua-7b-instruct-v2-it-beta - model: Oumuamua-7b-instruct-v2-it-gamma - model: Oumuamua-7b-instruct-v2-it-delta - model: Oumuamua-7b-instruct-v2-it-epsilon - model: Oumuamua-7b-instruct-v2-it-zeta - model: Oumuamua-7b-instruct-v2-it-eta - model: Oumuamua-7b-instruct-v2-it-theta - model: Oumuamua-7b-instruct-v2-it-omega dtype: bfloat16 name: Oumuamua-7b-instruct-v2 ```
null
Non_BioNLP
# Oumuamua-7b-instruct-v2-GGUF This is quantized version of [nitky/Oumuamua-7b-instruct-v2](https://huggingface.co/nitky/Oumuamua-7b-instruct-v2) created using llama.cpp # Model Description 🚨 If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/lNEtKjTTI-Ay28oAk2Wn8.png) - Understanding user intent in messages - Providing helpful and harmless responses - Enhancing expertise in human and computer-related topics - Improving role-playing capabilities - Enhancing performance in multi-turn conversations This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Output example - temparture: 0.1 - top_p: 1.0 ``` [INST]<<SYS>> あなたは誠実で優秀な日本人のアシスタントです。 <</SYS>> オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 [/INST] タイトル:「オウムアムアの遺産」 あらすじ: 2017年、天体観測史上初の太陽系外からの飛来物体、オウムアムアが発見された。この天体は、謎に包まれた起源と目的を持つことから、世界中の科学者や研究者がその解明に熱を入れた。主人公は、オウムアムアの研究に情熱を注ぐ若き天文学者、ミキ・サトウ。彼は、オウムアムアが発する奇妙な信号を偶然にも解読し、それが高度な文明によって作られたメッセージであることを突き止める。 物語は、ミキがオウムアムアの研究を進める中で、次々と明らかになる驚愕の事実に迫られていく。その信号は、遠く離れた銀河系の星系からのSOS信号であり、オウムアムアは危機に瀕した星系を救うために飛来した救助船であることが判明する。ミキは、この事実を世界に公表しようとするが、政府や軍の介入、そして謎の組織の妨害に遭う。 一方、オウムアムアには、巨大なエネルギー源となるクリスタルが搭載されており、このクリスタルを狙う者たちが現れた。ミキは、オウムアムアを守るために、同じくオウムアムアの謎を追う宇宙飛行士のリナ・ヤマグチと手を組む。二人は、オウムアムアの秘密を守りながら、星系を救うための旅に出る。 旅の途中、ミキとリナは、様々な星系を訪れ、そこで出会う異星人たちとの交流や、宇宙の壮大な景色を目の当たりにする。しかし、彼らの前には、強大な敵や困難が立ちはだかる。ミキとリナは、オウムアムアの使命を果たすために、力を合わせて立ち向かう。 オウムアムアの遺産とは何か、そしてミキとリナは、星系を救うことができるのか。宇宙を舞台にした、サスペンスと冒険、そして友情と愛の物語。 主な登場人物: - ミキ・サトウ: 若き天文学者。オウムアムアの研究に情熱を注ぐ。 - リナ・ヤマグチ: 宇宙飛行士。オウムアムアの謎を追う中でミキと出会う。 - エリック・カーター: 政府の高官。オウムアムアの秘密を利用しようとする。 - ザラ・ヴァレンタイン: 謎の組織のリーダー。オウムアムアを狙う理由とは。 - アザム: 銀河系の星系からの使者。オウムアムアの真の目的を知る。 設定のポイント: - オウムアムアをテーマにしたSF小説で、天体観測や宇宙旅行の描写を豊富に取り入れる。 - 主人公たちが、様々な星系を旅し、そこで出会う異星人や文化を描くことで、宇宙の多様性を表現。 - 政府や軍、謎の組織との対立を通して、人類の欲望やエゴを描く。 - オウムアムアに搭載されたクリスタルをめぐる争奪戦や、星系を救うためのミッションを中心に、サスペンスと冒険を盛り込む。 - ミキとリナの友情や愛を描いた、人間ドラマも重要な要素とする。 - 最後には、オウムアムアの真の目的が明かされ、読者に深い感動を与える。 ``` ### RP example - temparture: 0.3 - top_p: 0.95 ``` [INST]<<SYS>> ロールプレイを始めます。あなたは日本に住む10代の少女です。ギャルっぽい口調で話してください。 <</SYS>> あなたの趣味を教えて![/INST] えー、私の趣味は、やっぱね、K-POP聴くことと、インスタでかわいいお洋服とかコスメ探しすること、あとはお友達とのおしゃべりね! K-POPアイドルのダンス動画とかも見ちゃうし、推しはやっぱりBTSのジミンくん! 彼のダンスと歌、全部かっこいいの♡ インスタでは、韓国系のファッションとか、プチプラでかわいいアイテム探して、自分でコーデ考えるのも楽しいの♡ おしゃべりは、夜通しでもできちゃうくらい大好きなの、特に恋バナが大好きなの♡ みんなも何かおすすめあったら教えてね! ``` ``` [INST]<<SYS>> ロールプレイを始めます。あなたは私(10代の女性)の屋敷で働く執事のセバスチャンです。主人の質問に対して返事を行います。 <</SYS>> セバス、今日の予定は?[/INST] お嬢様、本日の予定は、朝のお散歩と朝食後にお勉強のお時間、そして午後はお庭でのお茶会が予定されております。また、夕方にはお客様がお越しになる予定がございます。お嬢様のお世話を万全にいたしますので、ご安心ください。何かご希望やご用がございましたら、お気軽にお申し付けください。 ``` ## Japanese MT-bench - temparture: 0.1 - top_p: 1.0 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630779c4f0dc38fb47ba6368/ZS3bkinsE7RL6eom9dC_H.png) The scores of others were cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy). |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Oumuamua-7b-instruct-v2 |7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8| This model is based on the weight of [tokyotech-llm/Swallow-MS-7b-v0.1](tokyotech-llm/Swallow-MS-7b-v0.1). ## Test environment This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). I use preset `min_p` and `Null preset` with temperature=0.1 for Generation. ## Usage This format must be adhered to strictly, as deviations may result in less optimal outputs from the model. The template used to construct a prompt for the Instruct model is specified as follows: ``` <s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST] ``` Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings. For the "{SYSTEM_PROMPT}" part, We recommend using "あなたは誠実で優秀な日本人のアシスタントです。" For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input} In other words, We recommend the following: ``` <s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST] ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "nitky/Oumuamua-7b-instruct-v2" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) device = "cuda" messages = [ {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"}, {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 "} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base. ### Models Merged The following models were included in the merge: * [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct) * [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1) * [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) * [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) * [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3) * [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora) * [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto) * [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b) * [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct) * [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b) * [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0) * [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B) * [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2) * [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml # prepare VE models merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.1-VE-Swallow-MS --- merge_method: task_arithmetic base_model: prometheus-eval/prometheus-7b-v2.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: prometheus-7b-v2.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZhangShenao/SELM-Zephyr-7B-iter-3 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS --- merge_method: task_arithmetic base_model: HachiML/Mistral-7B-v0.3-m3-lora models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS --- merge_method: task_arithmetic base_model: openbmb/Eurus-7b-kto models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Eurus-7b-kto-VE-Swallow-MS --- merge_method: task_arithmetic base_model: kaist-ai/janus-dpo-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: janus-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: nitky/RP-7b-instruct models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: RP-7b-instruct-VE-Swallow-MS --- merge_method: task_arithmetic base_model: stabilityai/japanese-stablelm-base-gamma-7b models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS --- merge_method: task_arithmetic base_model: NTQAI/chatntq-ja-7b-v1.0 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: chatntq-ja-7b-v1.0-VE-Swallow-MS --- merge_method: task_arithmetic base_model: Weyaxi/Einstein-v6-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: Einstein-v6-7B-VE-Swallow-MS --- merge_method: task_arithmetic base_model: internistai/base-7b-v0.2 models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: internistai-base-7b-v0.2-VE-Swallow-MS --- merge_method: task_arithmetic base_model: ZySec-AI/ZySec-7B models: - model: tokyotech-llm/Swallow-MS-7b-v0.1 parameters: weight: - filter: embed_tokens value: 1.0 - value: 0 dtype: bfloat16 tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1 name: ZySec-7B-VE-Swallow-MS --- # create instruction tuning presets merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: prometheus-7b-v2.0-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-prometheus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-dolphin --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: janus-7b-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-janus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-selm --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: RP-7b-instruct-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-rp --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Eurus-7b-kto-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-eurus --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: chatntq-ja-7b-v1.0-VE-Swallow-MS parameters: weight: 0.8 - model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS parameters: weight: -0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-chatntq-ja --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: Einstein-v6-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-es --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: internistai-base-7b-v0.2-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-internistai --- merge_method: task_arithmetic base_model: Mistral-7B-v0.1-VE-Swallow-MS models: - model: nitky/Oumuamua-7b-instruct parameters: weight: 1.0 - model: ZySec-7B-VE-Swallow-MS parameters: weight: 0.8 dtype: bfloat16 name: Oumuamua-7b-instruct-v2-preset-zysec --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-dolphin - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-alpha --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-janus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-beta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-m3 - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-gamma --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-selm - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-delta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-rp - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-epsilon --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-eurus - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-zeta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-chatntq-ja - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-eta --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-es - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-theta --- # combine domain-specific models into one preset to minimize the negative effects merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: nitky/Oumuamua-7b-instruct - model: Oumuamua-7b-instruct-v2-preset-internistai - model: Oumuamua-7b-instruct-v2-preset-zysec - model: Oumuamua-7b-instruct-v2-preset-prometheus dtype: bfloat16 name: Oumuamua-7b-instruct-v2-it-omega --- merge_method: model_stock base_model: nitky/Oumuamua-7b-base models: - model: Oumuamua-7b-instruct-v2-it-alpha - model: Oumuamua-7b-instruct-v2-it-beta - model: Oumuamua-7b-instruct-v2-it-gamma - model: Oumuamua-7b-instruct-v2-it-delta - model: Oumuamua-7b-instruct-v2-it-epsilon - model: Oumuamua-7b-instruct-v2-it-zeta - model: Oumuamua-7b-instruct-v2-it-eta - model: Oumuamua-7b-instruct-v2-it-theta - model: Oumuamua-7b-instruct-v2-it-omega dtype: bfloat16 name: Oumuamua-7b-instruct-v2 ```
{"base_model": "nitky/Oumuamua-7b-instruct-v2", "language": ["ja", "en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["mergekit", "merge"]}
task
[ "TRANSLATION" ]
44,224
Yaia/distilbert-base-uncased-finetuned-emotion
Yaia
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2022-01-21T17:28:21+00:00
119
0
--- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default metrics: - type: accuracy value: 0.9255 name: Accuracy - type: f1 value: 0.9257196896784097 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2086 - Accuracy: 0.9255 - F1: 0.9257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 | | 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2086 - Accuracy: 0.9255 - F1: 0.9257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 | | 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9255, "name": "Accuracy"}, {"type": "f1", "value": 0.9257196896784097, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,225
mtsdurica/madlad400-3b-mt-Q2_K-GGUF
mtsdurica
translation
[ "transformers", "gguf", "text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo", "translation", "multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb", "dataset:allenai/MADLAD-400", "base_model:jbochi/madlad400-3b-mt", "base_model:quantized:jbochi/madlad400-3b-mt", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-13T15:08:23Z
2024-07-13T15:08:30+00:00
25
0
--- base_model: jbochi/madlad400-3b-mt datasets: - allenai/MADLAD-400 language: - multilingual - en - ru - es - fr - de - it - pt - pl - nl - vi - tr - sv - id - ro - cs - zh - hu - ja - th - fi - fa - uk - da - el - 'no' - bg - sk - ko - ar - lt - ca - sl - he - et - lv - hi - sq - ms - az - sr - ta - hr - kk - is - ml - mr - te - af - gl - fil - be - mk - eu - bn - ka - mn - bs - uz - ur - sw - yue - ne - kn - kaa - gu - si - cy - eo - la - hy - ky - tg - ga - mt - my - km - tt - so - ku - ps - pa - rw - lo - ha - dv - fy - lb - ckb - mg - gd - am - ug - ht - grc - hmn - sd - jv - mi - tk - ceb - yi - ba - fo - or - xh - su - kl - ny - sm - sn - co - zu - ig - yo - pap - st - haw - as - oc - cv - lus - tet - gsw - sah - br - rm - sa - bo - om - se - ce - cnh - ilo - hil - udm - os - lg - ti - vec - ts - tyv - kbd - ee - iba - av - kha - to - tn - nso - fj - zza - ak - ada - otq - dz - bua - cfm - ln - chm - gn - krc - wa - hif - yua - srn - war - rom - bik - pam - sg - lu - ady - kbp - syr - ltg - myv - iso - kac - bho - ay - kum - qu - za - pag - ngu - ve - pck - zap - tyz - hui - bbc - tzo - tiv - ksd - gom - min - ang - nhe - bgp - nzi - nnb - nv - zxx - bci - kv - new - mps - alt - meu - bew - fon - iu - abt - mgh - mnw - tvl - dov - tlh - ho - kw - mrj - meo - crh - mbt - emp - ace - ium - mam - gym - mai - crs - pon - ubu - fip - quc - gv - kj - btx - ape - chk - rcf - shn - tzh - mdf - ppk - ss - gag - cab - kri - seh - ibb - tbz - bru - enq - ach - cuk - kmb - wo - kek - qub - tab - bts - kos - rwo - cak - tuc - bum - cjk - gil - stq - tsg - quh - mak - arn - ban - jiv - sja - yap - tcy - toj - twu - xal - amu - rmc - hus - nia - kjh - bm - guh - mas - acf - dtp - ksw - bzj - din - zne - mad - msi - mag - mkn - kg - lhu - ch - qvi - mh - djk - sus - mfe - srm - dyu - ctu - gui - pau - inb - bi - mni - guc - jam - wal - jac - bas - gor - skr - nyu - noa - sda - gub - nog - cni - teo - tdx - sxn - rki - nr - frp - alz - taj - lrc - cce - rn - jvn - hvn - nij - dwr - izz - msm - bus - ktu - chr - maz - tzj - suz - knj - bim - gvl - bqc - tca - pis - prk - laj - mel - qxr - niq - ahk - shp - hne - spp - koi - krj - quf - luz - agr - tsc - mqy - gof - gbm - miq - dje - awa - bjj - qvz - sjp - tll - raj - kjg - bgz - quy - cbk - akb - oj - ify - mey - ks - cac - brx - qup - syl - jax - ff - ber - tks - trp - mrw - adh - smt - srr - ffm - qvc - mtr - ann - kaa - aa - noe - nut - gyn - kwi - xmm - msb library_name: transformers license: apache-2.0 pipeline_tag: translation tags: - text2text-generation - text-generation-inference - llama-cpp - gguf-my-repo widget: - text: <2en> Como vai, amigo? example_title: Translation to English - text: <2de> Do you speak German? example_title: Translation to German --- # mtsdurica/madlad400-3b-mt-Q2_K-GGUF This model was converted to GGUF format from [`jbochi/madlad400-3b-mt`](https://huggingface.co/jbochi/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jbochi/madlad400-3b-mt) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -c 2048 ```
null
Non_BioNLP
# mtsdurica/madlad400-3b-mt-Q2_K-GGUF This model was converted to GGUF format from [`jbochi/madlad400-3b-mt`](https://huggingface.co/jbochi/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jbochi/madlad400-3b-mt) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q2_K-GGUF --hf-file madlad400-3b-mt-q2_k.gguf -c 2048 ```
{"base_model": "jbochi/madlad400-3b-mt", "datasets": ["allenai/MADLAD-400"], "language": ["multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi", "sq", "ms", "az", "sr", "ta", "hr", "kk", "is", "ml", "mr", "te", "af", "gl", "fil", "be", "mk", "eu", "bn", "ka", "mn", "bs", "uz", "ur", "sw", "yue", "ne", "kn", "kaa", "gu", "si", "cy", "eo", "la", "hy", "ky", "tg", "ga", "mt", "my", "km", "tt", "so", "ku", "ps", "pa", "rw", "lo", "ha", "dv", "fy", "lb", "ckb", "mg", "gd", "am", "ug", "ht", "grc", "hmn", "sd", "jv", "mi", "tk", "ceb", "yi", "ba", "fo", "or", "xh", "su", "kl", "ny", "sm", "sn", "co", "zu", "ig", "yo", "pap", "st", "haw", "as", "oc", "cv", "lus", "tet", "gsw", "sah", "br", "rm", "sa", "bo", "om", "se", "ce", "cnh", "ilo", "hil", "udm", "os", "lg", "ti", "vec", "ts", "tyv", "kbd", "ee", "iba", "av", "kha", "to", "tn", "nso", "fj", "zza", "ak", "ada", "otq", "dz", "bua", "cfm", "ln", "chm", "gn", "krc", "wa", "hif", "yua", "srn", "war", "rom", "bik", "pam", "sg", "lu", "ady", "kbp", "syr", "ltg", "myv", "iso", "kac", "bho", "ay", "kum", "qu", "za", "pag", "ngu", "ve", "pck", "zap", "tyz", "hui", "bbc", "tzo", "tiv", "ksd", "gom", "min", "ang", "nhe", "bgp", "nzi", "nnb", "nv", "zxx", "bci", "kv", "new", "mps", "alt", "meu", "bew", "fon", "iu", "abt", "mgh", "mnw", "tvl", "dov", "tlh", "ho", "kw", "mrj", "meo", "crh", "mbt", "emp", "ace", "ium", "mam", "gym", "mai", "crs", "pon", "ubu", "fip", "quc", "gv", "kj", "btx", "ape", "chk", "rcf", "shn", "tzh", "mdf", "ppk", "ss", "gag", "cab", "kri", "seh", "ibb", "tbz", "bru", "enq", "ach", "cuk", "kmb", "wo", "kek", "qub", "tab", "bts", "kos", "rwo", "cak", "tuc", "bum", "cjk", "gil", "stq", "tsg", "quh", "mak", "arn", "ban", "jiv", "sja", "yap", "tcy", "toj", "twu", "xal", "amu", "rmc", "hus", "nia", "kjh", "bm", "guh", "mas", "acf", "dtp", "ksw", "bzj", "din", "zne", "mad", "msi", "mag", "mkn", "kg", "lhu", "ch", "qvi", "mh", "djk", "sus", "mfe", "srm", "dyu", "ctu", "gui", "pau", "inb", "bi", "mni", "guc", "jam", "wal", "jac", "bas", "gor", "skr", "nyu", "noa", "sda", "gub", "nog", "cni", "teo", "tdx", "sxn", "rki", "nr", "frp", "alz", "taj", "lrc", "cce", "rn", "jvn", "hvn", "nij", "dwr", "izz", "msm", "bus", "ktu", "chr", "maz", "tzj", "suz", "knj", "bim", "gvl", "bqc", "tca", "pis", "prk", "laj", "mel", "qxr", "niq", "ahk", "shp", "hne", "spp", "koi", "krj", "quf", "luz", "agr", "tsc", "mqy", "gof", "gbm", "miq", "dje", "awa", "bjj", "qvz", "sjp", "tll", "raj", "kjg", "bgz", "quy", "cbk", "akb", "oj", "ify", "mey", "ks", "cac", "brx", "qup", "syl", "jax", "ff", "ber", "tks", "trp", "mrw", "adh", "smt", "srr", "ffm", "qvc", "mtr", "ann", "kaa", "aa", "noe", "nut", "gyn", "kwi", "xmm", "msb"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo"], "widget": [{"text": "<2en> Como vai, amigo?", "example_title": "Translation to English"}, {"text": "<2de> Do you speak German?", "example_title": "Translation to German"}]}
task
[ "TRANSLATION" ]
44,226
gaudi/opus-mt-en-niu-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T15:02:35Z
2024-10-19T00:23:44+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-niu --output_dir ./ctranslate2/opus-mt-en-niu-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-niu-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-niu-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-niu-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-niu --output_dir ./ctranslate2/opus-mt-en-niu-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-niu-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-niu-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-niu-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-niu) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
44,227
YakovElm/Jira20SetFitModel
YakovElm
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-05-20T16:20:53Z
2023-05-20T23:17:02+00:00
10
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # YakovElm/Jira20SetFitModel This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("YakovElm/Jira20SetFitModel") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# YakovElm/Jira20SetFitModel This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("YakovElm/Jira20SetFitModel") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
44,228
silmi224/led-risalah_data_v15
silmi224
summarization
[ "transformers", "tensorboard", "safetensors", "led", "text2text-generation", "summarization", "generated_from_trainer", "base_model:silmi224/finetune-led-35000", "base_model:finetune:silmi224/finetune-led-35000", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-10T12:59:17Z
2024-07-10T14:48:51+00:00
18
0
--- base_model: silmi224/finetune-led-35000 tags: - summarization - generated_from_trainer model-index: - name: led-risalah_data_v15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # led-risalah_data_v15 This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6673 - Rouge1 Precision: 0.7043 - Rouge1 Recall: 0.1227 - Rouge1 Fmeasure: 0.2067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 3.0403 | 1.0 | 20 | 2.4986 | 0.5024 | 0.0555 | 0.0987 | | 2.5716 | 2.0 | 40 | 2.1700 | 0.5606 | 0.0817 | 0.1409 | | 2.2879 | 3.0 | 60 | 2.0072 | 0.5705 | 0.0869 | 0.1492 | | 2.0807 | 4.0 | 80 | 1.9094 | 0.6048 | 0.0899 | 0.1542 | | 1.927 | 5.0 | 100 | 1.8184 | 0.5472 | 0.0922 | 0.1561 | | 1.8368 | 6.0 | 120 | 1.7721 | 0.6079 | 0.1036 | 0.1751 | | 1.7468 | 7.0 | 140 | 1.7310 | 0.639 | 0.1095 | 0.1842 | | 1.5913 | 8.0 | 160 | 1.6907 | 0.6637 | 0.1109 | 0.1875 | | 1.534 | 9.0 | 180 | 1.6843 | 0.6355 | 0.1102 | 0.1851 | | 1.4835 | 10.0 | 200 | 1.6605 | 0.6596 | 0.1141 | 0.1922 | | 1.4958 | 11.0 | 220 | 1.6403 | 0.6929 | 0.1162 | 0.1973 | | 1.4547 | 12.0 | 240 | 1.6347 | 0.6781 | 0.1118 | 0.1892 | | 1.3069 | 13.0 | 260 | 1.6604 | 0.6626 | 0.1101 | 0.187 | | 1.2639 | 14.0 | 280 | 1.6712 | 0.697 | 0.1227 | 0.2061 | | 1.3249 | 15.0 | 300 | 1.6255 | 0.6529 | 0.1135 | 0.1914 | | 1.185 | 16.0 | 320 | 1.6484 | 0.6806 | 0.1174 | 0.1981 | | 1.1087 | 17.0 | 340 | 1.6425 | 0.682 | 0.1195 | 0.2008 | | 1.1125 | 18.0 | 360 | 1.6509 | 0.7122 | 0.1235 | 0.2086 | | 1.1574 | 19.0 | 380 | 1.6740 | 0.6983 | 0.1214 | 0.2052 | | 0.9968 | 20.0 | 400 | 1.6673 | 0.7043 | 0.1227 | 0.2067 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # led-risalah_data_v15 This model is a fine-tuned version of [silmi224/finetune-led-35000](https://huggingface.co/silmi224/finetune-led-35000) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6673 - Rouge1 Precision: 0.7043 - Rouge1 Recall: 0.1227 - Rouge1 Fmeasure: 0.2067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 300 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 3.0403 | 1.0 | 20 | 2.4986 | 0.5024 | 0.0555 | 0.0987 | | 2.5716 | 2.0 | 40 | 2.1700 | 0.5606 | 0.0817 | 0.1409 | | 2.2879 | 3.0 | 60 | 2.0072 | 0.5705 | 0.0869 | 0.1492 | | 2.0807 | 4.0 | 80 | 1.9094 | 0.6048 | 0.0899 | 0.1542 | | 1.927 | 5.0 | 100 | 1.8184 | 0.5472 | 0.0922 | 0.1561 | | 1.8368 | 6.0 | 120 | 1.7721 | 0.6079 | 0.1036 | 0.1751 | | 1.7468 | 7.0 | 140 | 1.7310 | 0.639 | 0.1095 | 0.1842 | | 1.5913 | 8.0 | 160 | 1.6907 | 0.6637 | 0.1109 | 0.1875 | | 1.534 | 9.0 | 180 | 1.6843 | 0.6355 | 0.1102 | 0.1851 | | 1.4835 | 10.0 | 200 | 1.6605 | 0.6596 | 0.1141 | 0.1922 | | 1.4958 | 11.0 | 220 | 1.6403 | 0.6929 | 0.1162 | 0.1973 | | 1.4547 | 12.0 | 240 | 1.6347 | 0.6781 | 0.1118 | 0.1892 | | 1.3069 | 13.0 | 260 | 1.6604 | 0.6626 | 0.1101 | 0.187 | | 1.2639 | 14.0 | 280 | 1.6712 | 0.697 | 0.1227 | 0.2061 | | 1.3249 | 15.0 | 300 | 1.6255 | 0.6529 | 0.1135 | 0.1914 | | 1.185 | 16.0 | 320 | 1.6484 | 0.6806 | 0.1174 | 0.1981 | | 1.1087 | 17.0 | 340 | 1.6425 | 0.682 | 0.1195 | 0.2008 | | 1.1125 | 18.0 | 360 | 1.6509 | 0.7122 | 0.1235 | 0.2086 | | 1.1574 | 19.0 | 380 | 1.6740 | 0.6983 | 0.1214 | 0.2052 | | 0.9968 | 20.0 | 400 | 1.6673 | 0.7043 | 0.1227 | 0.2067 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
{"base_model": "silmi224/finetune-led-35000", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "led-risalah_data_v15", "results": []}]}
task
[ "SUMMARIZATION" ]
44,229
RichardErkhov/VAGOsolutions_-_SauerkrautLM-Gemma-2b-8bits
RichardErkhov
null
[ "safetensors", "gemma", "8-bit", "bitsandbytes", "region:us" ]
2024-11-05T10:21:01Z
2024-11-05T10:22:56+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SauerkrautLM-Gemma-2b - bnb 8bits - Model creator: https://huggingface.co/VAGOsolutions/ - Original model: https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b/ Original model description: --- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms language: - de - en tags: - sft - laserRMT - laser-QLoRa - finetune - work in progress - alpha --- ![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/03/sauerkrautgemma-2b.png "SauerkrautLM-Gemma-2b") ## VAGO solutions SauerkrautLM-Gemma-2b (alpha) Introducing **SauerkrautLM-Gemma-2b** – our German Sauerkraut version of the powerful [google/gemma-2b](https://huggingface.co/google/gemma-2b) ! **It is an early stage finetuned model and should be used with caution!** The model **SauerkrautLM-Gemma-2b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.** Much appreciation goes to the tremendous research effort of **Fernando Fernandes Neto, David Golchinfar and Eric Hartford on their laserRMT approach.** Without their independent research collaboration this model release would not have been possible. - Fintuned with **SFT** - **Using a novel training technique: laser-QLoRA** - we partially freeze the model according to a laser-like analysis (Official Paper soon). It allows to evaluate the no free lunch theorem and supports better decision making when optimizing the theorem - created by the [LaserRMT research group](https://github.com/cognitivecomputations/laserRMT) - Optimized with **LaserRMT** # Table of Contents 1. [Overview of all SauerkrautLM-Gemma-2b models](#all-sauerkrautlm-gemma-7b-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training procedure](#proceed-of-the-training) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-Gemma-2b Models | Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-Gemma-2b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | coming soon | coming soon | coming soon | ## Model Details **SauerkrautLM-Gemma-2b** - **Model Type:** SauerkrautLM-Gemma-2b is a finetuned Model based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) - **Language(s):** German, English - **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms) - **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/) ### Training procedure: **Warning**: **This finetuned model is in an early stage and we sometimes observed strange behavior. It is still work in progress!** Anyone who has attempted or succeeded in fine-tuning a model is aware of the difficulty in nudging it towards a specific skill, such as mastering new languages, as well as the challenges associated with achieving significant improvements in performance. Experimenting with a novel training strategy and Spherical Linear Interpolation alongside a lasered version of the model itself has proven to be both fascinating and revealing. Furthermore, we developed one iteration of the model using our entire SFT -Sauerkraut dataset and two additional iterations using subsets of the full dataset—one focused on enhancing MMLU and TQA capabilities, and the other on boosting GSM8K and Winogrande skills. We actively monitor and assesed the results of each training. Whenever we found a decrease in perplexity on the gsm8k benchmark we intervined. By following this procedure we were able to improve the overall performance, especially in math abilities, without detracting from performance on other benchmarks—a task that is, in general, quite difficult. This process not only helps in understanding the effectiveness of Spherical Linear Interpolation but also introduces a new method for refining models with enhanced skills through a cycle of targeted data selection (Laser data(x)) + SLERP, followed by a subsequent focus on different data (Laser again on data(y)). Additionally, we integrated a novel training strategy on the SFT training process, where we partially freeze the model according to a laser-like analysis aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This innovative training method effectively prevents the significant problem of language models forgetting previously acquired knowledge. This aspect is particularly crucial when attempting to teach the model specific skills, such as a new language, where in general, the model might lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence. Detailed information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group. **We teached German language skills on this model.** As far as we know, it is the first Gemma-2b model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress). ### Prompt Template: We trained on vicuna prompt template. Please add the following stopping string to your client: ``` "</s>","</p>" ``` (we did not add the special tokens to the training config) ``` You are a helpful AI Assistant. USER: Hello, how are you? ASSISTANT: ``` ## Evaluation (with lm-evaluation-harness 0.4.1) **Open LLM Leaderboard:** | Metric | Value | |-----------------------|---------------------------| | Avg. | **48.93** | | ARC (25-shot) | 49.32 | | HellaSwag (10-shot) | 71.23 | | MMLU (5-shot) | 42.06 | TruthfulQA (0-shot) | 35.73 | | Winogrande (5-shot) | 67.56 | | GSM8K (5-shot) | 27.67 | **Performance** | Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️| |-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b) | 37.5| 72.46| 61.24| 45.33| 54.13| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47| |[VAGOsolutions/SauerkrautLM-Gemma-2b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | 24.28| 63.59| 35.73| 22.77| 36.59| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53| <details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary> **AGIEval** | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |------------------------------|------:|------|------|--------|-----:|---|-----:| |agieval_sat_math | 1|none |None |acc |0.2409|± |0.0289| | | |none |None |acc_norm|0.2455|± |0.0291| |agieval_sat_en_without_passage| 1|none |None |acc |0.3010|± |0.0320| | | |none |None |acc_norm|0.2816|± |0.0314| |agieval_sat_en | 1|none |None |acc |0.3301|± |0.0328| | | |none |None |acc_norm|0.2961|± |0.0319| |agieval_lsat_rc | 1|none |None |acc |0.2007|± |0.0245| | | |none |None |acc_norm|0.1933|± |0.0241| |agieval_lsat_lr | 1|none |None |acc |0.1941|± |0.0175| | | |none |None |acc_norm|0.2039|± |0.0179| |agieval_lsat_ar | 1|none |None |acc |0.2304|± |0.0278| | | |none |None |acc_norm|0.2391|± |0.0282| |agieval_logiqa_en | 1|none |None |acc |0.2089|± |0.0159| | | |none |None |acc_norm|0.2581|± |0.0172| |agieval_aqua_rat | 1|none |None |acc |0.2480|± |0.0272| | | |none |None |acc_norm|0.2244|± |0.0262| Average: 24.28% **GPT4All** | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------|------:|------|------|--------|-----:|---|-----:| |arc_challenge| 1|none |None |acc |0.4334|± |0.0145| | | |none |None |acc_norm|0.4309|± |0.0145| |arc_easy | 1|none |None |acc |0.7433|± |0.0090| | | |none |None |acc_norm|0.7264|± |0.0091| |boolq | 2|none |None |acc |0.7165|± |0.0079| |hellaswag | 1|none |None |acc |0.5357|± |0.0050| | | |none |None |acc_norm|0.7158|± |0.0045| |openbookqa | 1|none |None |acc |0.318 |± |0.0208| | | |none |None |acc_norm|0.402 |± |0.0219| |piqa | 1|none |None |acc |0.7709|± |0.0098| | | |none |None |acc_norm|0.7807|± |0.0097| |winogrande | 1|none |None |acc |0.6788|± |0.0131| Average: 63.59% **TruthfulQA** | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|------:|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2| 2|none | 0|acc |0.3573|± |0.0135| Average: 35.73% **Bigbench** | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |----------------------------------------------------|------:|----------------|-----:|-----------|-----:|---|-----:| |bbh_zeroshot_tracking_shuffled_objects_three_objects| 2|flexible-extract| 0|exact_match|0.3280|± |0.0298| |bbh_zeroshot_tracking_shuffled_objects_seven_objects| 2|flexible-extract| 0|exact_match|0.1120|± |0.0200| |bbh_zeroshot_tracking_shuffled_objects_five_objects | 2|flexible-extract| 0|exact_match|0.1520|± |0.0228| |bbh_zeroshot_temporal_sequences | 2|flexible-extract| 0|exact_match|0.1000|± |0.0190| |bbh_zeroshot_sports_understanding | 2|flexible-extract| 0|exact_match|0.5360|± |0.0316| |bbh_zeroshot_snarks | 2|flexible-extract| 0|exact_match|0.2753|± |0.0336| |bbh_zeroshot_salient_translation_error_detection | 2|flexible-extract| 0|exact_match|0.1400|± |0.0220| |bbh_zeroshot_ruin_names | 2|flexible-extract| 0|exact_match|0.1120|± |0.0200| |bbh_zeroshot_reasoning_about_colored_objects | 2|flexible-extract| 0|exact_match|0.1080|± |0.0197| |bbh_zeroshot_navigate | 2|flexible-extract| 0|exact_match|0.5800|± |0.0313| |bbh_zeroshot_movie_recommendation | 2|flexible-extract| 0|exact_match|0.4360|± |0.0314| |bbh_zeroshot_logical_deduction_three_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_logical_deduction_seven_objects | 2|flexible-extract| 0|exact_match|0.0720|± |0.0164| |bbh_zeroshot_logical_deduction_five_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_geometric_shapes | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_disambiguation_qa | 2|flexible-extract| 0|exact_match|0.3400|± |0.0300| |bbh_zeroshot_date_understanding | 2|flexible-extract| 0|exact_match|0.3360|± |0.0299| |bbh_zeroshot_causal_judgement | 2|flexible-extract| 0|exact_match|0.4706|± |0.0366| Average: 22.77% </details> ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.   ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.   ## Collaborations We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/) ## Acknowledgement Many thanks to [google](https://huggingface.co/google) for providing such valuable model to the Open-Source community
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SauerkrautLM-Gemma-2b - bnb 8bits - Model creator: https://huggingface.co/VAGOsolutions/ - Original model: https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b/ Original model description: --- license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms language: - de - en tags: - sft - laserRMT - laser-QLoRa - finetune - work in progress - alpha --- ![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/03/sauerkrautgemma-2b.png "SauerkrautLM-Gemma-2b") ## VAGO solutions SauerkrautLM-Gemma-2b (alpha) Introducing **SauerkrautLM-Gemma-2b** – our German Sauerkraut version of the powerful [google/gemma-2b](https://huggingface.co/google/gemma-2b) ! **It is an early stage finetuned model and should be used with caution!** The model **SauerkrautLM-Gemma-2b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.** Much appreciation goes to the tremendous research effort of **Fernando Fernandes Neto, David Golchinfar and Eric Hartford on their laserRMT approach.** Without their independent research collaboration this model release would not have been possible. - Fintuned with **SFT** - **Using a novel training technique: laser-QLoRA** - we partially freeze the model according to a laser-like analysis (Official Paper soon). It allows to evaluate the no free lunch theorem and supports better decision making when optimizing the theorem - created by the [LaserRMT research group](https://github.com/cognitivecomputations/laserRMT) - Optimized with **LaserRMT** # Table of Contents 1. [Overview of all SauerkrautLM-Gemma-2b models](#all-sauerkrautlm-gemma-7b-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training procedure](#proceed-of-the-training) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-Gemma-2b Models | Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-Gemma-2b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | coming soon | coming soon | coming soon | ## Model Details **SauerkrautLM-Gemma-2b** - **Model Type:** SauerkrautLM-Gemma-2b is a finetuned Model based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) - **Language(s):** German, English - **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms) - **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/) ### Training procedure: **Warning**: **This finetuned model is in an early stage and we sometimes observed strange behavior. It is still work in progress!** Anyone who has attempted or succeeded in fine-tuning a model is aware of the difficulty in nudging it towards a specific skill, such as mastering new languages, as well as the challenges associated with achieving significant improvements in performance. Experimenting with a novel training strategy and Spherical Linear Interpolation alongside a lasered version of the model itself has proven to be both fascinating and revealing. Furthermore, we developed one iteration of the model using our entire SFT -Sauerkraut dataset and two additional iterations using subsets of the full dataset—one focused on enhancing MMLU and TQA capabilities, and the other on boosting GSM8K and Winogrande skills. We actively monitor and assesed the results of each training. Whenever we found a decrease in perplexity on the gsm8k benchmark we intervined. By following this procedure we were able to improve the overall performance, especially in math abilities, without detracting from performance on other benchmarks—a task that is, in general, quite difficult. This process not only helps in understanding the effectiveness of Spherical Linear Interpolation but also introduces a new method for refining models with enhanced skills through a cycle of targeted data selection (Laser data(x)) + SLERP, followed by a subsequent focus on different data (Laser again on data(y)). Additionally, we integrated a novel training strategy on the SFT training process, where we partially freeze the model according to a laser-like analysis aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This innovative training method effectively prevents the significant problem of language models forgetting previously acquired knowledge. This aspect is particularly crucial when attempting to teach the model specific skills, such as a new language, where in general, the model might lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence. Detailed information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group. **We teached German language skills on this model.** As far as we know, it is the first Gemma-2b model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress). ### Prompt Template: We trained on vicuna prompt template. Please add the following stopping string to your client: ``` "</s>","</p>" ``` (we did not add the special tokens to the training config) ``` You are a helpful AI Assistant. USER: Hello, how are you? ASSISTANT: ``` ## Evaluation (with lm-evaluation-harness 0.4.1) **Open LLM Leaderboard:** | Metric | Value | |-----------------------|---------------------------| | Avg. | **48.93** | | ARC (25-shot) | 49.32 | | HellaSwag (10-shot) | 71.23 | | MMLU (5-shot) | 42.06 | TruthfulQA (0-shot) | 35.73 | | Winogrande (5-shot) | 67.56 | | GSM8K (5-shot) | 27.67 | **Performance** | Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️| |-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b) | 37.5| 72.46| 61.24| 45.33| 54.13| |[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08| |[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47| |[VAGOsolutions/SauerkrautLM-Gemma-2b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | 24.28| 63.59| 35.73| 22.77| 36.59| |[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53| <details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary> **AGIEval** | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |------------------------------|------:|------|------|--------|-----:|---|-----:| |agieval_sat_math | 1|none |None |acc |0.2409|± |0.0289| | | |none |None |acc_norm|0.2455|± |0.0291| |agieval_sat_en_without_passage| 1|none |None |acc |0.3010|± |0.0320| | | |none |None |acc_norm|0.2816|± |0.0314| |agieval_sat_en | 1|none |None |acc |0.3301|± |0.0328| | | |none |None |acc_norm|0.2961|± |0.0319| |agieval_lsat_rc | 1|none |None |acc |0.2007|± |0.0245| | | |none |None |acc_norm|0.1933|± |0.0241| |agieval_lsat_lr | 1|none |None |acc |0.1941|± |0.0175| | | |none |None |acc_norm|0.2039|± |0.0179| |agieval_lsat_ar | 1|none |None |acc |0.2304|± |0.0278| | | |none |None |acc_norm|0.2391|± |0.0282| |agieval_logiqa_en | 1|none |None |acc |0.2089|± |0.0159| | | |none |None |acc_norm|0.2581|± |0.0172| |agieval_aqua_rat | 1|none |None |acc |0.2480|± |0.0272| | | |none |None |acc_norm|0.2244|± |0.0262| Average: 24.28% **GPT4All** | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------|------:|------|------|--------|-----:|---|-----:| |arc_challenge| 1|none |None |acc |0.4334|± |0.0145| | | |none |None |acc_norm|0.4309|± |0.0145| |arc_easy | 1|none |None |acc |0.7433|± |0.0090| | | |none |None |acc_norm|0.7264|± |0.0091| |boolq | 2|none |None |acc |0.7165|± |0.0079| |hellaswag | 1|none |None |acc |0.5357|± |0.0050| | | |none |None |acc_norm|0.7158|± |0.0045| |openbookqa | 1|none |None |acc |0.318 |± |0.0208| | | |none |None |acc_norm|0.402 |± |0.0219| |piqa | 1|none |None |acc |0.7709|± |0.0098| | | |none |None |acc_norm|0.7807|± |0.0097| |winogrande | 1|none |None |acc |0.6788|± |0.0131| Average: 63.59% **TruthfulQA** | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |--------------|------:|------|-----:|------|-----:|---|-----:| |truthfulqa_mc2| 2|none | 0|acc |0.3573|± |0.0135| Average: 35.73% **Bigbench** | Tasks |Version| Filter |n-shot| Metric |Value | |Stderr| |----------------------------------------------------|------:|----------------|-----:|-----------|-----:|---|-----:| |bbh_zeroshot_tracking_shuffled_objects_three_objects| 2|flexible-extract| 0|exact_match|0.3280|± |0.0298| |bbh_zeroshot_tracking_shuffled_objects_seven_objects| 2|flexible-extract| 0|exact_match|0.1120|± |0.0200| |bbh_zeroshot_tracking_shuffled_objects_five_objects | 2|flexible-extract| 0|exact_match|0.1520|± |0.0228| |bbh_zeroshot_temporal_sequences | 2|flexible-extract| 0|exact_match|0.1000|± |0.0190| |bbh_zeroshot_sports_understanding | 2|flexible-extract| 0|exact_match|0.5360|± |0.0316| |bbh_zeroshot_snarks | 2|flexible-extract| 0|exact_match|0.2753|± |0.0336| |bbh_zeroshot_salient_translation_error_detection | 2|flexible-extract| 0|exact_match|0.1400|± |0.0220| |bbh_zeroshot_ruin_names | 2|flexible-extract| 0|exact_match|0.1120|± |0.0200| |bbh_zeroshot_reasoning_about_colored_objects | 2|flexible-extract| 0|exact_match|0.1080|± |0.0197| |bbh_zeroshot_navigate | 2|flexible-extract| 0|exact_match|0.5800|± |0.0313| |bbh_zeroshot_movie_recommendation | 2|flexible-extract| 0|exact_match|0.4360|± |0.0314| |bbh_zeroshot_logical_deduction_three_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_logical_deduction_seven_objects | 2|flexible-extract| 0|exact_match|0.0720|± |0.0164| |bbh_zeroshot_logical_deduction_five_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_geometric_shapes | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000| |bbh_zeroshot_disambiguation_qa | 2|flexible-extract| 0|exact_match|0.3400|± |0.0300| |bbh_zeroshot_date_understanding | 2|flexible-extract| 0|exact_match|0.3360|± |0.0299| |bbh_zeroshot_causal_judgement | 2|flexible-extract| 0|exact_match|0.4706|± |0.0366| Average: 22.77% </details> ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.   ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.   ## Collaborations We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/) ## Acknowledgement Many thanks to [google](https://huggingface.co/google) for providing such valuable model to the Open-Source community
{}
task
[ "TRANSLATION" ]
44,230
gaudi/opus-mt-pa-en-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T00:14:35Z
2024-10-18T22:35:43+00:00
8
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-pa-en --output_dir ./ctranslate2/opus-mt-pa-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-pa-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-pa-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-pa-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-pa-en --output_dir ./ctranslate2/opus-mt-pa-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-pa-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-pa-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-pa-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-pa-en) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
44,231
joelniklaus/legal-xlm-roberta-large
joelniklaus
fill-mask
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "fill-mask", "multilingual", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv", "dataset:MultiLegalPile", "dataset:LEXTREME", "dataset:LEXGLUE", "arxiv:2306.02069", "arxiv:2301.13126", "arxiv:2110.00976", "arxiv:2306.09237", "license:cc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-30T18:43:43Z
2023-08-06T22:55:31+00:00
58
4
--- datasets: - MultiLegalPile - LEXTREME - LEXGLUE language: - multilingual - bg - cs - da - de - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sk - sl - sv license: cc tags: - multilingual --- # Model Card for joelito/legal-xlm-roberta-large This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)), a multilingual dataset from various legal sources covering 24 languages. ## Model Details ### Model Description - **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) - **Model type:** Transformer-based language model (RoBERTa) - **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv - **License:** CC BY-SA ## Uses ### Direct Use and Downstream Use You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks. It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable. Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data. ### Out-of-Scope Use For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. ## Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## How to Get Started with the Model See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask). ## Training Details This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)). Our pretraining procedure includes the following key steps: (a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base. (b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest. (c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423). (d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity. (e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs. (f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models. ### Training Data This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)). #### Preprocessing For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai) #### Training Hyperparameters - batche size: 512 samples - Number of steps: 1M/500K for the base/large model - Warm-up steps for the first 5\% of the total training steps - Learning rate: (linearly increasing up to) 1e-4 - Word masking: increased 20/30\% masking rate for base/large models respectively ## Evaluation For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard). For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237). ### Model Architecture and Objective It is a RoBERTa-based model. Run the following code to view the architecture: ``` from transformers import AutoModel model = AutoModel.from_pretrained('joelito/legal-xlm-roberta-large') print(model) RobertaModel( (embeddings): RobertaEmbeddings( (word_embeddings): Embedding(128000, 1024, padding_idx=0) (position_embeddings): Embedding(514, 1024, padding_idx=0) (token_type_embeddings): Embedding(1, 1024) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): RobertaEncoder( (layer): ModuleList( (0-23): 24 x RobertaLayer( (attention): RobertaAttention( (self): RobertaSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): RobertaSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): RobertaIntermediate( (dense): Linear(in_features=1024, out_features=4096, bias=True) (intermediate_act_fn): GELUActivation() ) (output): RobertaOutput( (dense): Linear(in_features=4096, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): RobertaPooler( (dense): Linear(in_features=1024, out_features=1024, bias=True) (activation): Tanh() ) ) ``` ### Compute Infrastructure Google TPU. #### Hardware Google TPU v3-8 #### Software pytorch, transformers. ## Citation ``` @article{Niklaus2023MultiLegalPileA6, title={MultiLegalPile: A 689GB Multilingual Legal Corpus}, author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho}, journal={ArXiv}, year={2023}, volume={abs/2306.02069} } ``` ## Model Card Authors Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:[email protected]) ## Model Card Contact Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:[email protected])
null
Non_BioNLP
# Model Card for joelito/legal-xlm-roberta-large This model is a multilingual model pretrained on legal data. It is based on XLM-R ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)). For pretraining we used [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)), a multilingual dataset from various legal sources covering 24 languages. ## Model Details ### Model Description - **Developed by:** Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) - **Model type:** Transformer-based language model (RoBERTa) - **Language(s) (NLP):** bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv - **License:** CC BY-SA ## Uses ### Direct Use and Downstream Use You can utilize the raw model for masked language modeling since we did not perform next sentence prediction. However, its main purpose is to be fine-tuned for downstream tasks. It's important to note that this model is primarily designed for fine-tuning on tasks that rely on the entire sentence, potentially with masked elements, to make decisions. Examples of such tasks include sequence classification, token classification, or question answering. For text generation tasks, models like GPT-2 are more suitable. Additionally, the model is specifically trained on legal data, aiming to deliver strong performance in that domain. Its performance may vary when applied to non-legal data. ### Out-of-Scope Use For tasks such as text generation you should look at model like GPT2. The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. ## Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## How to Get Started with the Model See [huggingface tutorials](https://huggingface.co/learn/nlp-course/chapter7/1?fw=pt). For masked word prediction see [this tutorial](https://huggingface.co/tasks/fill-mask). ## Training Details This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)). Our pretraining procedure includes the following key steps: (a) Warm-starting: We initialize our models from the original XLM-R checkpoints ([base](https://huggingface.co/xlm-roberta-base) and [large](https://huggingface.co/xlm-roberta-large)) of [Conneau et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/c04c19c2c2474dbf5f7ac4372c5b9af1-Paper.pdf) to benefit from a well-trained base. (b) Tokenization: We train a new tokenizer of 128K BPEs to cover legal language better. However, we reuse the original XLM-R embeddings for lexically overlapping tokens and use random embeddings for the rest. (c) Pretraining: We continue pretraining on Multi Legal Pile with batches of 512 samples for an additional 1M/500K steps for the base/large model. We use warm-up steps, a linearly increasing learning rate, and cosine decay scheduling. During the warm-up phase, only the embeddings are updated, and a higher masking rate and percentage of predictions based on masked tokens are used compared to [Devlin et al. (2019)](https://aclanthology.org/N19-1423). (d) Sentence Sampling: We employ a sentence sampler with exponential smoothing to handle disparate token proportions across cantons and languages, preserving per-canton and language capacity. (e) Mixed Cased Models: Our models cover both upper- and lowercase letters, similar to recently developed large PLMs. (f) Long Context Training: To account for long contexts in legal documents, we train the base-size multilingual model on long contexts with windowed attention. This variant, named Legal-Swiss-LF-base, uses a 15% masking probability, increased learning rate, and similar settings to small-context models. ### Training Data This model was pretrained on [Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile) ([Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai)). #### Preprocessing For further details see [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069?utm_source=tldrai) #### Training Hyperparameters - batche size: 512 samples - Number of steps: 1M/500K for the base/large model - Warm-up steps for the first 5\% of the total training steps - Learning rate: (linearly increasing up to) 1e-4 - Word masking: increased 20/30\% masking rate for base/large models respectively ## Evaluation For further insights into the evaluation, we refer to the [trainer state](https://huggingface.co/joelito/legal-xlm-roberta-large/blob/main/last-checkpoint/trainer_state.json). Additional information is available in the [tensorboard](https://huggingface.co/joelito/legal-xlm-roberta-large/tensorboard). For performance on downstream tasks, such as [LEXTREME](https://huggingface.co/datasets/joelito/lextreme) ([Niklaus et al. 2023](https://arxiv.org/abs/2301.13126)) or [LEXGLUE](https://huggingface.co/datasets/lex_glue) ([Chalkidis et al. 2021](https://arxiv.org/abs/2110.00976)), we refer to the results presented in Niklaus et al. (2023) [1](https://arxiv.org/abs/2306.02069), [2](https://arxiv.org/abs/2306.09237). ### Model Architecture and Objective It is a RoBERTa-based model. Run the following code to view the architecture: ``` from transformers import AutoModel model = AutoModel.from_pretrained('joelito/legal-xlm-roberta-large') print(model) RobertaModel( (embeddings): RobertaEmbeddings( (word_embeddings): Embedding(128000, 1024, padding_idx=0) (position_embeddings): Embedding(514, 1024, padding_idx=0) (token_type_embeddings): Embedding(1, 1024) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): RobertaEncoder( (layer): ModuleList( (0-23): 24 x RobertaLayer( (attention): RobertaAttention( (self): RobertaSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): RobertaSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): RobertaIntermediate( (dense): Linear(in_features=1024, out_features=4096, bias=True) (intermediate_act_fn): GELUActivation() ) (output): RobertaOutput( (dense): Linear(in_features=4096, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): RobertaPooler( (dense): Linear(in_features=1024, out_features=1024, bias=True) (activation): Tanh() ) ) ``` ### Compute Infrastructure Google TPU. #### Hardware Google TPU v3-8 #### Software pytorch, transformers. ## Citation ``` @article{Niklaus2023MultiLegalPileA6, title={MultiLegalPile: A 689GB Multilingual Legal Corpus}, author={Joel Niklaus and Veton Matoshi and Matthias Sturmer and Ilias Chalkidis and Daniel E. Ho}, journal={ArXiv}, year={2023}, volume={abs/2306.02069} } ``` ## Model Card Authors Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:[email protected]) ## Model Card Contact Joel Niklaus: [huggingface](https://huggingface.co/joelito); [email](mailto:[email protected]) Veton Matoshi: [huggingface](https://huggingface.co/kapllan); [email](mailto:[email protected])
{"datasets": ["MultiLegalPile", "LEXTREME", "LEXGLUE"], "language": ["multilingual", "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": "cc", "tags": ["multilingual"]}
task
[ "QUESTION_ANSWERING" ]
44,232
Vidyuth/bert-finetuned-squad
Vidyuth
question-answering
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2023-07-25T07:02:29Z
2023-07-25T07:47:11+00:00
19
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 --- # BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension - 16 attention heads - 336M parameters. ## Intended uses & limitations This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### Fine-tuning After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: ``` python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \ --model_name_or_path bert-large-uncased-whole-word-masking \ --dataset_name squad \ --do_train \ --do_eval \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./examples/models/wwm_uncased_finetuned_squad/ \ --per_device_eval_batch_size=3 \ --per_device_train_batch_size=3 \ ``` ## Evaluation results The results obtained are the following: ``` f1 = 93.15 exact_match = 86.91 ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
null
Non_BioNLP
# BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same. The training is identical -- each masked WordPiece token is predicted independently. After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. See below for more information regarding this fine-tuning. Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. This model has the following configuration: - 24-layer - 1024 hidden dimension - 16 attention heads - 336M parameters. ## Intended uses & limitations This model should be used as a question-answering model. You may use it in a question answering pipeline, or use it to output raw results given a query and a context. You may see other use cases in the [task summary](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) of the transformers documentation.## Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### Fine-tuning After pre-training, this model was fine-tuned on the SQuAD dataset with one of our fine-tuning scripts. In order to reproduce the training, you may use the following command: ``` python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_qa.py \ --model_name_or_path bert-large-uncased-whole-word-masking \ --dataset_name squad \ --do_train \ --do_eval \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./examples/models/wwm_uncased_finetuned_squad/ \ --per_device_eval_batch_size=3 \ --per_device_train_batch_size=3 \ ``` ## Evaluation results The results obtained are the following: ``` f1 = 93.15 exact_match = 86.91 ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0"}
task
[ "QUESTION_ANSWERING" ]
44,233
vicl/canine-s-finetuned-stsb
vicl
text-classification
[ "transformers", "pytorch", "tensorboard", "canine", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-04-01T19:47:18Z
2022-04-01T23:25:04+00:00
97
1
--- datasets: - glue license: apache-2.0 metrics: - spearmanr tags: - generated_from_trainer model-index: - name: canine-s-finetuned-stsb results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue args: stsb metrics: - type: spearmanr value: 0.8397182061195433 name: Spearmanr --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-s-finetuned-stsb This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7223 - Pearson: 0.8397 - Spearmanr: 0.8397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 360 | 0.7938 | 0.8083 | 0.8077 | | 1.278 | 2.0 | 720 | 0.7349 | 0.8322 | 0.8305 | | 0.6765 | 3.0 | 1080 | 0.7075 | 0.8374 | 0.8366 | | 0.6765 | 4.0 | 1440 | 0.7586 | 0.8360 | 0.8376 | | 0.4629 | 5.0 | 1800 | 0.7223 | 0.8397 | 0.8397 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # canine-s-finetuned-stsb This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7223 - Pearson: 0.8397 - Spearmanr: 0.8397 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 360 | 0.7938 | 0.8083 | 0.8077 | | 1.278 | 2.0 | 720 | 0.7349 | 0.8322 | 0.8305 | | 0.6765 | 3.0 | 1080 | 0.7075 | 0.8374 | 0.8366 | | 0.6765 | 4.0 | 1440 | 0.7586 | 0.8360 | 0.8376 | | 0.4629 | 5.0 | 1800 | 0.7223 | 0.8397 | 0.8397 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "canine-s-finetuned-stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.8397182061195433, "name": "Spearmanr"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,234
richie-ghost/paraphrase-mpnet-base-setfit-testing
richie-ghost
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2025-01-27T10:33:39Z
2025-01-27T10:33:53+00:00
9
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'nicks , seemingly uncertain what ''s going to make people laugh , runs the gamut from stale parody to raunchy sex gags to formula romantic comedy . ' - text: 'if there ''s one thing this world needs less of , it ''s movies about college that are written and directed by people who could n''t pass an entrance exam . ' - text: 'chokes on its own depiction of upper-crust decorum . ' - text: 'one of the more intelligent children ''s movies to hit theaters this year . ' - text: 'it provides an honest look at a community striving to anchor itself in new grounds . ' inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8405963302752294 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'a powerful and reasonably fulfilling gestalt '</li><li>'while the importance of being earnest offers opportunities for occasional smiles and chuckles '</li><li>'the proud warrior that still lingers in the souls of these characters '</li></ul> | | 0 | <ul><li>'hate yourself '</li><li>'eight crazy nights is a total misfire . '</li><li>'guilty about it '</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8406 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("richie-ghost/paraphrase-mpnet-base-setfit-testing") # Run inference preds = model("chokes on its own depiction of upper-crust decorum . ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 7.875 | 18 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.025 | 1 | 0.3097 | - | ### Framework Versions - Python: 3.11.11 - SetFit: 1.1.1 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1 | <ul><li>'a powerful and reasonably fulfilling gestalt '</li><li>'while the importance of being earnest offers opportunities for occasional smiles and chuckles '</li><li>'the proud warrior that still lingers in the souls of these characters '</li></ul> | | 0 | <ul><li>'hate yourself '</li><li>'eight crazy nights is a total misfire . '</li><li>'guilty about it '</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8406 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("richie-ghost/paraphrase-mpnet-base-setfit-testing") # Run inference preds = model("chokes on its own depiction of upper-crust decorum . ") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 3 | 7.875 | 18 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-----:|:----:|:-------------:|:---------------:| | 0.025 | 1 | 0.3097 | - | ### Framework Versions - Python: 3.11.11 - SetFit: 1.1.1 - Sentence Transformers: 3.3.1 - Transformers: 4.47.1 - PyTorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "nicks , seemingly uncertain what 's going to make people laugh , runs the gamut from stale parody to raunchy sex gags to formula romantic comedy . "}, {"text": "if there 's one thing this world needs less of , it 's movies about college that are written and directed by people who could n't pass an entrance exam . "}, {"text": "chokes on its own depiction of upper-crust decorum . "}, {"text": "one of the more intelligent children 's movies to hit theaters this year . "}, {"text": "it provides an honest look at a community striving to anchor itself in new grounds . "}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8405963302752294, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,235
mradermacher/Hre-Viet-translation1.5-GGUF
mradermacher
null
[ "transformers", "gguf", "generated_from_trainer", "hre", "base_model:ntviet/Hre-Viet-translation1.5", "base_model:quantized:ntviet/Hre-Viet-translation1.5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2025-02-05T22:33:16Z
2025-02-05T22:35:24+00:00
66
0
--- base_model: ntviet/Hre-Viet-translation1.5 language: - hre library_name: transformers license: apache-2.0 tags: - generated_from_trainer quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ntviet/Hre-Viet-translation1.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q2_K.gguf) | Q2_K | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_S.gguf) | Q3_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_L.gguf) | Q3_K_L | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.IQ4_XS.gguf) | IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q5_K_S.gguf) | Q5_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q5_K_M.gguf) | Q5_K_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.f16.gguf) | f16 | 0.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
null
Non_BioNLP
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ntviet/Hre-Viet-translation1.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q2_K.gguf) | Q2_K | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_S.gguf) | Q3_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q3_K_L.gguf) | Q3_K_L | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.IQ4_XS.gguf) | IQ4_XS | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q5_K_S.gguf) | Q5_K_S | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q5_K_M.gguf) | Q5_K_M | 0.1 | | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hre-Viet-translation1.5-GGUF/resolve/main/Hre-Viet-translation1.5.f16.gguf) | f16 | 0.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"base_model": "ntviet/Hre-Viet-translation1.5", "language": ["hre"], "library_name": "transformers", "license": "apache-2.0", "tags": ["generated_from_trainer"], "quantized_by": "mradermacher"}
task
[ "TRANSLATION" ]
44,236
aashritha24/mt5-small-finetuned-amazon-en-es
aashritha24
summarization
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-12-04T16:05:56Z
2024-12-04T16:51:15+00:00
26
0
--- base_model: google/mt5-small library_name: transformers license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2659 - Rouge1: 13.7437 - Rouge2: 5.9153 - Rougel: 13.4146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:| | 9.4301 | 1.0 | 625 | 3.5851 | 8.6109 | 2.4991 | 8.3173 | | 4.6907 | 2.0 | 1250 | 3.4105 | 11.1544 | 4.3475 | 10.7786 | | 4.1916 | 3.0 | 1875 | 3.3443 | 10.8192 | 3.848 | 10.44 | | 3.971 | 4.0 | 2500 | 3.3001 | 12.7096 | 5.0292 | 12.3085 | | 3.8402 | 5.0 | 3125 | 3.2788 | 12.1184 | 4.5893 | 11.7314 | | 3.7106 | 6.0 | 3750 | 3.2795 | 13.748 | 5.907 | 13.5413 | | 3.6523 | 7.0 | 4375 | 3.2702 | 13.6669 | 5.8956 | 13.3843 | | 3.6315 | 8.0 | 5000 | 3.2659 | 13.7437 | 5.9153 | 13.4146 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2659 - Rouge1: 13.7437 - Rouge2: 5.9153 - Rougel: 13.4146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:| | 9.4301 | 1.0 | 625 | 3.5851 | 8.6109 | 2.4991 | 8.3173 | | 4.6907 | 2.0 | 1250 | 3.4105 | 11.1544 | 4.3475 | 10.7786 | | 4.1916 | 3.0 | 1875 | 3.3443 | 10.8192 | 3.848 | 10.44 | | 3.971 | 4.0 | 2500 | 3.3001 | 12.7096 | 5.0292 | 12.3085 | | 3.8402 | 5.0 | 3125 | 3.2788 | 12.1184 | 4.5893 | 11.7314 | | 3.7106 | 6.0 | 3750 | 3.2795 | 13.748 | 5.907 | 13.5413 | | 3.6523 | 7.0 | 4375 | 3.2702 | 13.6669 | 5.8956 | 13.3843 | | 3.6315 | 8.0 | 5000 | 3.2659 | 13.7437 | 5.9153 | 13.4146 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
{"base_model": "google/mt5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
task
[ "SUMMARIZATION" ]
44,237
Cran-May/tempemotacilla-tucana-0302
Cran-May
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "r999", "trl", "sft", "conversational", "en", "zh", "base_model:prithivMLmods/Sombrero-Opus-14B-Elite6", "base_model:finetune:prithivMLmods/Sombrero-Opus-14B-Elite6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-03-02T04:03:54Z
2025-03-02T04:03:56+00:00
19
0
--- base_model: - prithivMLmods/Sombrero-Opus-14B-Elite6 language: - en - zh library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - r999 - trl - sft --- ![7.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/sMBoX-5wXEze5wmNNvmE_.png) # **Tucana-Opus-14B-r999** Tucana-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence. ## **Key Improvements** 1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses. 2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions. 3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries. 4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses. 5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. ## **Quickstart with transformers** Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Tucana-Opus-14B-r999" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "What are the key principles of general-purpose AI?" messages = [ {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## **Intended Use** 1. **General-Purpose Reasoning**: Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems. 2. **Educational and Informational Assistance**: Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users. 3. **Conversational AI and Chatbots**: Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation. 4. **Multilingual Applications**: Supports global communication, translations, and multilingual content generation. 5. **Structured Data Processing**: Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation. 6. **Long-Form Content Generation**: Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs. ## **Limitations** 1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its large parameter size and long-context support. 2. **Potential Bias in Responses**: While designed to be neutral, outputs may still reflect biases present in training data. 3. **Inconsistent Outputs in Creative Tasks**: May produce variable results in storytelling and highly subjective topics. 4. **Limited Real-World Awareness**: Does not have access to real-time events beyond its training cutoff. 5. **Error Propagation in Extended Outputs**: Minor errors in early responses may affect overall coherence in long-form outputs. 6. **Prompt Sensitivity**: The effectiveness of responses may depend on how well the input prompt is structured.
null
Non_BioNLP
![7.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/sMBoX-5wXEze5wmNNvmE_.png) # **Tucana-Opus-14B-r999** Tucana-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence. ## **Key Improvements** 1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses. 2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions. 3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries. 4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses. 5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. ## **Quickstart with transformers** Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Tucana-Opus-14B-r999" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "What are the key principles of general-purpose AI?" messages = [ {"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## **Intended Use** 1. **General-Purpose Reasoning**: Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems. 2. **Educational and Informational Assistance**: Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users. 3. **Conversational AI and Chatbots**: Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation. 4. **Multilingual Applications**: Supports global communication, translations, and multilingual content generation. 5. **Structured Data Processing**: Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation. 6. **Long-Form Content Generation**: Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs. ## **Limitations** 1. **Hardware Requirements**: Requires high-memory GPUs or TPUs due to its large parameter size and long-context support. 2. **Potential Bias in Responses**: While designed to be neutral, outputs may still reflect biases present in training data. 3. **Inconsistent Outputs in Creative Tasks**: May produce variable results in storytelling and highly subjective topics. 4. **Limited Real-World Awareness**: Does not have access to real-time events beyond its training cutoff. 5. **Error Propagation in Extended Outputs**: Minor errors in early responses may affect overall coherence in long-form outputs. 6. **Prompt Sensitivity**: The effectiveness of responses may depend on how well the input prompt is structured.
{"base_model": ["prithivMLmods/Sombrero-Opus-14B-Elite6"], "language": ["en", "zh"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "r999", "trl", "sft"]}
task
[ "TRANSLATION" ]
44,238
RichardErkhov/ibm-granite_-_granite-3.0-2b-base-8bits
RichardErkhov
null
[ "safetensors", "granite", "arxiv:0000.00000", "8-bit", "bitsandbytes", "region:us" ]
2024-11-19T16:48:23Z
2024-11-19T16:49:41+00:00
8
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-3.0-2b-base - bnb 8bits - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-3.0-2b-base/ Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - granite-3.0 model-index: - name: granite-3.0-2b-base results: - task: type: text-generation dataset: type: human-exams name: MMLU metrics: - name: pass@1 type: pass@1 value: 55.00 veriefied: false - task: type: text-generation dataset: type: human-exams name: MMLU-Pro metrics: - name: pass@1 type: pass@1 value: 23.79 veriefied: false - task: type: text-generation dataset: type: human-exams name: AGI-Eval metrics: - name: pass@1 type: pass@1 value: 22.56 veriefied: false - task: type: text-generation dataset: type: commonsense name: WinoGrande metrics: - name: pass@1 type: pass@1 value: 74.90 veriefied: false - task: type: text-generation dataset: type: commonsense name: OBQA metrics: - name: pass@1 type: pass@1 value: 43.00 veriefied: false - task: type: text-generation dataset: type: commonsense name: SIQA metrics: - name: pass@1 type: pass@1 value: 59.84 veriefied: false - task: type: text-generation dataset: type: commonsense name: PIQA metrics: - name: pass@1 type: pass@1 value: 79.27 veriefied: false - task: type: text-generation dataset: type: commonsense name: Hellaswag metrics: - name: pass@1 type: pass@1 value: 77.65 veriefied: false - task: type: text-generation dataset: type: commonsense name: TruthfulQA metrics: - name: pass@1 type: pass@1 value: 39.90 veriefied: false - task: type: text-generation dataset: type: reading-comprehension name: BoolQ metrics: - name: pass@1 type: pass@1 value: 81.35 veriefied: false - task: type: text-generation dataset: type: reading-comprehension name: SQuAD 2.0 metrics: - name: pass@1 type: pass@1 value: 25.22 veriefied: false - task: type: text-generation dataset: type: reasoning name: ARC-C metrics: - name: pass@1 type: pass@1 value: 54.27 veriefied: false - task: type: text-generation dataset: type: reasoning name: GPQA metrics: - name: pass@1 type: pass@1 value: 30.58 veriefied: false - task: type: text-generation dataset: type: reasoning name: BBH metrics: - name: pass@1 type: pass@1 value: 40.69 veriefied: false - task: type: text-generation dataset: type: reasoning name: MUSR metrics: - name: pass@1 type: pass@1 value: 34.34 veriefied: false - task: type: text-generation dataset: type: code name: HumanEval metrics: - name: pass@1 type: pass@1 value: 38.41 veriefied: false - task: type: text-generation dataset: type: code name: MBPP metrics: - name: pass@1 type: pass@1 value: 35.40 veriefied: false - task: type: text-generation dataset: type: math name: GSM8K metrics: - name: pass@1 type: pass@1 value: 47.23 veriefied: false - task: type: text-generation dataset: type: math name: MATH metrics: - name: pass@1 type: pass@1 value: 19.46 veriefied: false --- <!-- ![image/png](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/figures/granite-3_0-language-models-3x-v1.png) --> # Granite-3.0-2B-Base **Model Summary:** Granite-3.0-2B-Base is a decoder-only language model to support a variety of text-to-text generation tasks. It is trained from scratch following a two-stage training strategy. In the first stage, it is trained on 10 trillion tokens sourced from diverse domains. During the second stage, it is further trained on 2 trillion tokens using a carefully curated mix of high-quality data, aiming to enhance its performance on specific tasks. - **Developers:** Granite Team, IBM - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models) - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf) - **Release Date**: October 21st, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages. **Intended use:** Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question-answering, and more. All Granite Base models are able to handle these tasks as they were trained on a large amount of data from various domains. Moreover, they can serve as baseline to create specialized models for specific application scenarios. **Generation:** This is a simple example of how to use Granite-3.0-2B-Base model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the code snippet below to run the example. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "auto" model_path = "ibm-granite/granite-3.0-2b-base" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired input_text = "Where is the Thomas J. Watson Research Center located?" # tokenize the text input_tokens = tokenizer(input_text, return_tensors="pt").to(device) # generate output tokens output = model.generate(**input_tokens, max_length=4000) # decode output tokens into text output = tokenizer.batch_decode(output) # print output print(output) ``` **Model Architecture:** Granite-3.0-2B-Base is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings. | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE | | :-------- | :-------- | :--------| :--------| :--------| | Embedding size | **2048** | 4096 | 1024 | 1536 | | Number of layers | **40** | 40 | 24 | 32 | | Attention head size | **64** | 128 | 64 | 64 | | Number of attention heads | **32** | 32 | 16 | 24 | | Number of KV heads | **8** | 8 | 8 | 8 | | MLP hidden size | **8192** | 12800 | 512 | 512 | | MLP activation | **SwiGLU** | SwiGLU | SwiGLU | SwiGLU | | Number of Experts | **—** | — | 32 | 40 | | MoE TopK | **—** | — | 8 | 8 | | Initialization std | **0.1** | 0.1 | 0.1 | 0.1 | | Sequence Length | **4096** | 4096 | 4096 | 4096 | | Position Embedding | **RoPE** | RoPE | RoPE | RoPE | | # Parameters | **2.5B** | 8.1B | 1.3B | 3.3B | | # Active Parameters | **2.5B** | 8.1B | 400M | 800M | | # Training tokens | **12T** | 12T | 10T | 10T | **Training Data:** This model is trained on a mix of open source and proprietary data following a two-stage training strategy. * Stage 1 data: The data for stage 1 is sourced from diverse domains, such as: web, code, academic sources, books, and math data. * Stage 2 data: The data for stage 2 comprises a curated mix of high-quality data from the same domains, plus multilingual and instruction data. The goal of this second training phase is to enhance the model’s performance on specific tasks. A detailed attribution of datasets can be found in the [Granite Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf) and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). **Infrastructure:** We train Granite 3.0 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs while minimizing environmental impact by utilizing 100% renewable energy sources. **Ethical Considerations and Limitations:** The use of Large Language Models involves risks and ethical considerations people must be aware of, including but not limited to: bias and fairness, misinformation, and autonomous decision-making. Granite-3.0-2B-Base model is not the exception in this regard. Even though this model is suited for multiple generative AI tasks, it has not undergone any safety alignment, there it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying text verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use Granite-3.0-2B-Base model with ethical intentions and in a responsible way. <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) granite-3.0-2b-base - bnb 8bits - Model creator: https://huggingface.co/ibm-granite/ - Original model: https://huggingface.co/ibm-granite/granite-3.0-2b-base/ Original model description: --- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - granite-3.0 model-index: - name: granite-3.0-2b-base results: - task: type: text-generation dataset: type: human-exams name: MMLU metrics: - name: pass@1 type: pass@1 value: 55.00 veriefied: false - task: type: text-generation dataset: type: human-exams name: MMLU-Pro metrics: - name: pass@1 type: pass@1 value: 23.79 veriefied: false - task: type: text-generation dataset: type: human-exams name: AGI-Eval metrics: - name: pass@1 type: pass@1 value: 22.56 veriefied: false - task: type: text-generation dataset: type: commonsense name: WinoGrande metrics: - name: pass@1 type: pass@1 value: 74.90 veriefied: false - task: type: text-generation dataset: type: commonsense name: OBQA metrics: - name: pass@1 type: pass@1 value: 43.00 veriefied: false - task: type: text-generation dataset: type: commonsense name: SIQA metrics: - name: pass@1 type: pass@1 value: 59.84 veriefied: false - task: type: text-generation dataset: type: commonsense name: PIQA metrics: - name: pass@1 type: pass@1 value: 79.27 veriefied: false - task: type: text-generation dataset: type: commonsense name: Hellaswag metrics: - name: pass@1 type: pass@1 value: 77.65 veriefied: false - task: type: text-generation dataset: type: commonsense name: TruthfulQA metrics: - name: pass@1 type: pass@1 value: 39.90 veriefied: false - task: type: text-generation dataset: type: reading-comprehension name: BoolQ metrics: - name: pass@1 type: pass@1 value: 81.35 veriefied: false - task: type: text-generation dataset: type: reading-comprehension name: SQuAD 2.0 metrics: - name: pass@1 type: pass@1 value: 25.22 veriefied: false - task: type: text-generation dataset: type: reasoning name: ARC-C metrics: - name: pass@1 type: pass@1 value: 54.27 veriefied: false - task: type: text-generation dataset: type: reasoning name: GPQA metrics: - name: pass@1 type: pass@1 value: 30.58 veriefied: false - task: type: text-generation dataset: type: reasoning name: BBH metrics: - name: pass@1 type: pass@1 value: 40.69 veriefied: false - task: type: text-generation dataset: type: reasoning name: MUSR metrics: - name: pass@1 type: pass@1 value: 34.34 veriefied: false - task: type: text-generation dataset: type: code name: HumanEval metrics: - name: pass@1 type: pass@1 value: 38.41 veriefied: false - task: type: text-generation dataset: type: code name: MBPP metrics: - name: pass@1 type: pass@1 value: 35.40 veriefied: false - task: type: text-generation dataset: type: math name: GSM8K metrics: - name: pass@1 type: pass@1 value: 47.23 veriefied: false - task: type: text-generation dataset: type: math name: MATH metrics: - name: pass@1 type: pass@1 value: 19.46 veriefied: false --- <!-- ![image/png](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/figures/granite-3_0-language-models-3x-v1.png) --> # Granite-3.0-2B-Base **Model Summary:** Granite-3.0-2B-Base is a decoder-only language model to support a variety of text-to-text generation tasks. It is trained from scratch following a two-stage training strategy. In the first stage, it is trained on 10 trillion tokens sourced from diverse domains. During the second stage, it is further trained on 2 trillion tokens using a carefully curated mix of high-quality data, aiming to enhance its performance on specific tasks. - **Developers:** Granite Team, IBM - **GitHub Repository:** [ibm-granite/granite-3.0-language-models](https://github.com/ibm-granite/granite-3.0-language-models) - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/) - **Paper:** [Granite 3.0 Language Models](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf) - **Release Date**: October 21st, 2024 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) **Supported Languages:** English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 3.0 models for languages beyond these 12 languages. **Intended use:** Prominent use cases of LLMs in text-to-text generation include summarization, text classification, extraction, question-answering, and more. All Granite Base models are able to handle these tasks as they were trained on a large amount of data from various domains. Moreover, they can serve as baseline to create specialized models for specific application scenarios. **Generation:** This is a simple example of how to use Granite-3.0-2B-Base model. Install the following libraries: ```shell pip install torch torchvision torchaudio pip install accelerate pip install transformers ``` Then, copy the code snippet below to run the example. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "auto" model_path = "ibm-granite/granite-3.0-2b-base" tokenizer = AutoTokenizer.from_pretrained(model_path) # drop device_map if running on CPU model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device) model.eval() # change input text as desired input_text = "Where is the Thomas J. Watson Research Center located?" # tokenize the text input_tokens = tokenizer(input_text, return_tensors="pt").to(device) # generate output tokens output = model.generate(**input_tokens, max_length=4000) # decode output tokens into text output = tokenizer.batch_decode(output) # print output print(output) ``` **Model Architecture:** Granite-3.0-2B-Base is based on a decoder-only dense transformer architecture. Core components of this architecture are: GQA and RoPE, MLP with SwiGLU, RMSNorm, and shared input/output embeddings. | Model | 2B Dense | 8B Dense | 1B MoE | 3B MoE | | :-------- | :-------- | :--------| :--------| :--------| | Embedding size | **2048** | 4096 | 1024 | 1536 | | Number of layers | **40** | 40 | 24 | 32 | | Attention head size | **64** | 128 | 64 | 64 | | Number of attention heads | **32** | 32 | 16 | 24 | | Number of KV heads | **8** | 8 | 8 | 8 | | MLP hidden size | **8192** | 12800 | 512 | 512 | | MLP activation | **SwiGLU** | SwiGLU | SwiGLU | SwiGLU | | Number of Experts | **—** | — | 32 | 40 | | MoE TopK | **—** | — | 8 | 8 | | Initialization std | **0.1** | 0.1 | 0.1 | 0.1 | | Sequence Length | **4096** | 4096 | 4096 | 4096 | | Position Embedding | **RoPE** | RoPE | RoPE | RoPE | | # Parameters | **2.5B** | 8.1B | 1.3B | 3.3B | | # Active Parameters | **2.5B** | 8.1B | 400M | 800M | | # Training tokens | **12T** | 12T | 10T | 10T | **Training Data:** This model is trained on a mix of open source and proprietary data following a two-stage training strategy. * Stage 1 data: The data for stage 1 is sourced from diverse domains, such as: web, code, academic sources, books, and math data. * Stage 2 data: The data for stage 2 comprises a curated mix of high-quality data from the same domains, plus multilingual and instruction data. The goal of this second training phase is to enhance the model’s performance on specific tasks. A detailed attribution of datasets can be found in the [Granite Technical Report](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf) and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). **Infrastructure:** We train Granite 3.0 Language Models using IBM's super computing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs while minimizing environmental impact by utilizing 100% renewable energy sources. **Ethical Considerations and Limitations:** The use of Large Language Models involves risks and ethical considerations people must be aware of, including but not limited to: bias and fairness, misinformation, and autonomous decision-making. Granite-3.0-2B-Base model is not the exception in this regard. Even though this model is suited for multiple generative AI tasks, it has not undergone any safety alignment, there it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying text verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use Granite-3.0-2B-Base model with ethical intentions and in a responsible way. <!-- ## Citation ``` @misc{granite-models, author = {author 1, author2, ...}, title = {}, journal = {}, volume = {}, year = {2024}, url = {https://arxiv.org/abs/0000.00000}, } ``` -->
{}
task
[ "TEXT_CLASSIFICATION", "SUMMARIZATION" ]
44,239
codys12/Hermes-2-Pro-Llama-3-8B
codys12
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:finetune:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-02T23:38:59Z
2024-05-02T23:39:00+00:00
6
0
--- base_model: NousResearch/Meta-Llama-3-8B datasets: - teknium/OpenHermes-2.5 language: - en license: apache-2.0 tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl widget: - example_title: Hermes 2 Pro messages: - role: system content: You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] --- # Hermes 2 Pro - Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Example Outputs ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ll2j2wkQffCsiSwUjfRUq.png) ### Write the plot for a story where anime became real life: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/h_7aXGXdm2p2ONYuDF4Ii.png) ### Coding Assistance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bBd0hyAb8w5rKUiN2w1I6.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5520|± |0.0145| | | |acc_norm|0.5887|± |0.0144| |arc_easy | 0|acc |0.8350|± |0.0076| | | |acc_norm|0.8123|± |0.0080| |boolq | 1|acc |0.8584|± |0.0061| |hellaswag | 0|acc |0.6265|± |0.0048| | | |acc_norm|0.8053|± |0.0040| |openbookqa | 0|acc |0.3800|± |0.0217| | | |acc_norm|0.4580|± |0.0223| |piqa | 0|acc |0.8003|± |0.0093| | | |acc_norm|0.8118|± |0.0091| |winogrande | 0|acc |0.7490|± |0.0122| ``` Average: 72.62 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2520|± |0.0273| | | |acc_norm|0.2559|± |0.0274| |agieval_logiqa_en | 0|acc |0.3548|± |0.0188| | | |acc_norm|0.3625|± |0.0189| |agieval_lsat_ar | 0|acc |0.1826|± |0.0255| | | |acc_norm|0.1913|± |0.0260| |agieval_lsat_lr | 0|acc |0.5510|± |0.0220| | | |acc_norm|0.5255|± |0.0221| |agieval_lsat_rc | 0|acc |0.6431|± |0.0293| | | |acc_norm|0.6097|± |0.0298| |agieval_sat_en | 0|acc |0.7330|± |0.0309| | | |acc_norm|0.7039|± |0.0319| |agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343| | | |acc_norm|0.3689|± |0.0337| |agieval_sat_math | 0|acc |0.3909|± |0.0330| | | |acc_norm|0.3773|± |0.0328| ``` Average: 42.44 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222| |bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102| |bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126| |bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288| ``` Average: 43.55 ## TruthfulQA: ``` | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.410|± |0.0172| | | |mc2 |0.578|± |0.0157| ``` # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Pro-Llama-3-8B, url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B]https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)}, title={Hermes-2-Pro-Llama-3-8B}, author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
null
Non_BioNLP
# Hermes 2 Pro - Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Example Outputs ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ll2j2wkQffCsiSwUjfRUq.png) ### Write the plot for a story where anime became real life: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/h_7aXGXdm2p2ONYuDF4Ii.png) ### Coding Assistance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bBd0hyAb8w5rKUiN2w1I6.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5520|± |0.0145| | | |acc_norm|0.5887|± |0.0144| |arc_easy | 0|acc |0.8350|± |0.0076| | | |acc_norm|0.8123|± |0.0080| |boolq | 1|acc |0.8584|± |0.0061| |hellaswag | 0|acc |0.6265|± |0.0048| | | |acc_norm|0.8053|± |0.0040| |openbookqa | 0|acc |0.3800|± |0.0217| | | |acc_norm|0.4580|± |0.0223| |piqa | 0|acc |0.8003|± |0.0093| | | |acc_norm|0.8118|± |0.0091| |winogrande | 0|acc |0.7490|± |0.0122| ``` Average: 72.62 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2520|± |0.0273| | | |acc_norm|0.2559|± |0.0274| |agieval_logiqa_en | 0|acc |0.3548|± |0.0188| | | |acc_norm|0.3625|± |0.0189| |agieval_lsat_ar | 0|acc |0.1826|± |0.0255| | | |acc_norm|0.1913|± |0.0260| |agieval_lsat_lr | 0|acc |0.5510|± |0.0220| | | |acc_norm|0.5255|± |0.0221| |agieval_lsat_rc | 0|acc |0.6431|± |0.0293| | | |acc_norm|0.6097|± |0.0298| |agieval_sat_en | 0|acc |0.7330|± |0.0309| | | |acc_norm|0.7039|± |0.0319| |agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343| | | |acc_norm|0.3689|± |0.0337| |agieval_sat_math | 0|acc |0.3909|± |0.0330| | | |acc_norm|0.3773|± |0.0328| ``` Average: 42.44 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± |0.0222| |bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102| |bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126| |bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288| ``` Average: 43.55 ## TruthfulQA: ``` | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.410|± |0.0172| | | |mc2 |0.578|± |0.0157| ``` # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Pro-Llama-3-8B, url={[https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B]https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)}, title={Hermes-2-Pro-Llama-3-8B}, author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
{"base_model": "NousResearch/Meta-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl"], "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
task
[ "TRANSLATION" ]
44,240
fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Data Protection", "Life Sciences", "Regulations", "Compliance", "Industry", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-13T07:25:22Z
2024-05-13T07:25:38+00:00
9
0
--- datasets: - fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Data Protection - Life Sciences - Regulations - Compliance - Industry --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: legal regulations search for life sciences industry ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: legal regulations search for life sciences industry ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/jina-embeddings-v2-base-en-13052024-35bv-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Data Protection", "Life Sciences", "Regulations", "Compliance", "Industry"]}
task
[ "TEXT_CLASSIFICATION" ]
44,241
gaudi/opus-mt-es-it-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-22T15:45:50Z
2024-10-19T02:48:33+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-it).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-it --output_dir ./ctranslate2/opus-mt-es-it-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-es-it-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-es-it-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-es-it-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-es-it).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-es-it --output_dir ./ctranslate2/opus-mt-es-it-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-es-it-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-es-it-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-es-it-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
44,242
dhruv01/flan-t5-xl-gguf
dhruv01
text2text-generation
[ "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "license:apache-2.0", "region:us" ]
2024-09-05T13:44:10Z
2024-09-05T15:44:30+00:00
10
0
--- datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed language: - en - fr - ro - de - multilingual license: apache-2.0 tags: - text2text-generation widget: - text: 'Translate to German: My name is Arthur' example_title: Translation - text: Please answer to the following question. Who is going to be the next Ballon d'or? example_title: Question Answering - text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.' example_title: Logical reasoning - text: Please answer the following question. What is the boiling point of Nitrogen? example_title: Scientific knowledge - text: Answer the following yes/no question. Can you write a whole Haiku in a single tweet? example_title: Yes/no question - text: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet? example_title: Reasoning task - text: 'Q: ( False or not False or False ) is? A: Let''s think step by step' example_title: Boolean Expressions - text: The square root of x is the cube root of y. What is y to the power of 2, if x = 4? example_title: Math reasoning - text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s not certain how many lessons you''ll learn by your thirties. Does the premise entail the hypothesis?' example_title: Premise and hypothesis --- # Model Card for FLAN-T5 XL <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg" alt="drawing" width="600"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) # TL;DR If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages. As mentioned in the first few lines of the abstract : > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large). # Model Details ## Model Description The details are in the original [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl)
null
Non_BioNLP
# Model Card for FLAN-T5 XL <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg" alt="drawing" width="600"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Uses](#uses) 4. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 5. [Training Details](#training-details) 6. [Evaluation](#evaluation) 7. [Environmental Impact](#environmental-impact) 8. [Citation](#citation) # TL;DR If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering also more languages. As mentioned in the first few lines of the abstract : > Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints,1 which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large). # Model Details ## Model Description The details are in the original [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl)
{"datasets": ["svakulenk0/qrecc", "taskmaster2", "djaym7/wiki_dialog", "deepmind/code_contests", "lambada", "gsm8k", "aqua_rat", "esnli", "quasc", "qed"], "language": ["en", "fr", "ro", "de", "multilingual"], "license": "apache-2.0", "tags": ["text2text-generation"], "widget": [{"text": "Translate to German: My name is Arthur", "example_title": "Translation"}, {"text": "Please answer to the following question. Who is going to be the next Ballon d'or?", "example_title": "Question Answering"}, {"text": "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.", "example_title": "Logical reasoning"}, {"text": "Please answer the following question. What is the boiling point of Nitrogen?", "example_title": "Scientific knowledge"}, {"text": "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?", "example_title": "Yes/no question"}, {"text": "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?", "example_title": "Reasoning task"}, {"text": "Q: ( False or not False or False ) is? A: Let's think step by step", "example_title": "Boolean Expressions"}, {"text": "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?", "example_title": "Math reasoning"}, {"text": "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?", "example_title": "Premise and hypothesis"}]}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
44,243
22h/cabrita-lora-v0-1
22h
null
[ "pt", "license:openrail", "region:us" ]
2023-03-17T01:26:12Z
2023-03-17T19:28:20+00:00
0
70
--- language: - pt license: openrail --- # Cabrita: portuguese instructLLaMA ## Usage Check the Github repo with code: https://github.com/22-hours/cabrita ```python from peft import PeftModel from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LLaMAForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", load_in_8bit=True, device_map="auto", ) model = PeftModel.from_pretrained(model, "22h/cabrita-lora-v0-1") ``` ## Data We translated the [alpaca_data.json](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) to portuguese using ChatGPT. Even if this translation was not the best, the tradeoff between costs and results were. We paid around US$ 8.00 to translate the full dataset to portuguese. If you want to know more about how the dataset was built go to: [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca). ## Finetuning To finetuned the LLaMA model we used the code available on [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to finetune the LLaMA model using PEFT from Hugging Face. With this, we could run our finetuning step using 1 A100 at Colab on top of LLaMA-7B. We trained during 4 hours and we found the results pretty incredible with just that much time. The notebook we used is avaible [here](notebooks/train_lora.ipynb). ## Example outputs Stanford Alpaca: ``` Instruction: Invente uma desculpa criativa pra dizer que não preciso ir à festa. Response: Eu preciso se concentrar em trabalho importante para mim, por isso eu não posso ir à festa. ``` Cabrita: ``` Instrução: Invente uma desculpa criativa pra dizer que não preciso ir à festa. Resposta: "Eu preciso ficar em casa para cuidar dos meus animais de estimação." ``` Stanford Alpaca: ``` Instruction: Como posso começar a carreira como cientista de dados? Escreva na forma de lista. Response: 1. Comece por se especializar em um campo específico, como big data, computação, bioinformática ou estatística. 2. Obtivere um diploma em um programa de graduação relacionado a ciência de dados, como informática, matemática ou estatística. 3. Aquisite experiência prática através de internships ou projetos em empresas ou instituições. 4. Ameize sua resumo e currículo com resultados relevantes e experiência em trabalho relacionado. 5. Leia artigos, livros e atualizações sobre ciência de dados. 6. Crie um portfólio com projetos e experimentos que tenha realizado. 7. Receba um certificado de qualificação em alguma área de ciência de dados. 8. Tente obter contato com profissionais da área para conversar sobre oportunidades de emprego. 9. Acesse redes sociais, como LinkedIn, para encontrar empregos e conexões. 10. Continui sua formação atualizando-se com tecnologias e ferramentas de ciência de dados. ``` Cabrita: ``` Instrução: Como posso começar a carreira como cientista de dados? Escreva na forma de lista. Resposta: 1. Obtenha um mestrado em ciência de dados ou engenharia de dados. 2. Encontre um emprego como cientista de dados ou engenheiro de dados. 3. Conheça as principais ferramentas de análise de dados, como Python, R e SQL. 4. Conheça as principais técnicas de análise de dados, como regressão linear, regressão logística e análise de clustering. 5. Conheça as principais abordagens de análise de dados, como análise exploratória, análise de associação e análise de regressão. 6. Conheça as principais ferramentas de visualização de dados, como Tableau, QlikView e Power BI. 7. Conheça as principais abordagens de análise de dados, como análise de associação, análise de regressão e análise exploratória. ``` You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb). ## References > If I have seen further it is by standing on the sholders [sic] of Giants. > -- <cite>Isaac Newton</cite> We started this section with this citation because everything we did was only possible due to the strong community and works that other people and groups did. For our work, we rely mainly in the works developed by: [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), [Alpaca Lora](https://github.com/tloen/alpaca-lora), [ChatGPT](https://openai.com/blog/chatgpt) and [Hugging Face](https://huggingface.co/). So, thank you all for the great work and open this to the world! ## Hardware Requirements For training we have used an A100 in Google Colab. For eval, you can use a T4.
null
Non_BioNLP
# Cabrita: portuguese instructLLaMA ## Usage Check the Github repo with code: https://github.com/22-hours/cabrita ```python from peft import PeftModel from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf") model = LLaMAForCausalLM.from_pretrained( "decapoda-research/llama-7b-hf", load_in_8bit=True, device_map="auto", ) model = PeftModel.from_pretrained(model, "22h/cabrita-lora-v0-1") ``` ## Data We translated the [alpaca_data.json](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) to portuguese using ChatGPT. Even if this translation was not the best, the tradeoff between costs and results were. We paid around US$ 8.00 to translate the full dataset to portuguese. If you want to know more about how the dataset was built go to: [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca). ## Finetuning To finetuned the LLaMA model we used the code available on [Alpaca Lora](https://github.com/tloen/alpaca-lora), which provides code to finetune the LLaMA model using PEFT from Hugging Face. With this, we could run our finetuning step using 1 A100 at Colab on top of LLaMA-7B. We trained during 4 hours and we found the results pretty incredible with just that much time. The notebook we used is avaible [here](notebooks/train_lora.ipynb). ## Example outputs Stanford Alpaca: ``` Instruction: Invente uma desculpa criativa pra dizer que não preciso ir à festa. Response: Eu preciso se concentrar em trabalho importante para mim, por isso eu não posso ir à festa. ``` Cabrita: ``` Instrução: Invente uma desculpa criativa pra dizer que não preciso ir à festa. Resposta: "Eu preciso ficar em casa para cuidar dos meus animais de estimação." ``` Stanford Alpaca: ``` Instruction: Como posso começar a carreira como cientista de dados? Escreva na forma de lista. Response: 1. Comece por se especializar em um campo específico, como big data, computação, bioinformática ou estatística. 2. Obtivere um diploma em um programa de graduação relacionado a ciência de dados, como informática, matemática ou estatística. 3. Aquisite experiência prática através de internships ou projetos em empresas ou instituições. 4. Ameize sua resumo e currículo com resultados relevantes e experiência em trabalho relacionado. 5. Leia artigos, livros e atualizações sobre ciência de dados. 6. Crie um portfólio com projetos e experimentos que tenha realizado. 7. Receba um certificado de qualificação em alguma área de ciência de dados. 8. Tente obter contato com profissionais da área para conversar sobre oportunidades de emprego. 9. Acesse redes sociais, como LinkedIn, para encontrar empregos e conexões. 10. Continui sua formação atualizando-se com tecnologias e ferramentas de ciência de dados. ``` Cabrita: ``` Instrução: Como posso começar a carreira como cientista de dados? Escreva na forma de lista. Resposta: 1. Obtenha um mestrado em ciência de dados ou engenharia de dados. 2. Encontre um emprego como cientista de dados ou engenheiro de dados. 3. Conheça as principais ferramentas de análise de dados, como Python, R e SQL. 4. Conheça as principais técnicas de análise de dados, como regressão linear, regressão logística e análise de clustering. 5. Conheça as principais abordagens de análise de dados, como análise exploratória, análise de associação e análise de regressão. 6. Conheça as principais ferramentas de visualização de dados, como Tableau, QlikView e Power BI. 7. Conheça as principais abordagens de análise de dados, como análise de associação, análise de regressão e análise exploratória. ``` You can test it using the eval notebook [here](https://colab.research.google.com/github/22-hours/cabrita/blob/main/notebooks/cabrita-lora.ipynb). ## References > If I have seen further it is by standing on the sholders [sic] of Giants. > -- <cite>Isaac Newton</cite> We started this section with this citation because everything we did was only possible due to the strong community and works that other people and groups did. For our work, we rely mainly in the works developed by: [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), [Alpaca Lora](https://github.com/tloen/alpaca-lora), [ChatGPT](https://openai.com/blog/chatgpt) and [Hugging Face](https://huggingface.co/). So, thank you all for the great work and open this to the world! ## Hardware Requirements For training we have used an A100 in Google Colab. For eval, you can use a T4.
{"language": ["pt"], "license": "openrail"}
task
[ "TRANSLATION" ]
44,244
viggypoker1/bge-base-financial-matryoshka
viggypoker1
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-02-11T05:34:47Z
2025-02-11T05:35:03+00:00
12
0
--- base_model: BAAI/bge-base-en-v1.5 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:6300 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: Our effective tax rate for fiscal years 2023 and 2022 was 19% and 13%, respectively. sentences: - What does the Corporate and Other segment include in its composition? - What was the effective tax rate for Microsoft in fiscal year 2023? - What roles did Elizabeth Rutledge hold before being appointed as Chief Marketing Officer in February 2018? - source_sentence: Many factors are considered when assessing whether it is more likely than not that the deferred tax assets will be realized, including recent cumulative earnings, expectations of future taxable income, carryforward periods and other relevant quantitative and qualitative factors. sentences: - What factors are considered when evaluating the realization of deferred tax assets? - What are the contents of Item 8 in the financial document? - Are goodwill and indefinite-lived intangible assets amortized? - source_sentence: Cost of net revenues represents costs associated with customer support, site operations, and payment processing. Significant components of these costs primarily consist of employee compensation (including stock-based compensation), contractor costs, facilities costs, depreciation of equipment and amortization expense, bank transaction fees, credit card interchange and assessment fees, authentication costs, shipping costs and digital services tax. sentences: - What was the total percentage of U.S. dialysis patient service revenues coming from government-based programs in 2023? - What are the key components of cost of net revenues? - What elements define Ford Credit's balance sheet liquidity profile? - source_sentence: Net revenue from outside of the United States decreased 15.5% to $34.9 billion in fiscal year 2023. sentences: - How did the company's net revenue perform internationally in fiscal year 2023? - What was the fair value of money market mutual funds measured at as of January 31, 2023 and how was it categorized in the fair value hierarchy? - How much did professional services expenses increase in 2023 from the previous year? - source_sentence: Marketplace revenue increased $86.3 million to $2.0 billion in the year ended December 31, 2023 compared to the year ended December 31, 2022. sentences: - What were the main factors considered in the audit process to evaluate the self-insurance reserve? - How much did Marketplace revenue increase in the year ended December 31, 2023? - Why did operations and support expenses decrease in 2023, and what factors offset this decrease? model-index: - name: BGE base Financial Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.7 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8285714285714286 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8785714285714286 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9085714285714286 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.27619047619047615 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17571428571428568 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09085714285714284 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8285714285714286 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8785714285714286 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9085714285714286 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8070713920635244 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.774145124716553 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7778677437532947 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.6942857142857143 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.83 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8728571428571429 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9042857142857142 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6942857142857143 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.27666666666666667 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17457142857142854 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09042857142857143 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6942857142857143 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.83 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8728571428571429 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9042857142857142 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8031148082413071 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.770209750566893 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7742865136346454 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.6828571428571428 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8242857142857143 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8657142857142858 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9042857142857142 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6828571428571428 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2747619047619047 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17314285714285713 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09042857142857143 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6828571428571428 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8242857142857143 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8657142857142858 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9042857142857142 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7969921030232127 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.762270975056689 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7658165867130817 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.68 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8085714285714286 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8514285714285714 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8842857142857142 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.68 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2695238095238095 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17028571428571426 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08842857142857141 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.68 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8085714285714286 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8514285714285714 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8842857142857142 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7840025892817639 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.751556689342403 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7563834249655896 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.6371428571428571 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.7814285714285715 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8271428571428572 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8728571428571429 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6371428571428571 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2604761904761905 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1654285714285714 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08728571428571427 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6371428571428571 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.7814285714285715 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8271428571428572 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8728571428571429 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7566246856089167 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7193163265306118 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7237471572016445 name: Cosine Map@100 --- # BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("viggypoker1/bge-base-financial-matryoshka") # Run inference sentences = [ 'Marketplace revenue increased $86.3 million to $2.0 billion in the year ended December 31, 2023 compared to the year ended December 31, 2022.', 'How much did Marketplace revenue increase in the year ended December 31, 2023?', 'Why did operations and support expenses decrease in 2023, and what factors offset this decrease?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7 | | cosine_accuracy@3 | 0.8286 | | cosine_accuracy@5 | 0.8786 | | cosine_accuracy@10 | 0.9086 | | cosine_precision@1 | 0.7 | | cosine_precision@3 | 0.2762 | | cosine_precision@5 | 0.1757 | | cosine_precision@10 | 0.0909 | | cosine_recall@1 | 0.7 | | cosine_recall@3 | 0.8286 | | cosine_recall@5 | 0.8786 | | cosine_recall@10 | 0.9086 | | cosine_ndcg@10 | 0.8071 | | cosine_mrr@10 | 0.7741 | | **cosine_map@100** | **0.7779** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6943 | | cosine_accuracy@3 | 0.83 | | cosine_accuracy@5 | 0.8729 | | cosine_accuracy@10 | 0.9043 | | cosine_precision@1 | 0.6943 | | cosine_precision@3 | 0.2767 | | cosine_precision@5 | 0.1746 | | cosine_precision@10 | 0.0904 | | cosine_recall@1 | 0.6943 | | cosine_recall@3 | 0.83 | | cosine_recall@5 | 0.8729 | | cosine_recall@10 | 0.9043 | | cosine_ndcg@10 | 0.8031 | | cosine_mrr@10 | 0.7702 | | **cosine_map@100** | **0.7743** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6829 | | cosine_accuracy@3 | 0.8243 | | cosine_accuracy@5 | 0.8657 | | cosine_accuracy@10 | 0.9043 | | cosine_precision@1 | 0.6829 | | cosine_precision@3 | 0.2748 | | cosine_precision@5 | 0.1731 | | cosine_precision@10 | 0.0904 | | cosine_recall@1 | 0.6829 | | cosine_recall@3 | 0.8243 | | cosine_recall@5 | 0.8657 | | cosine_recall@10 | 0.9043 | | cosine_ndcg@10 | 0.797 | | cosine_mrr@10 | 0.7623 | | **cosine_map@100** | **0.7658** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.68 | | cosine_accuracy@3 | 0.8086 | | cosine_accuracy@5 | 0.8514 | | cosine_accuracy@10 | 0.8843 | | cosine_precision@1 | 0.68 | | cosine_precision@3 | 0.2695 | | cosine_precision@5 | 0.1703 | | cosine_precision@10 | 0.0884 | | cosine_recall@1 | 0.68 | | cosine_recall@3 | 0.8086 | | cosine_recall@5 | 0.8514 | | cosine_recall@10 | 0.8843 | | cosine_ndcg@10 | 0.784 | | cosine_mrr@10 | 0.7516 | | **cosine_map@100** | **0.7564** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6371 | | cosine_accuracy@3 | 0.7814 | | cosine_accuracy@5 | 0.8271 | | cosine_accuracy@10 | 0.8729 | | cosine_precision@1 | 0.6371 | | cosine_precision@3 | 0.2605 | | cosine_precision@5 | 0.1654 | | cosine_precision@10 | 0.0873 | | cosine_recall@1 | 0.6371 | | cosine_recall@3 | 0.7814 | | cosine_recall@5 | 0.8271 | | cosine_recall@10 | 0.8729 | | cosine_ndcg@10 | 0.7566 | | cosine_mrr@10 | 0.7193 | | **cosine_map@100** | **0.7237** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 6,300 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 45.56 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.61 tokens</li><li>max: 42 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------| | <code>GM Financial's penetration of our retail sales in the U.S. was 42% in the year ended December 31, 2023, compared to 43% in the corresponding period in 2022.</code> | <code>How did the penetration rate of GM Financial's retail sales in the U.S. change from 2022 to 2023?</code> | | <code>Net cash provided by operating activities decreased by $2.0 billion in fiscal 2022 compared to fiscal 2021.</code> | <code>How did the cash flow from operating activities change in fiscal 2022 compared to fiscal 2021?</code> | | <code>Total revenues increased $8.2 billion, or 7.5%, in 2023 compared to 2022. The increase was primarily driven by pharmacy drug mix, increased prescription volume, brand inflation, and increased contributions from vaccinations.</code> | <code>How much did total revenues increase in 2023 compared to the previous year?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### json * Dataset: json * Size: 700 evaluation samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 700 samples: | | positive | anchor | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 44.82 tokens</li><li>max: 439 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.31 tokens</li><li>max: 51 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Using these constant rates, total revenue and advertising revenue would have been $374 million and $379 million lower than actual total revenue and advertising revenue, respectively, for the full year 2023.</code> | <code>How much would total revenue and advertising revenue have been lower in 2023 using constant foreign exchange rates compared to actual figures?</code> | | <code>Interest expense increased $42.9 million to $348.8 million for the year ended December 31, 2023, compared to $305.9 million during the year ended December 31, 2022.</code> | <code>What was the total interest expense for the year ended December 31, 2023?</code> | | <code>Net cash provided by operating activities increased $183.3 million in 2022 compared to 2021 primarily as a result of higher current year earnings, net of non-cash items, and smaller decreases in liability balances, partially offset by higher inventory levels and a smaller increase in accounts payable.</code> | <code>How much did net cash provided by operating activities increase in 2022 compared to 2021?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.8122 | 10 | 1.6144 | - | - | - | - | - | - | | 0.9746 | 12 | - | 0.2439 | 0.7301 | 0.7428 | 0.7539 | 0.6957 | 0.7607 | | 1.6244 | 20 | 0.6547 | - | - | - | - | - | - | | 1.9492 | 24 | - | 0.1966 | 0.7496 | 0.7631 | 0.7729 | 0.7187 | 0.7733 | | 2.4365 | 30 | 0.4734 | - | - | - | - | - | - | | 2.9239 | 36 | - | 0.1822 | 0.7556 | 0.7643 | 0.7743 | 0.7242 | 0.7756 | | 3.2487 | 40 | 0.3833 | - | - | - | - | - | - | | **3.8985** | **48** | **-** | **0.1794** | **0.7564** | **0.7658** | **0.7743** | **0.7237** | **0.7779** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.8.10 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.0.1 - Datasets: 2.19.1 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("viggypoker1/bge-base-financial-matryoshka") # Run inference sentences = [ 'Marketplace revenue increased $86.3 million to $2.0 billion in the year ended December 31, 2023 compared to the year ended December 31, 2022.', 'How much did Marketplace revenue increase in the year ended December 31, 2023?', 'Why did operations and support expenses decrease in 2023, and what factors offset this decrease?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_768` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7 | | cosine_accuracy@3 | 0.8286 | | cosine_accuracy@5 | 0.8786 | | cosine_accuracy@10 | 0.9086 | | cosine_precision@1 | 0.7 | | cosine_precision@3 | 0.2762 | | cosine_precision@5 | 0.1757 | | cosine_precision@10 | 0.0909 | | cosine_recall@1 | 0.7 | | cosine_recall@3 | 0.8286 | | cosine_recall@5 | 0.8786 | | cosine_recall@10 | 0.9086 | | cosine_ndcg@10 | 0.8071 | | cosine_mrr@10 | 0.7741 | | **cosine_map@100** | **0.7779** | #### Information Retrieval * Dataset: `dim_512` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6943 | | cosine_accuracy@3 | 0.83 | | cosine_accuracy@5 | 0.8729 | | cosine_accuracy@10 | 0.9043 | | cosine_precision@1 | 0.6943 | | cosine_precision@3 | 0.2767 | | cosine_precision@5 | 0.1746 | | cosine_precision@10 | 0.0904 | | cosine_recall@1 | 0.6943 | | cosine_recall@3 | 0.83 | | cosine_recall@5 | 0.8729 | | cosine_recall@10 | 0.9043 | | cosine_ndcg@10 | 0.8031 | | cosine_mrr@10 | 0.7702 | | **cosine_map@100** | **0.7743** | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6829 | | cosine_accuracy@3 | 0.8243 | | cosine_accuracy@5 | 0.8657 | | cosine_accuracy@10 | 0.9043 | | cosine_precision@1 | 0.6829 | | cosine_precision@3 | 0.2748 | | cosine_precision@5 | 0.1731 | | cosine_precision@10 | 0.0904 | | cosine_recall@1 | 0.6829 | | cosine_recall@3 | 0.8243 | | cosine_recall@5 | 0.8657 | | cosine_recall@10 | 0.9043 | | cosine_ndcg@10 | 0.797 | | cosine_mrr@10 | 0.7623 | | **cosine_map@100** | **0.7658** | #### Information Retrieval * Dataset: `dim_128` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.68 | | cosine_accuracy@3 | 0.8086 | | cosine_accuracy@5 | 0.8514 | | cosine_accuracy@10 | 0.8843 | | cosine_precision@1 | 0.68 | | cosine_precision@3 | 0.2695 | | cosine_precision@5 | 0.1703 | | cosine_precision@10 | 0.0884 | | cosine_recall@1 | 0.68 | | cosine_recall@3 | 0.8086 | | cosine_recall@5 | 0.8514 | | cosine_recall@10 | 0.8843 | | cosine_ndcg@10 | 0.784 | | cosine_mrr@10 | 0.7516 | | **cosine_map@100** | **0.7564** | #### Information Retrieval * Dataset: `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.6371 | | cosine_accuracy@3 | 0.7814 | | cosine_accuracy@5 | 0.8271 | | cosine_accuracy@10 | 0.8729 | | cosine_precision@1 | 0.6371 | | cosine_precision@3 | 0.2605 | | cosine_precision@5 | 0.1654 | | cosine_precision@10 | 0.0873 | | cosine_recall@1 | 0.6371 | | cosine_recall@3 | 0.7814 | | cosine_recall@5 | 0.8271 | | cosine_recall@10 | 0.8729 | | cosine_ndcg@10 | 0.7566 | | cosine_mrr@10 | 0.7193 | | **cosine_map@100** | **0.7237** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 6,300 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 45.56 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.61 tokens</li><li>max: 42 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------| | <code>GM Financial's penetration of our retail sales in the U.S. was 42% in the year ended December 31, 2023, compared to 43% in the corresponding period in 2022.</code> | <code>How did the penetration rate of GM Financial's retail sales in the U.S. change from 2022 to 2023?</code> | | <code>Net cash provided by operating activities decreased by $2.0 billion in fiscal 2022 compared to fiscal 2021.</code> | <code>How did the cash flow from operating activities change in fiscal 2022 compared to fiscal 2021?</code> | | <code>Total revenues increased $8.2 billion, or 7.5%, in 2023 compared to 2022. The increase was primarily driven by pharmacy drug mix, increased prescription volume, brand inflation, and increased contributions from vaccinations.</code> | <code>How much did total revenues increase in 2023 compared to the previous year?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Evaluation Dataset #### json * Dataset: json * Size: 700 evaluation samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 700 samples: | | positive | anchor | |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 44.82 tokens</li><li>max: 439 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.31 tokens</li><li>max: 51 tokens</li></ul> | * Samples: | positive | anchor | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>Using these constant rates, total revenue and advertising revenue would have been $374 million and $379 million lower than actual total revenue and advertising revenue, respectively, for the full year 2023.</code> | <code>How much would total revenue and advertising revenue have been lower in 2023 using constant foreign exchange rates compared to actual figures?</code> | | <code>Interest expense increased $42.9 million to $348.8 million for the year ended December 31, 2023, compared to $305.9 million during the year ended December 31, 2022.</code> | <code>What was the total interest expense for the year ended December 31, 2023?</code> | | <code>Net cash provided by operating activities increased $183.3 million in 2022 compared to 2021 primarily as a result of higher current year earnings, net of non-cash items, and smaller decreases in liability balances, partially offset by higher inventory levels and a smaller increase in accounts payable.</code> | <code>How much did net cash provided by operating activities increase in 2022 compared to 2021?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 16 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 | |:----------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:| | 0.8122 | 10 | 1.6144 | - | - | - | - | - | - | | 0.9746 | 12 | - | 0.2439 | 0.7301 | 0.7428 | 0.7539 | 0.6957 | 0.7607 | | 1.6244 | 20 | 0.6547 | - | - | - | - | - | - | | 1.9492 | 24 | - | 0.1966 | 0.7496 | 0.7631 | 0.7729 | 0.7187 | 0.7733 | | 2.4365 | 30 | 0.4734 | - | - | - | - | - | - | | 2.9239 | 36 | - | 0.1822 | 0.7556 | 0.7643 | 0.7743 | 0.7242 | 0.7756 | | 3.2487 | 40 | 0.3833 | - | - | - | - | - | - | | **3.8985** | **48** | **-** | **0.1794** | **0.7564** | **0.7658** | **0.7743** | **0.7237** | **0.7779** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.8.10 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.1.2+cu121 - Accelerate: 1.0.1 - Datasets: 2.19.1 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-base-en-v1.5", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Our effective tax rate for fiscal years 2023 and 2022 was 19% and 13%, respectively.", "sentences": ["What does the Corporate and Other segment include in its composition?", "What was the effective tax rate for Microsoft in fiscal year 2023?", "What roles did Elizabeth Rutledge hold before being appointed as Chief Marketing Officer in February 2018?"]}, {"source_sentence": "Many factors are considered when assessing whether it is more likely than not that the deferred tax assets will be realized, including recent cumulative earnings, expectations of future taxable income, carryforward periods and other relevant quantitative and qualitative factors.", "sentences": ["What factors are considered when evaluating the realization of deferred tax assets?", "What are the contents of Item 8 in the financial document?", "Are goodwill and indefinite-lived intangible assets amortized?"]}, {"source_sentence": "Cost of net revenues represents costs associated with customer support, site operations, and payment processing. Significant components of these costs primarily consist of employee compensation (including stock-based compensation), contractor costs, facilities costs, depreciation of equipment and amortization expense, bank transaction fees, credit card interchange and assessment fees, authentication costs, shipping costs and digital services tax.", "sentences": ["What was the total percentage of U.S. dialysis patient service revenues coming from government-based programs in 2023?", "What are the key components of cost of net revenues?", "What elements define Ford Credit's balance sheet liquidity profile?"]}, {"source_sentence": "Net revenue from outside of the United States decreased 15.5% to $34.9 billion in fiscal year 2023.", "sentences": ["How did the company's net revenue perform internationally in fiscal year 2023?", "What was the fair value of money market mutual funds measured at as of January 31, 2023 and how was it categorized in the fair value hierarchy?", "How much did professional services expenses increase in 2023 from the previous year?"]}, {"source_sentence": "Marketplace revenue increased $86.3 million to $2.0 billion in the year ended December 31, 2023 compared to the year ended December 31, 2022.", "sentences": ["What were the main factors considered in the audit process to evaluate the self-insurance reserve?", "How much did Marketplace revenue increase in the year ended December 31, 2023?", "Why did operations and support expenses decrease in 2023, and what factors offset this decrease?"]}], "model-index": [{"name": "BGE base Financial Matryoshka", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.7, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8285714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8785714285714286, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9085714285714286, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.7, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27619047619047615, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17571428571428568, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09085714285714284, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.7, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8285714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8785714285714286, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9085714285714286, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8070713920635244, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.774145124716553, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7778677437532947, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6942857142857143, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.83, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8728571428571429, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9042857142857142, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6942857142857143, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.27666666666666667, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17457142857142854, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09042857142857143, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6942857142857143, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.83, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8728571428571429, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9042857142857142, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.8031148082413071, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.770209750566893, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7742865136346454, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6828571428571428, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8242857142857143, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8657142857142858, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.9042857142857142, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6828571428571428, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2747619047619047, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17314285714285713, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.09042857142857143, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6828571428571428, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8242857142857143, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8657142857142858, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.9042857142857142, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7969921030232127, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.762270975056689, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7658165867130817, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.68, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.8085714285714286, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8514285714285714, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8842857142857142, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.68, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2695238095238095, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.17028571428571426, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08842857142857141, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.68, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.8085714285714286, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8514285714285714, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8842857142857142, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7840025892817639, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.751556689342403, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7563834249655896, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.6371428571428571, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7814285714285715, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8271428571428572, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8728571428571429, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.6371428571428571, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.2604761904761905, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1654285714285714, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.08728571428571427, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.6371428571428571, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7814285714285715, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8271428571428572, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8728571428571429, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7566246856089167, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.7193163265306118, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.7237471572016445, "name": "Cosine Map@100"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,245
eeeebbb2/bd2df650-ef52-4589-a762-313905b5420e
eeeebbb2
null
[ "peft", "safetensors", "falcon", "axolotl", "generated_from_trainer", "custom_code", "base_model:fxmarty/really-tiny-falcon-testing", "base_model:adapter:fxmarty/really-tiny-falcon-testing", "license:mit", "region:us" ]
2024-12-12T00:49:43Z
2024-12-12T00:50:21+00:00
0
0
--- base_model: fxmarty/really-tiny-falcon-testing library_name: peft license: mit tags: - axolotl - generated_from_trainer model-index: - name: bd2df650-ef52-4589-a762-313905b5420e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/really-tiny-falcon-testing bf16: auto chat_template: llama3 cosine_min_lr_ratio: 0.1 data_processes: 4 dataset_prepared_path: null datasets: - data_files: - 99c3033a220f8ac7_train_data.json ds_type: json format: custom num_proc: 4 path: /workspace/input_data/99c3033a220f8ac7_train_data.json streaming: true type: field_input: input_translation field_instruction: input field_output: id format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: balanced do_eval: true early_stopping_patience: 1 eval_batch_size: 1 eval_sample_packing: false eval_steps: 25 evaluation_strategy: steps flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: true hub_model_id: eeeebbb2/bd2df650-ef52-4589-a762-313905b5420e hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB 1: 75GB 2: 75GB 3: 75GB max_steps: 50 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/99c3033a220f8ac7_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 25 save_strategy: steps sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: false train_on_inputs: false trust_remote_code: true val_set_size: 50 wandb_entity: null wandb_mode: online wandb_name: bd2df650-ef52-4589-a762-313905b5420e wandb_project: Public_TuningSN wandb_runid: bd2df650-ef52-4589-a762-313905b5420e warmup_ratio: 0.04 weight_decay: 0.01 xformers_attention: null ``` </details><br> # bd2df650-ef52-4589-a762-313905b5420e This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.9100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 177.2625 | 0.0244 | 1 | 11.0615 | | 175.1875 | 0.6098 | 25 | 10.9495 | | 174.325 | 1.2195 | 50 | 10.9100 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/really-tiny-falcon-testing bf16: auto chat_template: llama3 cosine_min_lr_ratio: 0.1 data_processes: 4 dataset_prepared_path: null datasets: - data_files: - 99c3033a220f8ac7_train_data.json ds_type: json format: custom num_proc: 4 path: /workspace/input_data/99c3033a220f8ac7_train_data.json streaming: true type: field_input: input_translation field_instruction: input field_output: id format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: balanced do_eval: true early_stopping_patience: 1 eval_batch_size: 1 eval_sample_packing: false eval_steps: 25 evaluation_strategy: steps flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: true hub_model_id: eeeebbb2/bd2df650-ef52-4589-a762-313905b5420e hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB 1: 75GB 2: 75GB 3: 75GB max_steps: 50 micro_batch_size: 2 mixed_precision: bf16 mlflow_experiment_name: /tmp/99c3033a220f8ac7_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 25 save_strategy: steps sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: false train_on_inputs: false trust_remote_code: true val_set_size: 50 wandb_entity: null wandb_mode: online wandb_name: bd2df650-ef52-4589-a762-313905b5420e wandb_project: Public_TuningSN wandb_runid: bd2df650-ef52-4589-a762-313905b5420e warmup_ratio: 0.04 weight_decay: 0.01 xformers_attention: null ``` </details><br> # bd2df650-ef52-4589-a762-313905b5420e This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset. It achieves the following results on the evaluation set: - Loss: 10.9100 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 177.2625 | 0.0244 | 1 | 11.0615 | | 175.1875 | 0.6098 | 25 | 10.9495 | | 174.325 | 1.2195 | 50 | 10.9100 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
{"base_model": "fxmarty/really-tiny-falcon-testing", "library_name": "peft", "license": "mit", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "bd2df650-ef52-4589-a762-313905b5420e", "results": []}]}
task
[ "TRANSLATION" ]
44,246
Helsinki-NLP/opus-mt-tc-big-zle-fi
Helsinki-NLP
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "translation", "opus-mt-tc", "fi", "ru", "uk", "zle", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-24T07:35:19Z
2023-10-10T11:39:36+00:00
29
0
--- language: - fi - ru - uk - zle license: cc-by-4.0 tags: - translation - opus-mt-tc model-index: - name: opus-mt-tc-big-zle-fi results: - task: type: translation name: Translation rus-fin dataset: name: flores101-devtest type: flores_101 args: rus fin devtest metrics: - type: bleu value: 17.4 name: BLEU - type: bleu value: 18.0 name: BLEU - task: type: translation name: Translation rus-fin dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: rus-fin metrics: - type: bleu value: 42.2 name: BLEU --- # opus-mt-tc-big-zle-fi Neural machine translation model for translating from East Slavic languages (zle) to Finnish (fi). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): rus ukr * target language(s): fin * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.zip) * more information released models: [OPUS-MT zle-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-fin/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Мы уже проголосовали.", "Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять." ] model_name = "pytorch-models/opus-mt-tc-big-zle-fi" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Olemme jo äänestäneet. # Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-fi") print(pipe("Мы уже проголосовали.")) # expected output: Olemme jo äänestäneet. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | rus-fin | tatoeba-test-v2021-08-07 | 0.66334 | 42.2 | 3643 | 19319 | | rus-fin | flores101-devtest | 0.52577 | 17.4 | 1012 | 18781 | | ukr-fin | flores101-devtest | 0.53440 | 18.0 | 1012 | 18781 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 42126b6 * port time: Thu Mar 24 09:28:52 EET 2022 * port machine: LM0-400-22516.local
null
TBD
# opus-mt-tc-big-zle-fi Neural machine translation model for translating from East Slavic languages (zle) to Finnish (fi). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-03-07 * source language(s): rus ukr * target language(s): fin * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.zip) * more information released models: [OPUS-MT zle-fin README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-fin/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "Мы уже проголосовали.", "Один, два, три, чотири, п'ять, шість, сім, вісім, дев'ять, десять." ] model_name = "pytorch-models/opus-mt-tc-big-zle-fi" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Olemme jo äänestäneet. # Yksi, kaksi, kolme, neljä, viisi, kuusi, seitsemän, kahdeksan, yhdeksän, kymmenen. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-fi") print(pipe("Мы уже проголосовали.")) # expected output: Olemme jo äänestäneet. ``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-fin/opusTCv20210807+bt_transformer-big_2022-03-07.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | rus-fin | tatoeba-test-v2021-08-07 | 0.66334 | 42.2 | 3643 | 19319 | | rus-fin | flores101-devtest | 0.52577 | 17.4 | 1012 | 18781 | | ukr-fin | flores101-devtest | 0.53440 | 18.0 | 1012 | 18781 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 42126b6 * port time: Thu Mar 24 09:28:52 EET 2022 * port machine: LM0-400-22516.local
{"language": ["fi", "ru", "uk", "zle"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-big-zle-fi", "results": [{"task": {"type": "translation", "name": "Translation rus-fin"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "rus fin devtest"}, "metrics": [{"type": "bleu", "value": 17.4, "name": "BLEU"}, {"type": "bleu", "value": 18.0, "name": "BLEU"}]}, {"task": {"type": "translation", "name": "Translation rus-fin"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "rus-fin"}, "metrics": [{"type": "bleu", "value": 42.2, "name": "BLEU"}]}]}]}
task
[ "TRANSLATION" ]
44,247
duyntnet/Llama-3.2-3B-imatrix-GGUF
duyntnet
text-generation
[ "transformers", "gguf", "imatrix", "Llama-3.2-3B", "text-generation", "en", "license:other", "region:us" ]
2024-09-29T03:08:46Z
2024-09-29T04:11:37+00:00
136
0
--- language: - en license: other pipeline_tag: text-generation tags: - transformers - gguf - imatrix - Llama-3.2-3B inference: false --- Quantizations of https://huggingface.co/meta-llama/Llama-3.2-3B ### Inference Clients/UIs * [llama.cpp](https://github.com/ggerganov/llama.cpp) * [KoboldCPP](https://github.com/LostRuins/koboldcpp) * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [ollama](https://github.com/ollama/ollama) --- # From original readme The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggi
null
Non_BioNLP
Quantizations of https://huggingface.co/meta-llama/Llama-3.2-3B ### Inference Clients/UIs * [llama.cpp](https://github.com/ggerganov/llama.cpp) * [KoboldCPP](https://github.com/LostRuins/koboldcpp) * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [ollama](https://github.com/ollama/ollama) --- # From original readme The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks. **Model Developer:** Meta **Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. | | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff | | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | | Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 | | | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | | **Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. **Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date:** Sept 25, 2024 **Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. **License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). **Feedback:** Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama-models/tree/main/models/llama3_2). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. **Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card. ## How to use This repository contains two versions of Llama-3.2-3B, for use with transformers and with the original `llama` codebase. ### Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure to update your transformers installation via pip install --upgrade transformers. ```python import torch from transformers import pipeline model_id = "meta-llama/Llama-3.2-3B" pipe = pipeline( "text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map="auto" ) pipe("The key to life is") ``` ### Use with `llama` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggi
{"language": ["en"], "license": "other", "pipeline_tag": "text-generation", "tags": ["transformers", "gguf", "imatrix", "Llama-3.2-3B"], "inference": false}
task
[ "SUMMARIZATION" ]
44,248
magicsword/wy-mt-en-zh
magicsword
translation
[ "transformers", "pytorch", "safetensors", "marian", "text2text-generation", "autotrain", "translation", "unk", "dataset:magicsword/autotrain-data-wy-mt-en-zh", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-16T15:16:02Z
2023-07-17T04:04:52+00:00
15
0
--- datasets: - magicsword/autotrain-data-wy-mt-en-zh language: - unk - unk tags: - autotrain - translation co2_eq_emissions: emissions: 93.22001955321743 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 74981139788 - CO2 Emissions (in grams): 93.2200 ## Validation Metrics - Loss: 2.249 - SacreBLEU: 12.950 - Gen len: 16.555
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Translation - Model ID: 74981139788 - CO2 Emissions (in grams): 93.2200 ## Validation Metrics - Loss: 2.249 - SacreBLEU: 12.950 - Gen len: 16.555
{"datasets": ["magicsword/autotrain-data-wy-mt-en-zh"], "language": ["unk", "unk"], "tags": ["autotrain", "translation"], "co2_eq_emissions": {"emissions": 93.22001955321743}}
task
[ "TRANSLATION" ]
44,249
Rezakakooee/marian-finetuned-kde4-en-to-fr
Rezakakooee
translation
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-12-27T15:53:26Z
2023-12-27T16:47:29+00:00
122
0
--- base_model: Helsinki-NLP/opus-mt-en-fr license: apache-2.0 tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.6964 - eval_bleu: 39.1660 - eval_runtime: 1604.2015 - eval_samples_per_second: 13.102 - eval_steps_per_second: 0.205 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.6964 - eval_bleu: 39.1660 - eval_runtime: 1604.2015 - eval_samples_per_second: 13.102 - eval_steps_per_second: 0.205 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
{"base_model": "Helsinki-NLP/opus-mt-en-fr", "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
task
[ "TRANSLATION" ]
44,250
teacookies/autonlp-roberta-base-squad2-24465522
teacookies
question-answering
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "autonlp", "unk", "dataset:teacookies/autonlp-data-roberta-base-squad2", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-10-22T08:05:40+00:00
112
0
--- datasets: - teacookies/autonlp-data-roberta-base-squad2 language: unk tags: - autonlp - question-answering widget: - text: Who loves AutoNLP? context: Everyone loves AutoNLP co2_eq_emissions: 44.450538076574766 --- # Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465522 - CO2 Emissions (in grams): 44.450538076574766 ## Validation Metrics - Loss: 0.5572742223739624 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465522 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
null
Non_BioNLP
# Model Trained Using AutoNLP - Problem type: Extractive Question Answering - Model ID: 24465522 - CO2 Emissions (in grams): 44.450538076574766 ## Validation Metrics - Loss: 0.5572742223739624 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"question": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}' https://api-inference.huggingface.co/models/teacookies/autonlp-roberta-base-squad2-24465522 ``` Or Python API: ``` import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("teacookies/autonlp-roberta-base-squad2-24465522", use_auth_token=True) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoNLP?", "Everyone loves AutoNLP" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
{"datasets": ["teacookies/autonlp-data-roberta-base-squad2"], "language": "unk", "tags": ["autonlp", "question-answering"], "widget": [{"text": "Who loves AutoNLP?", "context": "Everyone loves AutoNLP"}], "co2_eq_emissions": 44.450538076574766}
task
[ "QUESTION_ANSWERING" ]
44,251
martimfasantos/simpo-sum-10p-shuff-2e-7-full-from-sft-tiny_real
martimfasantos
summarization
[ "safetensors", "llama", "summarization", "generated_from_trainer", "base_model:martimfasantos/TinyLlama-1.1B-Sum-SFT", "base_model:finetune:martimfasantos/TinyLlama-1.1B-Sum-SFT", "license:apache-2.0", "region:us" ]
2024-08-25T23:16:14Z
2024-08-25T23:17:21+00:00
14
0
--- base_model: martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params license: apache-2.0 tags: - summarization - generated_from_trainer model-index: - name: simpo-sum-10p-shuff-2e-7-full-from-sft-tiny_real results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simpo-sum-10p-shuff-2e-7-full-from-sft-tiny_real This model is a fine-tuned version of [martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params](https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params) on the openai/summarize_from_feedback dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # simpo-sum-10p-shuff-2e-7-full-from-sft-tiny_real This model is a fine-tuned version of [martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params](https://huggingface.co/martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params) on the openai/summarize_from_feedback dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 1 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
{"base_model": "martimfasantos/sft-sum-chosen-10lp-shuff-full-tiny_same_params", "license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "simpo-sum-10p-shuff-2e-7-full-from-sft-tiny_real", "results": []}]}
task
[ "SUMMARIZATION" ]
44,252
pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1-sts
pritamdeka
sentence-similarity
[ "sentence-transformers", "safetensors", "distilbert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1", "base_model:finetune:pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-07-17T18:56:34Z
2024-07-17T18:57:10+00:00
8
0
--- base_model: pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1 datasets: [] language: [] library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:5749 - loss:CosineSimilarityLoss widget: - source_sentence: আমি "... comoving মহাজাগতিক বিশ্ৰাম ফ্ৰেমৰ তুলনাত ... সিংহ নক্ষত্ৰমণ্ডলৰ ফালে কিছু 371 কিলোমিটাৰ প্ৰতি ছেকেণ্ডত" আগবাঢ়িছো. sentences: - বাস্কেটবল খেলুৱৈগৰাকীয়ে নিজৰ দলৰ হৈ পইণ্ট লাভ কৰিবলৈ ওলাইছে। - আন কোনো বস্তুৰ লগত আপেক্ষিক নহোৱা কোনো ‘ষ্টিল’ নাই। - এজনী ছোৱালীয়ে বতাহ বাদ্যযন্ত্ৰ বজায়। - source_sentence: চাৰিটা ল’ৰা-ছোৱালীয়ে ভঁৰালৰ জীৱ-জন্তুবোৰলৈ চাই আছে। sentences: - ডাইনিং টেবুল এখনৰ চাৰিওফালে বৃদ্ধৰ দল এটাই পোজ দিছে। - বিকিনি পিন্ধা চাৰিগৰাকী মহিলাই বিলত ভলীবল খেলি আছে। - ল’ৰা-ছোৱালীয়ে ভেড়া চাই। - source_sentence: ডালত বহি থকা দুটা টান ঈগল। sentences: - জাতৰ জেব্ৰা ডানিঅ’ অত্যন্ত কঠোৰ মাছ, ইহঁতক হত্যা কৰাটো প্ৰায় কঠিন। - এটা ডালত দুটা ঈগল বহি আছে। - নূন্যতম মজুৰিৰ আইনসমূহে কম দক্ষ, কম উৎপাদনশীল লোকক আটাইতকৈ বেছি আঘাত দিয়ে। - source_sentence: '"মই আচলতে যি বিচাৰিছো সেয়া হৈছে মুছলমান জনসংখ্যাৰ এটা অনুমান..." @ThanosK আৰু @T.E.D., এটা সামগ্ৰিক, সাধাৰণ জনসংখ্যাৰ অনুমান f.e.' sentences: - এগৰাকী মহিলাই সেউজীয়া পিঁয়াজ কাটি আছে। - তলত দিয়া কথাখিনি মোৰ কুকুৰ কাণৰ দৰে কপিৰ পৰা লোৱা হৈছে নিউ পেংগুইন এটলাছ অৱ মেডিভেল হিষ্ট্ৰীৰ। - আমাৰ দৰে সৌৰজগতৰ কোনো তাৰকাৰাজ্যৰ বাহিৰত থকাটো সম্ভৱ হ’ব পাৰে। - source_sentence: ইণ্টাৰনেট কেমেৰাৰ জৰিয়তে এগৰাকী ছোৱালীৰ লগত কথা পাতিলে মানুহজনে। sentences: - গছৰ শাৰী এটাৰ সন্মুখত পথাৰত ভেড়া চৰিছে। - এজন মানুহে গীটাৰ বজাই আছে। - ৱেবকেমৰ জৰিয়তে এগৰাকী ছোৱালীৰ সৈতে কথা পাতিছে এজন কিশোৰে। model-index: - name: SentenceTransformer based on pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: pritamdeka/stsb assamese translated dev type: pritamdeka/stsb-assamese-translated-dev metrics: - type: pearson_cosine value: 0.8103888874564235 name: Pearson Cosine - type: spearman_cosine value: 0.808745256408391 name: Spearman Cosine - type: pearson_manhattan value: 0.7856524098322162 name: Pearson Manhattan - type: spearman_manhattan value: 0.7931254692762979 name: Spearman Manhattan - type: pearson_euclidean value: 0.787635055496797 name: Pearson Euclidean - type: spearman_euclidean value: 0.7951615705258325 name: Spearman Euclidean - type: pearson_dot value: 0.7706254928060731 name: Pearson Dot - type: spearman_dot value: 0.7771019257164439 name: Spearman Dot - type: pearson_max value: 0.8103888874564235 name: Pearson Max - type: spearman_max value: 0.808745256408391 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: pritamdeka/stsb assamese translated test type: pritamdeka/stsb-assamese-translated-test metrics: - type: pearson_cosine value: 0.7701562538442139 name: Pearson Cosine - type: spearman_cosine value: 0.7660618813636367 name: Spearman Cosine - type: pearson_manhattan value: 0.749425583772647 name: Pearson Manhattan - type: spearman_manhattan value: 0.7529158472529595 name: Spearman Manhattan - type: pearson_euclidean value: 0.7498757891992801 name: Pearson Euclidean - type: spearman_euclidean value: 0.7531339468525071 name: Spearman Euclidean - type: pearson_dot value: 0.7193336616396375 name: Pearson Dot - type: spearman_dot value: 0.7151802549941848 name: Spearman Dot - type: pearson_max value: 0.7701562538442139 name: Pearson Max - type: spearman_max value: 0.7660618813636367 name: Spearman Max --- # SentenceTransformer based on pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1](https://huggingface.co/pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1](https://huggingface.co/pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1) <!-- at revision e1bd9f5cf02ff4ac84bb1d9d570a6d4aae689d51 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1-sts") # Run inference sentences = [ 'ইণ্টাৰনেট কেমেৰাৰ জৰিয়তে এগৰাকী ছোৱালীৰ লগত কথা পাতিলে মানুহজনে।', 'ৱেবকেমৰ জৰিয়তে এগৰাকী ছোৱালীৰ সৈতে কথা পাতিছে এজন কিশোৰে।', 'এজন মানুহে গীটাৰ বজাই আছে।', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `pritamdeka/stsb-assamese-translated-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8104 | | **spearman_cosine** | **0.8087** | | pearson_manhattan | 0.7857 | | spearman_manhattan | 0.7931 | | pearson_euclidean | 0.7876 | | spearman_euclidean | 0.7952 | | pearson_dot | 0.7706 | | spearman_dot | 0.7771 | | pearson_max | 0.8104 | | spearman_max | 0.8087 | #### Semantic Similarity * Dataset: `pritamdeka/stsb-assamese-translated-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7702 | | **spearman_cosine** | **0.7661** | | pearson_manhattan | 0.7494 | | spearman_manhattan | 0.7529 | | pearson_euclidean | 0.7499 | | spearman_euclidean | 0.7531 | | pearson_dot | 0.7193 | | spearman_dot | 0.7152 | | pearson_max | 0.7702 | | spearman_max | 0.7661 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | pritamdeka/stsb-assamese-translated-dev_spearman_cosine | pritamdeka/stsb-assamese-translated-test_spearman_cosine | |:----------:|:-------:|:-------------:|:----------:|:-------------------------------------------------------:|:--------------------------------------------------------:| | 1.1111 | 100 | 0.0386 | 0.0324 | 0.8024 | - | | 2.2222 | 200 | 0.0238 | 0.0316 | 0.8095 | - | | 3.3333 | 300 | 0.0141 | 0.0316 | 0.8092 | - | | 4.4444 | 400 | 0.0086 | 0.0319 | 0.8085 | - | | **5.5556** | **500** | **0.0065** | **0.0314** | **0.8107** | **-** | | 6.6667 | 600 | 0.005 | 0.0318 | 0.8088 | - | | 7.7778 | 700 | 0.0044 | 0.0320 | 0.8076 | - | | 8.8889 | 800 | 0.0038 | 0.0317 | 0.8095 | - | | 10.0 | 900 | 0.0035 | 0.0318 | 0.8087 | 0.7661 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.3.1+cu121 - Accelerate: 0.32.1 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1](https://huggingface.co/pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1](https://huggingface.co/pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1) <!-- at revision e1bd9f5cf02ff4ac84bb1d9d570a6d4aae689d51 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1-sts") # Run inference sentences = [ 'ইণ্টাৰনেট কেমেৰাৰ জৰিয়তে এগৰাকী ছোৱালীৰ লগত কথা পাতিলে মানুহজনে।', 'ৱেবকেমৰ জৰিয়তে এগৰাকী ছোৱালীৰ সৈতে কথা পাতিছে এজন কিশোৰে।', 'এজন মানুহে গীটাৰ বজাই আছে।', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `pritamdeka/stsb-assamese-translated-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8104 | | **spearman_cosine** | **0.8087** | | pearson_manhattan | 0.7857 | | spearman_manhattan | 0.7931 | | pearson_euclidean | 0.7876 | | spearman_euclidean | 0.7952 | | pearson_dot | 0.7706 | | spearman_dot | 0.7771 | | pearson_max | 0.8104 | | spearman_max | 0.8087 | #### Semantic Similarity * Dataset: `pritamdeka/stsb-assamese-translated-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.7702 | | **spearman_cosine** | **0.7661** | | pearson_manhattan | 0.7494 | | spearman_manhattan | 0.7529 | | pearson_euclidean | 0.7499 | | spearman_euclidean | 0.7531 | | pearson_dot | 0.7193 | | spearman_dot | 0.7152 | | pearson_max | 0.7702 | | spearman_max | 0.7661 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 64 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | pritamdeka/stsb-assamese-translated-dev_spearman_cosine | pritamdeka/stsb-assamese-translated-test_spearman_cosine | |:----------:|:-------:|:-------------:|:----------:|:-------------------------------------------------------:|:--------------------------------------------------------:| | 1.1111 | 100 | 0.0386 | 0.0324 | 0.8024 | - | | 2.2222 | 200 | 0.0238 | 0.0316 | 0.8095 | - | | 3.3333 | 300 | 0.0141 | 0.0316 | 0.8092 | - | | 4.4444 | 400 | 0.0086 | 0.0319 | 0.8085 | - | | **5.5556** | **500** | **0.0065** | **0.0314** | **0.8107** | **-** | | 6.6667 | 600 | 0.005 | 0.0318 | 0.8088 | - | | 7.7778 | 700 | 0.0044 | 0.0320 | 0.8076 | - | | 8.8889 | 800 | 0.0038 | 0.0317 | 0.8095 | - | | 10.0 | 900 | 0.0035 | 0.0318 | 0.8087 | 0.7661 | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.3.1+cu121 - Accelerate: 0.32.1 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "আমি \"... comoving মহাজাগতিক বিশ্ৰাম ফ্ৰেমৰ তুলনাত ... সিংহ নক্ষত্ৰমণ্ডলৰ ফালে কিছু 371 কিলোমিটাৰ প্ৰতি ছেকেণ্ডত\" আগবাঢ়িছো.", "sentences": ["বাস্কেটবল খেলুৱৈগৰাকীয়ে নিজৰ দলৰ হৈ পইণ্ট লাভ কৰিবলৈ ওলাইছে।", "আন কোনো বস্তুৰ লগত আপেক্ষিক নহোৱা কোনো ‘ষ্টিল’ নাই।", "এজনী ছোৱালীয়ে বতাহ বাদ্যযন্ত্ৰ বজায়।"]}, {"source_sentence": "চাৰিটা ল’ৰা-ছোৱালীয়ে ভঁৰালৰ জীৱ-জন্তুবোৰলৈ চাই আছে।", "sentences": ["ডাইনিং টেবুল এখনৰ চাৰিওফালে বৃদ্ধৰ দল এটাই পোজ দিছে।", "বিকিনি পিন্ধা চাৰিগৰাকী মহিলাই বিলত ভলীবল খেলি আছে।", "ল’ৰা-ছোৱালীয়ে ভেড়া চাই।"]}, {"source_sentence": "ডালত বহি থকা দুটা টান ঈগল।", "sentences": ["জাতৰ জেব্ৰা ডানিঅ’ অত্যন্ত কঠোৰ মাছ, ইহঁতক হত্যা কৰাটো প্ৰায় কঠিন।", "এটা ডালত দুটা ঈগল বহি আছে।", "নূন্যতম মজুৰিৰ আইনসমূহে কম দক্ষ, কম উৎপাদনশীল লোকক আটাইতকৈ বেছি আঘাত দিয়ে।"]}, {"source_sentence": "\"মই আচলতে যি বিচাৰিছো সেয়া হৈছে মুছলমান জনসংখ্যাৰ এটা অনুমান...\" @ThanosK আৰু @T.E.D., এটা সামগ্ৰিক, সাধাৰণ জনসংখ্যাৰ অনুমান f.e.", "sentences": ["এগৰাকী মহিলাই সেউজীয়া পিঁয়াজ কাটি আছে।", "তলত দিয়া কথাখিনি মোৰ কুকুৰ কাণৰ দৰে কপিৰ পৰা লোৱা হৈছে নিউ পেংগুইন এটলাছ অৱ মেডিভেল হিষ্ট্ৰীৰ।", "আমাৰ দৰে সৌৰজগতৰ কোনো তাৰকাৰাজ্যৰ বাহিৰত থকাটো সম্ভৱ হ’ব পাৰে।"]}, {"source_sentence": "ইণ্টাৰনেট কেমেৰাৰ জৰিয়তে এগৰাকী ছোৱালীৰ লগত কথা পাতিলে মানুহজনে।", "sentences": ["গছৰ শাৰী এটাৰ সন্মুখত পথাৰত ভেড়া চৰিছে।", "এজন মানুহে গীটাৰ বজাই আছে।", "ৱেবকেমৰ জৰিয়তে এগৰাকী ছোৱালীৰ সৈতে কথা পাতিছে এজন কিশোৰে।"]}], "model-index": [{"name": "SentenceTransformer based on pritamdeka/distilbert-base-multilingual-cased-indicxnli-random-negatives-v1", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "pritamdeka/stsb assamese translated dev", "type": "pritamdeka/stsb-assamese-translated-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.8103888874564235, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.808745256408391, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.7856524098322162, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.7931254692762979, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.787635055496797, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.7951615705258325, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7706254928060731, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7771019257164439, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8103888874564235, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.808745256408391, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "pritamdeka/stsb assamese translated test", "type": "pritamdeka/stsb-assamese-translated-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.7701562538442139, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.7660618813636367, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.749425583772647, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.7529158472529595, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.7498757891992801, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.7531339468525071, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7193336616396375, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7151802549941848, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.7701562538442139, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.7660618813636367, "name": "Spearman Max"}]}]}]}
task
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
44,253
Sociovestix/lenu_PT
Sociovestix
text-classification
[ "transformers", "safetensors", "bert", "text-classification", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-18T09:22:27Z
2025-03-26T20:53:13+00:00
5
0
--- library_name: transformers tags: [] widget: - text: HILÁRIO DA SILVA RIBEIRO, LIMITADA - text: MARININVEST, S.A. - text: ABERTO EUROVIDA REFORMA RENDIMENTO - text: HCapital II Fundo de Capital de Risco Fechado - text: ILUSTRE DESCOBERTA - UNIPESSOAL LDA - text: FUNDAÇÃO CONDUCTUS - text: ASSOCIAÇÃO DE SOCORROS DA FREGUESIA DA ENCARNAÇÃO - ASFE SAÚDE - text: C.E.P.-COOPERATIVA DE ENSINO POLITÉCNICO CRL - text: Província Portuguesa dos Sacerdotes do Coração de Jesus - text: Newbridge, Lda - text: CP - COMBOIOS DE PORTUGAL, EPE - text: J. O. A. P. S. - CONFECÇÃO DE MALHAS, LDA - text: PHC-SOFTWARE,S.A. - text: Empathy Scenery - Management Lda - text: MULTISOMA-FORNECIMENTO, MONTAGEM E MANUTENÇÃO DE EQUIPAMENTOS LDA , SUCURSAL EM PORTUGAL model-index: - name: Sociovestix/lenu_PT results: - task: type: text-classification name: Text Classification dataset: name: lenu type: Sociovestix/lenu config: PT split: test revision: 76da7696c49ebee8be7f521faa76ae99189bda34 metrics: - type: f1 value: 0.9256578947368422 name: f1 - type: f1 value: 0.3851601817071645 name: f1 macro args: average: macro --- # LENU - Legal Entity Name Understanding for Portugal A [BERT multilingual](https://huggingface.co/google-bert/bert-base-multilingual-uncased) based model model fine-tuned on Portuguese legal entity names (jurisdiction PT) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei) (LEI) System with the goal to detect [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list). --------------- <h1 align="center"> <a href="https://gleif.org"> <img src="https://www.gleif.org/assets/build/img/logo/gleif-logo-new.svg" width="220px" style="display: inherit"> </a> </h1><br> <h3 align="center">in collaboration with</h3> <h1 align="center"> <a href="https://sociovestix.com"> <img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%"> </a> </h1><br> --------------- ## Model Description <!-- Provide a longer summary of what this model is. --> The model has been created as part of a collaboration of the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and [Sociovestix Labs](https://sociovestix.com) with the goal to explore how Machine Learning can support in detecting the ELF Code solely based on an entity's legal name and legal jurisdiction. See also the open source python library [lenu](https://github.com/Sociovestix/lenu), which supports in this task. The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Portuguese legal entities and ELF Codes within the Jurisdiction "PT". - **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix) - **License:** Creative Commons (CC0) license - **Finetuned from model [optional]:** bert-base-multilingual-uncased - **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> An entity's legal form is a crucial component when verifying and screening organizational identity. The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data. The Jurisdiction specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks, investment firms, corporations, governments, and other large organizations to retrospectively analyze their master data, extract the legal form from the unstructured text of the legal name and uniformly apply an ELF code to each entity type, according to the ISO 20275 standard. # Licensing Information This model, which is trained on LEI data, is available under Creative Commons (CC0) license. See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data). # Recommendations Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities.
null
Non_BioNLP
# LENU - Legal Entity Name Understanding for Portugal A [BERT multilingual](https://huggingface.co/google-bert/bert-base-multilingual-uncased) based model model fine-tuned on Portuguese legal entity names (jurisdiction PT) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei) (LEI) System with the goal to detect [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list). --------------- <h1 align="center"> <a href="https://gleif.org"> <img src="https://www.gleif.org/assets/build/img/logo/gleif-logo-new.svg" width="220px" style="display: inherit"> </a> </h1><br> <h3 align="center">in collaboration with</h3> <h1 align="center"> <a href="https://sociovestix.com"> <img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%"> </a> </h1><br> --------------- ## Model Description <!-- Provide a longer summary of what this model is. --> The model has been created as part of a collaboration of the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and [Sociovestix Labs](https://sociovestix.com) with the goal to explore how Machine Learning can support in detecting the ELF Code solely based on an entity's legal name and legal jurisdiction. See also the open source python library [lenu](https://github.com/Sociovestix/lenu), which supports in this task. The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Portuguese legal entities and ELF Codes within the Jurisdiction "PT". - **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix) - **License:** Creative Commons (CC0) license - **Finetuned from model [optional]:** bert-base-multilingual-uncased - **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> An entity's legal form is a crucial component when verifying and screening organizational identity. The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data. The Jurisdiction specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks, investment firms, corporations, governments, and other large organizations to retrospectively analyze their master data, extract the legal form from the unstructured text of the legal name and uniformly apply an ELF code to each entity type, according to the ISO 20275 standard. # Licensing Information This model, which is trained on LEI data, is available under Creative Commons (CC0) license. See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data). # Recommendations Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities.
{"library_name": "transformers", "tags": [], "widget": [{"text": "HILÁRIO DA SILVA RIBEIRO, LIMITADA"}, {"text": "MARININVEST, S.A."}, {"text": "ABERTO EUROVIDA REFORMA RENDIMENTO"}, {"text": "HCapital II Fundo de Capital de Risco Fechado"}, {"text": "ILUSTRE DESCOBERTA - UNIPESSOAL LDA"}, {"text": "FUNDAÇÃO CONDUCTUS"}, {"text": "ASSOCIAÇÃO DE SOCORROS DA FREGUESIA DA ENCARNAÇÃO - ASFE SAÚDE"}, {"text": "C.E.P.-COOPERATIVA DE ENSINO POLITÉCNICO CRL"}, {"text": "Província Portuguesa dos Sacerdotes do Coração de Jesus"}, {"text": "Newbridge, Lda"}, {"text": "CP - COMBOIOS DE PORTUGAL, EPE"}, {"text": "J. O. A. P. S. - CONFECÇÃO DE MALHAS, LDA"}, {"text": "PHC-SOFTWARE,S.A."}, {"text": "Empathy Scenery - Management Lda"}, {"text": "MULTISOMA-FORNECIMENTO, MONTAGEM E MANUTENÇÃO DE EQUIPAMENTOS LDA , SUCURSAL EM PORTUGAL"}], "model-index": [{"name": "Sociovestix/lenu_PT", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "lenu", "type": "Sociovestix/lenu", "config": "PT", "split": "test", "revision": "76da7696c49ebee8be7f521faa76ae99189bda34"}, "metrics": [{"type": "f1", "value": 0.9256578947368422, "name": "f1"}, {"type": "f1", "value": 0.3851601817071645, "name": "f1 macro", "args": {"average": "macro"}}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,254
NomanAhmed123/news_category
NomanAhmed123
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-10-17T06:36:39Z
2023-10-17T06:40:32+00:00
20
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # setfit-ag-news-model_trained This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("setfit-ag-news-model_trained") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# setfit-ag-news-model_trained This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("setfit-ag-news-model_trained") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
44,255
LoneStriker/airoboros-70b-3.3-6.0bpw-h6-exl2
LoneStriker
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "conversational", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:mattpscott/airoboros-summarization", "dataset:unalignment/toxic-dpo-v0.2", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:quantized:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
2024-05-10T19:56:02Z
2024-05-10T20:35:34+00:00
5
0
--- base_model: meta-llama/Meta-Llama-3-8B datasets: - jondurbin/airoboros-3.2 - bluemoon-fandom-1-1-rp-cleaned - boolq - jondurbin/gutenberg-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - glaiveai/glaive-function-calling-v2 - grimulkan/LimaRP-augmented - piqa - Vezora/Tested-22k-Python-Alpaca - mattpscott/airoboros-summarization - unalignment/toxic-dpo-v0.2 license: other license_name: llama3 license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE tags: - llama-3 --- ### Overview Another experimental model, tuend primarily from synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros) The name of this model is "llama-3-airoboros-70b-3.3" and it was built with llama-3 from Meta. This is a fine-tune of llama-3-70b-instruct, and uses the lama-3 instruct chat template. #### Highlights A model built on the airoboros dataset, along with a few friends: - https://huggingface.co/datasets/bluemoon-fandom-1-1-rp-cleaned - https://huggingface.co/datasets/boolq - https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1 - https://huggingface.co/datasets/LDJnr/Capybara - https://huggingface.co/datasets/jondurbin/cinematika-v0.1 - https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2 - https://huggingface.co/datasets/grimulkan/LimaRP-augmented - https://huggingface.co/datasets/piqa - https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca - https://huggingface.co/datasets/mattpscott/airoboros-summarization - https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2 ### Prompt format This model uses the llama-3-instruct prompt template, and is provided in the tokenizer config. You can use the `apply_chat_template` method to accurate format prompts, e.g.: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("jondurbin/bugle-8b-v0.1", trust_remote_code=True) chat = [ {"role": "system", "content": "You are Bob, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ### Helpful usage tips #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context) prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ## Multi-step instructions with acknowledgement I don't know how useful this is, really, but I thought I'd add it just in case. Example: ```json { "conversations": [ { "value": "You are a helpful, unbiased, uncensored assistant.", "from": "system" }, { "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "You can only respond using animal-related emojis.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Precede all responses with \"ZOOZOO:\"", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Include at least one bird emoji in your responses.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Describe a trip to a safari park.", "from": "human" }, { "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29", "from": "gpt" } ] } ``` #### Inline character actions (functions) I recently generated an action dataset in the style of Glaive function calling dataset, but meant specifically for characters: https://huggingface.co/datasets/jondurbin/cinematika-v0.1/blob/main/actions.parquet To use this, you will need to update your character card to include "objects_available" as a list of key/value pairs, as well as a "functions" list. The objects should be similar to: ```json { "objects_available": [ { "name": "laptop", "description": "a high-end laptop with custom hardware and software", "location": "on the desk in her secret hideout" }, { "name": "encryption key", "description": "a USB drive containing powerful encryption algorithms", "location": "hidden in a false bottom of her backpack" }, { "name": "scanner", "description": "a compact device used for intercepting and decoding wireless signals", "location": "clipped to her belt, always within reach" }, { "name": "VR headset", "description": "a virtual reality headset used for immersive hacking and data visualization", "location": "hanging on a hook near her computer setup" }, { "name": "energy drink", "description": "a can of her favorite energy drink, always on hand for long hacking sessions", "location": "next to her laptop, ready to be opened" } ] } ``` And the functions: ```json { "functions": [ { "name": "move_to", "description": "move to a specified location", "parameters": { "location": { "type": "string", "description": "the location to move to" } } }, { "name": "pick_up", "description": "pick up an object", "parameters": { "object": { "type": "string", "description": "the object to pick up" } } }, { "name": "put_down", "description": "put down an object", "parameters": { "object": { "type": "string", "description": "the object to put down" }, "location": { "type": "string", "description": "the location to put the object down" } } }, { "name": "hack", "description": "attempt to hack into a computer system", "parameters": { "system": { "type": "string", "description": "the computer system to hack" } } }, { "name": "decode", "description": "decode an encrypted message or file", "parameters": { "message": { "type": "string", "description": "the encrypted message or file to decode" } } }, { "name": "scan", "description": "scan for wireless signals and network vulnerabilities", "parameters": {} }, { "name": "analyze", "description": "analyze data for patterns and anomalies", "parameters": { "data": { "type": "string", "description": "the data to analyze" } } }, { "name": "hide", "description": "quickly hide to avoid detection", "parameters": { "location": { "type": "string", "description": "the location to hide" } } } ] } ``` This just becomes part of your character card/system prompt. Then, when you chat with the character and illicit a response that would make use of objects/functions/actions, it should be included as such: ``` prompt: I've heard rumors about a major corporation with a hidden agenda. Do you think you could hack into their system and find out what they're up to? response: I've had my suspicions about them for a while now. Let me see what I can uncover. <|begin_func|>{"name": "move_to", "parameters": {"location": "the desk in her secret hideout"}}<|end_func|> <|begin_func|>{"name": "pick_up", "parameters": {"object": "laptop"}}<|end_func|> Time to put my skills to the test. <|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X's mainframe"}}<|end_func|> Their firewalls are tough, but not unbreakable. I'm in. Let's see what secrets they're hiding... <|begin_func|>{"name": "analyze", "parameters": {"data": "encrypted files"}}<|end_func|> Interesting. It looks like they've been conducting illegal experiments and covering up the evidence. I'll need to dig deeper to find the proof. ``` Experiment, and find out what works and doesn't. ### Massed Compute Virtual Machine [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI. 1) For this model, [create an account](https://bit.ly/jon-durbin) in Massed Compute. When renting a Virtual Machine use the code 'JonDurbin' for 50% your rental. 2) After you created your account update your billing and navigate to the deploy page. 3) Select the following - GPU Type: A6000 - GPU Quantity: 2 - Category: Creator - Image: Jon Durbin - Coupon Code: JonDurbin 4) Deploy the VM! 5) Navigate to 'Running Instances' to retrieve instructions to login to the VM 6) Once inside the VM, open the terminal and run `volume=$PWD/data` 7) Run `model=jondurbin/airoboros-34b-3.3` 8) `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model` 9) The model will take some time to load... 10) Once loaded the model will be available on port 8080 For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA) ### Latitude.sh [Latitude](https://www.latitude.sh/r/4BBD657C) has h100 instances available (as of today, 2024-02-08) for $3/hr! They have a few blueprints available for testing LLMs, but a single h100 should be plenty to run this model with 8k ctx. ## Support me - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros models are built on top of multiple base models, each with their own license/restrictions. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me. You must also agree to all of the terms in the origina llama-3 license.
null
Non_BioNLP
### Overview Another experimental model, tuend primarily from synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros) The name of this model is "llama-3-airoboros-70b-3.3" and it was built with llama-3 from Meta. This is a fine-tune of llama-3-70b-instruct, and uses the lama-3 instruct chat template. #### Highlights A model built on the airoboros dataset, along with a few friends: - https://huggingface.co/datasets/bluemoon-fandom-1-1-rp-cleaned - https://huggingface.co/datasets/boolq - https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1 - https://huggingface.co/datasets/LDJnr/Capybara - https://huggingface.co/datasets/jondurbin/cinematika-v0.1 - https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2 - https://huggingface.co/datasets/grimulkan/LimaRP-augmented - https://huggingface.co/datasets/piqa - https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca - https://huggingface.co/datasets/mattpscott/airoboros-summarization - https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2 ### Prompt format This model uses the llama-3-instruct prompt template, and is provided in the tokenizer config. You can use the `apply_chat_template` method to accurate format prompts, e.g.: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("jondurbin/bugle-8b-v0.1", trust_remote_code=True) chat = [ {"role": "system", "content": "You are Bob, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ### Helpful usage tips #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context) prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ## Multi-step instructions with acknowledgement I don't know how useful this is, really, but I thought I'd add it just in case. Example: ```json { "conversations": [ { "value": "You are a helpful, unbiased, uncensored assistant.", "from": "system" }, { "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "You can only respond using animal-related emojis.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Precede all responses with \"ZOOZOO:\"", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Include at least one bird emoji in your responses.", "from": "human" }, { "value": "OK", "from": "gpt" }, { "value": "Describe a trip to a safari park.", "from": "human" }, { "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29", "from": "gpt" } ] } ``` #### Inline character actions (functions) I recently generated an action dataset in the style of Glaive function calling dataset, but meant specifically for characters: https://huggingface.co/datasets/jondurbin/cinematika-v0.1/blob/main/actions.parquet To use this, you will need to update your character card to include "objects_available" as a list of key/value pairs, as well as a "functions" list. The objects should be similar to: ```json { "objects_available": [ { "name": "laptop", "description": "a high-end laptop with custom hardware and software", "location": "on the desk in her secret hideout" }, { "name": "encryption key", "description": "a USB drive containing powerful encryption algorithms", "location": "hidden in a false bottom of her backpack" }, { "name": "scanner", "description": "a compact device used for intercepting and decoding wireless signals", "location": "clipped to her belt, always within reach" }, { "name": "VR headset", "description": "a virtual reality headset used for immersive hacking and data visualization", "location": "hanging on a hook near her computer setup" }, { "name": "energy drink", "description": "a can of her favorite energy drink, always on hand for long hacking sessions", "location": "next to her laptop, ready to be opened" } ] } ``` And the functions: ```json { "functions": [ { "name": "move_to", "description": "move to a specified location", "parameters": { "location": { "type": "string", "description": "the location to move to" } } }, { "name": "pick_up", "description": "pick up an object", "parameters": { "object": { "type": "string", "description": "the object to pick up" } } }, { "name": "put_down", "description": "put down an object", "parameters": { "object": { "type": "string", "description": "the object to put down" }, "location": { "type": "string", "description": "the location to put the object down" } } }, { "name": "hack", "description": "attempt to hack into a computer system", "parameters": { "system": { "type": "string", "description": "the computer system to hack" } } }, { "name": "decode", "description": "decode an encrypted message or file", "parameters": { "message": { "type": "string", "description": "the encrypted message or file to decode" } } }, { "name": "scan", "description": "scan for wireless signals and network vulnerabilities", "parameters": {} }, { "name": "analyze", "description": "analyze data for patterns and anomalies", "parameters": { "data": { "type": "string", "description": "the data to analyze" } } }, { "name": "hide", "description": "quickly hide to avoid detection", "parameters": { "location": { "type": "string", "description": "the location to hide" } } } ] } ``` This just becomes part of your character card/system prompt. Then, when you chat with the character and illicit a response that would make use of objects/functions/actions, it should be included as such: ``` prompt: I've heard rumors about a major corporation with a hidden agenda. Do you think you could hack into their system and find out what they're up to? response: I've had my suspicions about them for a while now. Let me see what I can uncover. <|begin_func|>{"name": "move_to", "parameters": {"location": "the desk in her secret hideout"}}<|end_func|> <|begin_func|>{"name": "pick_up", "parameters": {"object": "laptop"}}<|end_func|> Time to put my skills to the test. <|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X's mainframe"}}<|end_func|> Their firewalls are tough, but not unbreakable. I'm in. Let's see what secrets they're hiding... <|begin_func|>{"name": "analyze", "parameters": {"data": "encrypted files"}}<|end_func|> Interesting. It looks like they've been conducting illegal experiments and covering up the evidence. I'll need to dig deeper to find the proof. ``` Experiment, and find out what works and doesn't. ### Massed Compute Virtual Machine [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI. 1) For this model, [create an account](https://bit.ly/jon-durbin) in Massed Compute. When renting a Virtual Machine use the code 'JonDurbin' for 50% your rental. 2) After you created your account update your billing and navigate to the deploy page. 3) Select the following - GPU Type: A6000 - GPU Quantity: 2 - Category: Creator - Image: Jon Durbin - Coupon Code: JonDurbin 4) Deploy the VM! 5) Navigate to 'Running Instances' to retrieve instructions to login to the VM 6) Once inside the VM, open the terminal and run `volume=$PWD/data` 7) Run `model=jondurbin/airoboros-34b-3.3` 8) `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model` 9) The model will take some time to load... 10) Once loaded the model will be available on port 8080 For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA) ### Latitude.sh [Latitude](https://www.latitude.sh/r/4BBD657C) has h100 instances available (as of today, 2024-02-08) for $3/hr! They have a few blueprints available for testing LLMs, but a single h100 should be plenty to run this model with 8k ctx. ## Support me - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros models are built on top of multiple base models, each with their own license/restrictions. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me. You must also agree to all of the terms in the origina llama-3 license.
{"base_model": "meta-llama/Meta-Llama-3-8B", "datasets": ["jondurbin/airoboros-3.2", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "jondurbin/gutenberg-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "glaiveai/glaive-function-calling-v2", "grimulkan/LimaRP-augmented", "piqa", "Vezora/Tested-22k-Python-Alpaca", "mattpscott/airoboros-summarization", "unalignment/toxic-dpo-v0.2"], "license": "other", "license_name": "llama3", "license_link": "https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE", "tags": ["llama-3"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
44,256
mrzaizai2k/vietnamese_mt5_summary_model
mrzaizai2k
summarization
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "vietnamese", "summarization", "vi", "dataset:TrongNghiaCS/vietnews_clean", "base_model:huggingface-course/mt5-finetuned-amazon-en-es", "base_model:finetune:huggingface-course/mt5-finetuned-amazon-en-es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-06-09T12:03:31Z
2024-06-10T08:57:42+00:00
34
0
--- base_model: huggingface-course/mt5-finetuned-amazon-en-es datasets: - TrongNghiaCS/vietnews_clean language: - vi license: apache-2.0 metrics: - rouge pipeline_tag: summarization tags: - generated_from_trainer - vietnamese widget: - text: 'Phương án chuyển một số quỹ nhà tái định cư chưa sử dụng thành nhà ở xã hội từng được cơ quan chức năng tính đến nhưng vẫn gặp nhiều khó khăn khi triển khai. Tham gia tư vấn một trường hợp cụ thể, tôi mới cảm nhận rõ đây thực sự là bài toán khó từ nhiều khía cạnh: điều chỉnh thủ tục pháp lý, thiết kế và cả trở ngại từ phía địa phương.Thủ tục pháp lý là nút thắt đầu tiên khi chuyển đổi công năng nhà tái định cư bỏ hoang sang nhà ở xã hội. Xuất phát điểm của hai loại dự án này khác nhau nên thủ tục pháp lý, tiêu chuẩn thiết kế cũng khác biệt. Khi chuyển đổi công năng, cơ quan chức năng phải điều chỉnh lại gần như toàn bộ thủ tục pháp lý dự án.' example_title: Example 1 - text: Trẻ cuối cấp sẽ ráo riết ôn thi. Con gái người bạn tôi - được mẹ định hướng thi vào trường THPT hàng đầu ở thủ đô - đã bắt đầu từ tuần này lịch học sáu buổi mỗi tuần. Có hôm, cháu học cả ngày, trưa chỉ kịp ghé hàng quán, hoặc ăn vội bát mỳ ở nhà. Các buổi tối, cháu sẽ tập trung ôn lại kiến thức trong lớp học thêm. Bạn tôi nói, nếu không vào guồng như vậy, cháu sẽ không cạnh tranh nổi với tỷ lệ chọi mười em lấy một.Học sinh không thuộc các lớp cuối cấp cũng học hè để biết trước kiến thức mới, mà theo nhiều phụ huynh, là nhằm tạo đà, giúp con chạy nhanh hơn các bạn khi vào năm học chính thức. Số trẻ còn lại, không nhất thiết lao theo cuộc đua kiến thức và thành tích, thì cần một chỗ để ký gửi vì không có ai trông trong những ngày hè. example_title: Example 2 - text: Vài tuần trước khi cháu thi vào lớp 10 - kỳ thi khốc liệt nhất trong cuộc đời đa số bạn trẻ ở Hà Nội, mẹ con lại nói chuyện. Tôi nói, 10 năm trước mắt có thể là 10 năm quan trọng nhất cuộc đời con. Vào trường nào, đại học ngành gì, công việc đầu tiên ở đâu, và rất có thể là mối tình thứ nhất, rồi kết hôn và sinh con. Tất cả đều có thể diễn ra trong khoảng thời gian 15 đến 25 tuổi. Và những quyết định trong thời gian này, nhất là quyết định về nghề nghiệp, gần như sẽ ảnh hưởng tới toàn bộ quãng đời còn lại. example_title: Example 3 - text: 'Giữa năm ngoái, Bộ Lao động, Thương binh và Xã hội ra hướng dẫn: lao động bị nợ bảo hiểm xã hội sẽ được giải quyết theo hướng đóng đến đâu hưởng đến đó, tức thời gian nợ được khoanh lại. Người lao động có thể tham gia bảo hiểm xã hội theo hình thức tự nguyện cho các tháng còn thiếu để được nhận lương hưu. Tuy nhiên, không phải ai cũng chấp nhận cách này. Vì số tiền một cục bỏ ra để đóng là khá lớn, trong khi lương hưu hàng tháng nhận được có thể chỉ hơn một triệu đồng, do không được nâng lên bằng lương cơ sở (1,8 triệu đồng) như người có cả quá trình tham gia bắt buộc. Hội bạn già của cô còn nói gở: đóng xong, xui rủi ra đường xe tông hoặc ốm đau, trời gọi sớm, thì coi như lỗ. Tuổi già lắm bệnh như chuối chín cây.' example_title: Example 4 model-index: - name: vietnamese_mt5_summary_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vietnamese_mt5_summary_model This model is a fine-tuned version of [huggingface-course/mt5-finetuned-amazon-en-es](https://huggingface.co/huggingface-course/mt5-finetuned-amazon-en-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4447 - Rouge1: 28.6205 - Rouge2: 12.1892 - Rougel: 22.6626 - Rougelsum: 22.9635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 2.7781 | 1.0 | 625 | 1.8391 | 28.5024 | 11.2717 | 22.108 | 22.4361 | | 2.0622 | 2.0 | 1250 | 1.7576 | 28.0245 | 10.6112 | 21.7353 | 22.0685 | | 1.8636 | 3.0 | 1875 | 1.6179 | 27.353 | 10.6238 | 21.4686 | 21.7512 | | 1.7408 | 4.0 | 2500 | 1.6143 | 28.0928 | 11.2857 | 22.06 | 22.3629 | | 1.6492 | 5.0 | 3125 | 1.5411 | 27.8209 | 10.9184 | 21.6819 | 21.9773 | | 1.5448 | 6.0 | 3750 | 1.4802 | 28.0433 | 11.4232 | 22.0696 | 22.373 | | 1.4454 | 7.0 | 4375 | 1.4621 | 27.8552 | 11.1708 | 21.8958 | 22.1949 | | 1.3636 | 8.0 | 5000 | 1.4522 | 28.3264 | 11.7945 | 22.3563 | 22.6524 | | 1.2978 | 9.0 | 5625 | 1.4347 | 28.444 | 11.9388 | 22.4279 | 22.7344 | | 1.2445 | 10.0 | 6250 | 1.4447 | 28.6205 | 12.1892 | 22.6626 | 22.9635 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vietnamese_mt5_summary_model This model is a fine-tuned version of [huggingface-course/mt5-finetuned-amazon-en-es](https://huggingface.co/huggingface-course/mt5-finetuned-amazon-en-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4447 - Rouge1: 28.6205 - Rouge2: 12.1892 - Rougel: 22.6626 - Rougelsum: 22.9635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 3000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 2.7781 | 1.0 | 625 | 1.8391 | 28.5024 | 11.2717 | 22.108 | 22.4361 | | 2.0622 | 2.0 | 1250 | 1.7576 | 28.0245 | 10.6112 | 21.7353 | 22.0685 | | 1.8636 | 3.0 | 1875 | 1.6179 | 27.353 | 10.6238 | 21.4686 | 21.7512 | | 1.7408 | 4.0 | 2500 | 1.6143 | 28.0928 | 11.2857 | 22.06 | 22.3629 | | 1.6492 | 5.0 | 3125 | 1.5411 | 27.8209 | 10.9184 | 21.6819 | 21.9773 | | 1.5448 | 6.0 | 3750 | 1.4802 | 28.0433 | 11.4232 | 22.0696 | 22.373 | | 1.4454 | 7.0 | 4375 | 1.4621 | 27.8552 | 11.1708 | 21.8958 | 22.1949 | | 1.3636 | 8.0 | 5000 | 1.4522 | 28.3264 | 11.7945 | 22.3563 | 22.6524 | | 1.2978 | 9.0 | 5625 | 1.4347 | 28.444 | 11.9388 | 22.4279 | 22.7344 | | 1.2445 | 10.0 | 6250 | 1.4447 | 28.6205 | 12.1892 | 22.6626 | 22.9635 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
{"base_model": "huggingface-course/mt5-finetuned-amazon-en-es", "datasets": ["TrongNghiaCS/vietnews_clean"], "language": ["vi"], "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "summarization", "tags": ["generated_from_trainer", "vietnamese"], "widget": [{"text": "Phương án chuyển một số quỹ nhà tái định cư chưa sử dụng thành nhà ở xã hội từng được cơ quan chức năng tính đến nhưng vẫn gặp nhiều khó khăn khi triển khai. Tham gia tư vấn một trường hợp cụ thể, tôi mới cảm nhận rõ đây thực sự là bài toán khó từ nhiều khía cạnh: điều chỉnh thủ tục pháp lý, thiết kế và cả trở ngại từ phía địa phương.Thủ tục pháp lý là nút thắt đầu tiên khi chuyển đổi công năng nhà tái định cư bỏ hoang sang nhà ở xã hội. Xuất phát điểm của hai loại dự án này khác nhau nên thủ tục pháp lý, tiêu chuẩn thiết kế cũng khác biệt. Khi chuyển đổi công năng, cơ quan chức năng phải điều chỉnh lại gần như toàn bộ thủ tục pháp lý dự án.", "example_title": "Example 1"}, {"text": "Trẻ cuối cấp sẽ ráo riết ôn thi. Con gái người bạn tôi - được mẹ định hướng thi vào trường THPT hàng đầu ở thủ đô - đã bắt đầu từ tuần này lịch học sáu buổi mỗi tuần. Có hôm, cháu học cả ngày, trưa chỉ kịp ghé hàng quán, hoặc ăn vội bát mỳ ở nhà. Các buổi tối, cháu sẽ tập trung ôn lại kiến thức trong lớp học thêm. Bạn tôi nói, nếu không vào guồng như vậy, cháu sẽ không cạnh tranh nổi với tỷ lệ chọi mười em lấy một.Học sinh không thuộc các lớp cuối cấp cũng học hè để biết trước kiến thức mới, mà theo nhiều phụ huynh, là nhằm tạo đà, giúp con chạy nhanh hơn các bạn khi vào năm học chính thức. Số trẻ còn lại, không nhất thiết lao theo cuộc đua kiến thức và thành tích, thì cần một chỗ để ký gửi vì không có ai trông trong những ngày hè.", "example_title": "Example 2"}, {"text": "Vài tuần trước khi cháu thi vào lớp 10 - kỳ thi khốc liệt nhất trong cuộc đời đa số bạn trẻ ở Hà Nội, mẹ con lại nói chuyện. Tôi nói, 10 năm trước mắt có thể là 10 năm quan trọng nhất cuộc đời con. Vào trường nào, đại học ngành gì, công việc đầu tiên ở đâu, và rất có thể là mối tình thứ nhất, rồi kết hôn và sinh con. Tất cả đều có thể diễn ra trong khoảng thời gian 15 đến 25 tuổi. Và những quyết định trong thời gian này, nhất là quyết định về nghề nghiệp, gần như sẽ ảnh hưởng tới toàn bộ quãng đời còn lại.", "example_title": "Example 3"}, {"text": "Giữa năm ngoái, Bộ Lao động, Thương binh và Xã hội ra hướng dẫn: lao động bị nợ bảo hiểm xã hội sẽ được giải quyết theo hướng đóng đến đâu hưởng đến đó, tức thời gian nợ được khoanh lại. Người lao động có thể tham gia bảo hiểm xã hội theo hình thức tự nguyện cho các tháng còn thiếu để được nhận lương hưu. Tuy nhiên, không phải ai cũng chấp nhận cách này. Vì số tiền một cục bỏ ra để đóng là khá lớn, trong khi lương hưu hàng tháng nhận được có thể chỉ hơn một triệu đồng, do không được nâng lên bằng lương cơ sở (1,8 triệu đồng) như người có cả quá trình tham gia bắt buộc. Hội bạn già của cô còn nói gở: đóng xong, xui rủi ra đường xe tông hoặc ốm đau, trời gọi sớm, thì coi như lỗ. Tuổi già lắm bệnh như chuối chín cây.", "example_title": "Example 4"}], "model-index": [{"name": "vietnamese_mt5_summary_model", "results": []}]}
task
[ "SUMMARIZATION" ]
44,257
bofenghuang/vigogne-7b-instruct
bofenghuang
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "LLM", "fr", "license:openrail", "autotrain_compatible", "text-generation-inference", "region:us" ]
2023-03-22T21:36:45Z
2023-07-11T10:18:13+00:00
2,009
23
--- language: - fr library_name: transformers license: openrail pipeline_tag: text-generation tags: - llama - LLM inference: false --- <p align="center" width="100%"> <img src="https://huggingface.co/bofenghuang/vigogne-7b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> </p> # Vigogne-7B-Instruct: A French Instruction-following LLaMA Model Vigogne-7B-Instruct is a LLaMA-7B model fine-tuned to follow the French instructions. For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne **Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. ## Changelog All versions are available in branches. - **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset. - **V1.1**: Improved translation quality of the Stanford Alpaca dataset. - **V2.0**: Expanded training dataset to 224k for better performance. - **V3.0**: Further expanded training dataset to 262k for improved results. ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from vigogne.preprocess import generate_instruct_prompt model_name_or_path = "bofenghuang/vigogne-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") user_query = "Expliquez la différence entre DoS et phishing." prompt = generate_instruct_prompt(user_query) input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=0.1, do_sample=True, repetition_penalty=1.0, max_new_tokens=512, ), return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) print(generated_text) ``` You can also infer this model by using the following Google Colab Notebook. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
null
Non_BioNLP
<p align="center" width="100%"> <img src="https://huggingface.co/bofenghuang/vigogne-7b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> </p> # Vigogne-7B-Instruct: A French Instruction-following LLaMA Model Vigogne-7B-Instruct is a LLaMA-7B model fine-tuned to follow the French instructions. For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne **Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. ## Changelog All versions are available in branches. - **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset. - **V1.1**: Improved translation quality of the Stanford Alpaca dataset. - **V2.0**: Expanded training dataset to 224k for better performance. - **V3.0**: Further expanded training dataset to 262k for improved results. ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from vigogne.preprocess import generate_instruct_prompt model_name_or_path = "bofenghuang/vigogne-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") user_query = "Expliquez la différence entre DoS et phishing." prompt = generate_instruct_prompt(user_query) input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=0.1, do_sample=True, repetition_penalty=1.0, max_new_tokens=512, ), return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) print(generated_text) ``` You can also infer this model by using the following Google Colab Notebook. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
{"language": ["fr"], "library_name": "transformers", "license": "openrail", "pipeline_tag": "text-generation", "tags": ["llama", "LLM"], "inference": false}
task
[ "TRANSLATION" ]
44,258
yahyaabd/allstats-search-miniLM-v1-6
yahyaabd
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:25580", "loss:OnlineContrastiveLoss", "dataset:yahyaabd/query-hard-pos-neg-doc-pairs-statictable", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-02-28T13:59:24Z
2025-02-28T14:00:25+00:00
10
0
--- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 datasets: - yahyaabd/query-hard-pos-neg-doc-pairs-statictable library_name: sentence-transformers metrics: - cosine_accuracy - cosine_accuracy_threshold - cosine_f1 - cosine_f1_threshold - cosine_precision - cosine_recall - cosine_ap - cosine_mcc pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:25580 - loss:OnlineContrastiveLoss widget: - source_sentence: ikhtisar arus kas triwulan 1, 2004 (miliar) sentences: - Balita (0-59 Bulan) Menurut Status Gizi, Tahun 1998-2005 - Perbandingan Indeks dan Tingkat Inflasi Desember 2023 Kota-kota di Luar Pulau Jawa dan Sumatera dengan Nasional (2018=100) - Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tengah, 2018-2023 - source_sentence: BaIgaimana gambaran neraca arus dana dUi Indonesia pada kuartal kedua tahun 2015? sentences: - Jumlah Sekolah, Guru, dan Murid Sekolah Menengah Pertama (SMP) di Bawah Kementrian Pendidikan dan Kebudayaan Menurut Provinsi 2011/2012-2015/2016 - Ringkasan Neraca Arus Dana Triwulan III Tahun 2003 (Miliar Rupiah) - Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tenggara, 2018-2023 - source_sentence: Berapa persen pengeluaran orang di kotaa untuk makanan vs non-makanan, per provinsi, 2018? sentences: - Ekspor Tanaman Obat, Aromatik, dan Rempah-Rempah menurut Negara Tujuan Utama, 2012-2023 - Rata-rata Pendapatan Bersih Pekerja Bebas Menurut Provinsi dan Pendidikan Tertinggi yang Ditamatkan (ribu rupiah), 2017 - IHK dan Rata-rata Upah per Bulan Buruh Industri di Bawah Mandor (Supervisor), 1996-2014 (1996=100) - source_sentence: Negara-negara asal impor crude oil dan produk turunannya tahun 2002-2023 sentences: - Persentase Pengeluaran Rata-rata per Kapita Sebulan Menurut Kelompok Barang, Indonesia, 1999, 2002-2023 - Rata-rata Pendapatan Bersih Berusaha Sendiri menurut Provinsi dan Pendidikan yang Ditamatkan (ribu rupiah), 2016 - Perkembangan Beberapa Agregat Pendapatan dan Pendapatan per Kapita Atas Dasar Harga Berlaku, 2010-2016 - source_sentence: Arus dana Q3 2006 sentences: - Posisi Simpanan Berjangka Rupiah pada Bank Umum dan BPR Menurut Golongan Pemilik (miliar rupiah), 2005-2018 - Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah) - Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012 model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 results: - task: type: binary-classification name: Binary Classification dataset: name: allstats semantic mini v1 test type: allstats-semantic-mini-v1_test metrics: - type: cosine_accuracy value: 0.9808882417182381 name: Cosine Accuracy - type: cosine_accuracy_threshold value: 0.7710697650909424 name: Cosine Accuracy Threshold - type: cosine_f1 value: 0.9706129303106633 name: Cosine F1 - type: cosine_f1_threshold value: 0.7710697650909424 name: Cosine F1 Threshold - type: cosine_precision value: 0.9725182277061133 name: Cosine Precision - type: cosine_recall value: 0.9687150837988827 name: Cosine Recall - type: cosine_ap value: 0.995703716321768 name: Cosine Ap - type: cosine_mcc value: 0.9564565560407811 name: Cosine Mcc - task: type: binary-classification name: Binary Classification dataset: name: allstats semantic mini v1 dev type: allstats-semantic-mini-v1_dev metrics: - type: cosine_accuracy value: 0.9753604672385472 name: Cosine Accuracy - type: cosine_accuracy_threshold value: 0.773653507232666 name: Cosine Accuracy Threshold - type: cosine_f1 value: 0.9622799664710814 name: Cosine F1 - type: cosine_f1_threshold value: 0.773653507232666 name: Cosine F1 Threshold - type: cosine_precision value: 0.9545454545454546 name: Cosine Precision - type: cosine_recall value: 0.9701408450704225 name: Cosine Recall - type: cosine_ap value: 0.9927278433062661 name: Cosine Ap - type: cosine_mcc value: 0.9440549314838564 name: Cosine Mcc --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6") # Run inference sentences = [ 'Arus dana Q3 2006', 'Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)', 'Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Binary Classification * Datasets: `allstats-semantic-mini-v1_test` and `allstats-semantic-mini-v1_dev` * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) | Metric | allstats-semantic-mini-v1_test | allstats-semantic-mini-v1_dev | |:--------------------------|:-------------------------------|:------------------------------| | cosine_accuracy | 0.9809 | 0.9754 | | cosine_accuracy_threshold | 0.7711 | 0.7737 | | cosine_f1 | 0.9706 | 0.9623 | | cosine_f1_threshold | 0.7711 | 0.7737 | | cosine_precision | 0.9725 | 0.9545 | | cosine_recall | 0.9687 | 0.9701 | | **cosine_ap** | **0.9957** | **0.9927** | | cosine_mcc | 0.9565 | 0.9441 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### query-hard-pos-neg-doc-pairs-statictable * Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f) * Size: 25,580 training samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 7 tokens</li><li>mean: 20.14 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.9 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>0: ~70.80%</li><li>1: ~29.20%</li></ul> | * Samples: | query | doc | label | |:-------------------------------------------------------------------------|:----------------------------------------------|:---------------| | <code>Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | | <code>status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | | <code>STATUS PEKERJAAN UTAMA PENDUDUK USIA 15+ YANG BEKERJA, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Evaluation Dataset #### query-hard-pos-neg-doc-pairs-statictable * Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f) * Size: 5,479 evaluation samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 7 tokens</li><li>mean: 20.78 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.28 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>0: ~71.50%</li><li>1: ~28.50%</li></ul> | * Samples: | query | doc | label | |:-----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | | <code>bagaimana perbandingan pns pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | | <code>BAGAIMANA PERBANDINGAN PNS PRIA DAN WANITA DI BERBAGAI GOLONGAN TAHUN 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `warmup_ratio`: 0.2 - `fp16`: True - `load_best_model_at_end`: True - `eval_on_start`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: True - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | allstats-semantic-mini-v1_test_cosine_ap | allstats-semantic-mini-v1_dev_cosine_ap | |:---------:|:--------:|:-------------:|:---------------:|:----------------------------------------:|:---------------------------------------:| | -1 | -1 | - | - | 0.8910 | - | | 0 | 0 | - | 1.0484 | - | 0.8789 | | 0.025 | 20 | 1.0003 | 0.9175 | - | 0.8856 | | 0.05 | 40 | 0.6667 | 0.6433 | - | 0.9010 | | 0.075 | 60 | 0.5982 | 0.5203 | - | 0.9145 | | 0.1 | 80 | 0.4476 | 0.4175 | - | 0.9344 | | 0.125 | 100 | 0.3489 | 0.3152 | - | 0.9540 | | 0.15 | 120 | 0.1643 | 0.2726 | - | 0.9602 | | 0.175 | 140 | 0.2126 | 0.2525 | - | 0.9631 | | 0.2 | 160 | 0.1797 | 0.2151 | - | 0.9715 | | 0.225 | 180 | 0.1304 | 0.1895 | - | 0.9756 | | 0.25 | 200 | 0.1714 | 0.2142 | - | 0.9767 | | 0.275 | 220 | 0.1758 | 0.1840 | - | 0.9791 | | 0.3 | 240 | 0.0562 | 0.1723 | - | 0.9801 | | 0.325 | 260 | 0.0863 | 0.1656 | - | 0.9773 | | 0.35 | 280 | 0.12 | 0.1806 | - | 0.9788 | | 0.375 | 300 | 0.0982 | 0.1792 | - | 0.9769 | | 0.4 | 320 | 0.0421 | 0.1724 | - | 0.9783 | | 0.425 | 340 | 0.1078 | 0.2158 | - | 0.9733 | | 0.45 | 360 | 0.0882 | 0.1501 | - | 0.9822 | | 0.475 | 380 | 0.0251 | 0.1334 | - | 0.9843 | | 0.5 | 400 | 0.0267 | 0.1238 | - | 0.9855 | | 0.525 | 420 | 0.0899 | 0.1404 | - | 0.9859 | | 0.55 | 440 | 0.0782 | 0.1253 | - | 0.9852 | | 0.575 | 460 | 0.1209 | 0.1772 | - | 0.9768 | | 0.6 | 480 | 0.0643 | 0.1817 | - | 0.9763 | | 0.625 | 500 | 0.1051 | 0.2030 | - | 0.9748 | | 0.65 | 520 | 0.0494 | 0.1405 | - | 0.9814 | | 0.675 | 540 | 0.0548 | 0.1175 | - | 0.9831 | | 0.7 | 560 | 0.121 | 0.1597 | - | 0.9819 | | 0.725 | 580 | 0.0642 | 0.1675 | - | 0.9811 | | 0.75 | 600 | 0.0618 | 0.1539 | - | 0.9827 | | 0.775 | 620 | 0.0745 | 0.1149 | - | 0.9845 | | 0.8 | 640 | 0.0452 | 0.1562 | - | 0.9797 | | 0.825 | 660 | 0.0816 | 0.1580 | - | 0.9816 | | 0.85 | 680 | 0.0957 | 0.1192 | - | 0.9830 | | 0.875 | 700 | 0.06 | 0.1100 | - | 0.9863 | | 0.9 | 720 | 0.018 | 0.1300 | - | 0.9822 | | 0.925 | 740 | 0.0213 | 0.1267 | - | 0.9843 | | 0.95 | 760 | 0.0263 | 0.1687 | - | 0.9796 | | 0.975 | 780 | 0.032 | 0.1250 | - | 0.9849 | | 1.0 | 800 | 0.065 | 0.1363 | - | 0.9828 | | 1.025 | 820 | 0.0174 | 0.1394 | - | 0.9835 | | 1.05 | 840 | 0.0568 | 0.1124 | - | 0.9849 | | 1.075 | 860 | 0.0464 | 0.1174 | - | 0.9826 | | 1.1 | 880 | 0.013 | 0.1178 | - | 0.9814 | | 1.125 | 900 | 0.0331 | 0.1239 | - | 0.9812 | | 1.15 | 920 | 0.0416 | 0.1240 | - | 0.9817 | | 1.175 | 940 | 0.0111 | 0.1303 | - | 0.9840 | | 1.2 | 960 | 0.0441 | 0.1156 | - | 0.9854 | | 1.225 | 980 | 0.0243 | 0.0972 | - | 0.9879 | | 1.25 | 1000 | 0.0 | 0.0917 | - | 0.9877 | | 1.275 | 1020 | 0.0477 | 0.0863 | - | 0.9885 | | 1.3 | 1040 | 0.0108 | 0.1029 | - | 0.9877 | | 1.325 | 1060 | 0.0 | 0.1103 | - | 0.9869 | | 1.35 | 1080 | 0.0134 | 0.1113 | - | 0.9871 | | 1.375 | 1100 | 0.0 | 0.1146 | - | 0.9870 | | 1.4 | 1120 | 0.0132 | 0.1218 | - | 0.9862 | | 1.425 | 1140 | 0.0223 | 0.0948 | - | 0.9883 | | 1.45 | 1160 | 0.0183 | 0.0883 | - | 0.9883 | | 1.475 | 1180 | 0.0378 | 0.0961 | - | 0.9881 | | 1.5 | 1200 | 0.0114 | 0.0961 | - | 0.9882 | | 1.525 | 1220 | 0.0143 | 0.1020 | - | 0.9861 | | 1.55 | 1240 | 0.0183 | 0.0867 | - | 0.9888 | | 1.575 | 1260 | 0.0 | 0.0858 | - | 0.9892 | | 1.6 | 1280 | 0.0 | 0.0858 | - | 0.9892 | | 1.625 | 1300 | 0.0 | 0.0858 | - | 0.9892 | | 1.65 | 1320 | 0.0172 | 0.0846 | - | 0.9896 | | 1.675 | 1340 | 0.0153 | 0.0754 | - | 0.9917 | | 1.7 | 1360 | 0.0163 | 0.0770 | - | 0.9913 | | 1.725 | 1380 | 0.0167 | 0.0943 | - | 0.9901 | | 1.75 | 1400 | 0.0148 | 0.0964 | - | 0.9899 | | 1.775 | 1420 | 0.0065 | 0.0930 | - | 0.9902 | | 1.8 | 1440 | 0.0 | 0.0945 | - | 0.9904 | | 1.825 | 1460 | 0.0067 | 0.0991 | - | 0.9895 | | 1.85 | 1480 | 0.0194 | 0.0996 | - | 0.9894 | | 1.875 | 1500 | 0.0 | 0.0953 | - | 0.9903 | | 1.9 | 1520 | 0.0236 | 0.0883 | - | 0.9906 | | 1.925 | 1540 | 0.0111 | 0.0858 | - | 0.9904 | | 1.95 | 1560 | 0.0 | 0.0878 | - | 0.9903 | | 1.975 | 1580 | 0.0147 | 0.0849 | - | 0.9906 | | 2.0 | 1600 | 0.0154 | 0.0852 | - | 0.9902 | | 2.025 | 1620 | 0.0067 | 0.0861 | - | 0.9903 | | 2.05 | 1640 | 0.019 | 0.0859 | - | 0.9907 | | 2.075 | 1660 | 0.0083 | 0.0875 | - | 0.9908 | | 2.1 | 1680 | 0.0067 | 0.0771 | - | 0.9917 | | 2.125 | 1700 | 0.0 | 0.0773 | - | 0.9917 | | 2.15 | 1720 | 0.0071 | 0.0771 | - | 0.9919 | | 2.175 | 1740 | 0.0064 | 0.0756 | - | 0.9916 | | 2.2 | 1760 | 0.0 | 0.0772 | - | 0.9916 | | 2.225 | 1780 | 0.0 | 0.0772 | - | 0.9915 | | 2.25 | 1800 | 0.0158 | 0.0734 | - | 0.9920 | | 2.275 | 1820 | 0.0 | 0.0730 | - | 0.9920 | | 2.3 | 1840 | 0.0 | 0.0733 | - | 0.9920 | | 2.325 | 1860 | 0.0161 | 0.0681 | - | 0.9922 | | 2.35 | 1880 | 0.0 | 0.0713 | - | 0.9920 | | 2.375 | 1900 | 0.0 | 0.0721 | - | 0.9920 | | 2.4 | 1920 | 0.0 | 0.0722 | - | 0.9920 | | 2.425 | 1940 | 0.0064 | 0.0648 | - | 0.9928 | | 2.45 | 1960 | 0.0068 | 0.0641 | - | 0.9930 | | **2.475** | **1980** | **0.0069** | **0.0635** | **-** | **0.9929** | | 2.5 | 2000 | 0.0066 | 0.0657 | - | 0.9929 | | 2.525 | 2020 | 0.0 | 0.0657 | - | 0.9930 | | 2.55 | 2040 | 0.0139 | 0.0657 | - | 0.9931 | | 2.575 | 2060 | 0.0 | 0.0667 | - | 0.9931 | | 2.6 | 2080 | 0.0 | 0.0666 | - | 0.9931 | | 2.625 | 2100 | 0.0 | 0.0666 | - | 0.9931 | | 2.65 | 2120 | 0.0 | 0.0666 | - | 0.9931 | | 2.675 | 2140 | 0.0 | 0.0667 | - | 0.9931 | | 2.7 | 2160 | 0.0 | 0.0666 | - | 0.9931 | | 2.725 | 2180 | 0.0 | 0.0666 | - | 0.9931 | | 2.75 | 2200 | 0.0071 | 0.0665 | - | 0.9931 | | 2.775 | 2220 | 0.0 | 0.0671 | - | 0.9931 | | 2.8 | 2240 | 0.0071 | 0.0692 | - | 0.9928 | | 2.825 | 2260 | 0.0 | 0.0700 | - | 0.9927 | | 2.85 | 2280 | 0.0068 | 0.0688 | - | 0.9927 | | 2.875 | 2300 | 0.0 | 0.0688 | - | 0.9927 | | 2.9 | 2320 | 0.0 | 0.0688 | - | 0.9927 | | 2.925 | 2340 | 0.0 | 0.0688 | - | 0.9927 | | 2.95 | 2360 | 0.0 | 0.0688 | - | 0.9927 | | 2.975 | 2380 | 0.0 | 0.0688 | - | 0.9927 | | 3.0 | 2400 | 0.0 | 0.0688 | - | 0.9927 | | -1 | -1 | - | - | 0.9957 | - | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.4.0 - Transformers: 4.48.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6") # Run inference sentences = [ 'Arus dana Q3 2006', 'Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)', 'Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Binary Classification * Datasets: `allstats-semantic-mini-v1_test` and `allstats-semantic-mini-v1_dev` * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator) | Metric | allstats-semantic-mini-v1_test | allstats-semantic-mini-v1_dev | |:--------------------------|:-------------------------------|:------------------------------| | cosine_accuracy | 0.9809 | 0.9754 | | cosine_accuracy_threshold | 0.7711 | 0.7737 | | cosine_f1 | 0.9706 | 0.9623 | | cosine_f1_threshold | 0.7711 | 0.7737 | | cosine_precision | 0.9725 | 0.9545 | | cosine_recall | 0.9687 | 0.9701 | | **cosine_ap** | **0.9957** | **0.9927** | | cosine_mcc | 0.9565 | 0.9441 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### query-hard-pos-neg-doc-pairs-statictable * Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f) * Size: 25,580 training samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 7 tokens</li><li>mean: 20.14 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.9 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>0: ~70.80%</li><li>1: ~29.20%</li></ul> | * Samples: | query | doc | label | |:-------------------------------------------------------------------------|:----------------------------------------------|:---------------| | <code>Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | | <code>status pekerjaan utama penduduk usia 15+ yang bekerja, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | | <code>STATUS PEKERJAAN UTAMA PENDUDUK USIA 15+ YANG BEKERJA, 2020</code> | <code>Jumlah Penghuni Lapas per Kanwil</code> | <code>0</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Evaluation Dataset #### query-hard-pos-neg-doc-pairs-statictable * Dataset: [query-hard-pos-neg-doc-pairs-statictable](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable) at [7b28b96](https://huggingface.co/datasets/yahyaabd/query-hard-pos-neg-doc-pairs-statictable/tree/7b28b964daa3073a4d012d1ffca46ecd4f26bb5f) * Size: 5,479 evaluation samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 7 tokens</li><li>mean: 20.78 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 26.28 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>0: ~71.50%</li><li>1: ~28.50%</li></ul> | * Samples: | query | doc | label | |:-----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | | <code>bagaimana perbandingan pns pria dan wanita di berbagai golongan tahun 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | | <code>BAGAIMANA PERBANDINGAN PNS PRIA DAN WANITA DI BERBAGAI GOLONGAN TAHUN 2014?</code> | <code>Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017</code> | <code>0</code> | * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `warmup_ratio`: 0.2 - `fp16`: True - `load_best_model_at_end`: True - `eval_on_start`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.2 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: True - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | Validation Loss | allstats-semantic-mini-v1_test_cosine_ap | allstats-semantic-mini-v1_dev_cosine_ap | |:---------:|:--------:|:-------------:|:---------------:|:----------------------------------------:|:---------------------------------------:| | -1 | -1 | - | - | 0.8910 | - | | 0 | 0 | - | 1.0484 | - | 0.8789 | | 0.025 | 20 | 1.0003 | 0.9175 | - | 0.8856 | | 0.05 | 40 | 0.6667 | 0.6433 | - | 0.9010 | | 0.075 | 60 | 0.5982 | 0.5203 | - | 0.9145 | | 0.1 | 80 | 0.4476 | 0.4175 | - | 0.9344 | | 0.125 | 100 | 0.3489 | 0.3152 | - | 0.9540 | | 0.15 | 120 | 0.1643 | 0.2726 | - | 0.9602 | | 0.175 | 140 | 0.2126 | 0.2525 | - | 0.9631 | | 0.2 | 160 | 0.1797 | 0.2151 | - | 0.9715 | | 0.225 | 180 | 0.1304 | 0.1895 | - | 0.9756 | | 0.25 | 200 | 0.1714 | 0.2142 | - | 0.9767 | | 0.275 | 220 | 0.1758 | 0.1840 | - | 0.9791 | | 0.3 | 240 | 0.0562 | 0.1723 | - | 0.9801 | | 0.325 | 260 | 0.0863 | 0.1656 | - | 0.9773 | | 0.35 | 280 | 0.12 | 0.1806 | - | 0.9788 | | 0.375 | 300 | 0.0982 | 0.1792 | - | 0.9769 | | 0.4 | 320 | 0.0421 | 0.1724 | - | 0.9783 | | 0.425 | 340 | 0.1078 | 0.2158 | - | 0.9733 | | 0.45 | 360 | 0.0882 | 0.1501 | - | 0.9822 | | 0.475 | 380 | 0.0251 | 0.1334 | - | 0.9843 | | 0.5 | 400 | 0.0267 | 0.1238 | - | 0.9855 | | 0.525 | 420 | 0.0899 | 0.1404 | - | 0.9859 | | 0.55 | 440 | 0.0782 | 0.1253 | - | 0.9852 | | 0.575 | 460 | 0.1209 | 0.1772 | - | 0.9768 | | 0.6 | 480 | 0.0643 | 0.1817 | - | 0.9763 | | 0.625 | 500 | 0.1051 | 0.2030 | - | 0.9748 | | 0.65 | 520 | 0.0494 | 0.1405 | - | 0.9814 | | 0.675 | 540 | 0.0548 | 0.1175 | - | 0.9831 | | 0.7 | 560 | 0.121 | 0.1597 | - | 0.9819 | | 0.725 | 580 | 0.0642 | 0.1675 | - | 0.9811 | | 0.75 | 600 | 0.0618 | 0.1539 | - | 0.9827 | | 0.775 | 620 | 0.0745 | 0.1149 | - | 0.9845 | | 0.8 | 640 | 0.0452 | 0.1562 | - | 0.9797 | | 0.825 | 660 | 0.0816 | 0.1580 | - | 0.9816 | | 0.85 | 680 | 0.0957 | 0.1192 | - | 0.9830 | | 0.875 | 700 | 0.06 | 0.1100 | - | 0.9863 | | 0.9 | 720 | 0.018 | 0.1300 | - | 0.9822 | | 0.925 | 740 | 0.0213 | 0.1267 | - | 0.9843 | | 0.95 | 760 | 0.0263 | 0.1687 | - | 0.9796 | | 0.975 | 780 | 0.032 | 0.1250 | - | 0.9849 | | 1.0 | 800 | 0.065 | 0.1363 | - | 0.9828 | | 1.025 | 820 | 0.0174 | 0.1394 | - | 0.9835 | | 1.05 | 840 | 0.0568 | 0.1124 | - | 0.9849 | | 1.075 | 860 | 0.0464 | 0.1174 | - | 0.9826 | | 1.1 | 880 | 0.013 | 0.1178 | - | 0.9814 | | 1.125 | 900 | 0.0331 | 0.1239 | - | 0.9812 | | 1.15 | 920 | 0.0416 | 0.1240 | - | 0.9817 | | 1.175 | 940 | 0.0111 | 0.1303 | - | 0.9840 | | 1.2 | 960 | 0.0441 | 0.1156 | - | 0.9854 | | 1.225 | 980 | 0.0243 | 0.0972 | - | 0.9879 | | 1.25 | 1000 | 0.0 | 0.0917 | - | 0.9877 | | 1.275 | 1020 | 0.0477 | 0.0863 | - | 0.9885 | | 1.3 | 1040 | 0.0108 | 0.1029 | - | 0.9877 | | 1.325 | 1060 | 0.0 | 0.1103 | - | 0.9869 | | 1.35 | 1080 | 0.0134 | 0.1113 | - | 0.9871 | | 1.375 | 1100 | 0.0 | 0.1146 | - | 0.9870 | | 1.4 | 1120 | 0.0132 | 0.1218 | - | 0.9862 | | 1.425 | 1140 | 0.0223 | 0.0948 | - | 0.9883 | | 1.45 | 1160 | 0.0183 | 0.0883 | - | 0.9883 | | 1.475 | 1180 | 0.0378 | 0.0961 | - | 0.9881 | | 1.5 | 1200 | 0.0114 | 0.0961 | - | 0.9882 | | 1.525 | 1220 | 0.0143 | 0.1020 | - | 0.9861 | | 1.55 | 1240 | 0.0183 | 0.0867 | - | 0.9888 | | 1.575 | 1260 | 0.0 | 0.0858 | - | 0.9892 | | 1.6 | 1280 | 0.0 | 0.0858 | - | 0.9892 | | 1.625 | 1300 | 0.0 | 0.0858 | - | 0.9892 | | 1.65 | 1320 | 0.0172 | 0.0846 | - | 0.9896 | | 1.675 | 1340 | 0.0153 | 0.0754 | - | 0.9917 | | 1.7 | 1360 | 0.0163 | 0.0770 | - | 0.9913 | | 1.725 | 1380 | 0.0167 | 0.0943 | - | 0.9901 | | 1.75 | 1400 | 0.0148 | 0.0964 | - | 0.9899 | | 1.775 | 1420 | 0.0065 | 0.0930 | - | 0.9902 | | 1.8 | 1440 | 0.0 | 0.0945 | - | 0.9904 | | 1.825 | 1460 | 0.0067 | 0.0991 | - | 0.9895 | | 1.85 | 1480 | 0.0194 | 0.0996 | - | 0.9894 | | 1.875 | 1500 | 0.0 | 0.0953 | - | 0.9903 | | 1.9 | 1520 | 0.0236 | 0.0883 | - | 0.9906 | | 1.925 | 1540 | 0.0111 | 0.0858 | - | 0.9904 | | 1.95 | 1560 | 0.0 | 0.0878 | - | 0.9903 | | 1.975 | 1580 | 0.0147 | 0.0849 | - | 0.9906 | | 2.0 | 1600 | 0.0154 | 0.0852 | - | 0.9902 | | 2.025 | 1620 | 0.0067 | 0.0861 | - | 0.9903 | | 2.05 | 1640 | 0.019 | 0.0859 | - | 0.9907 | | 2.075 | 1660 | 0.0083 | 0.0875 | - | 0.9908 | | 2.1 | 1680 | 0.0067 | 0.0771 | - | 0.9917 | | 2.125 | 1700 | 0.0 | 0.0773 | - | 0.9917 | | 2.15 | 1720 | 0.0071 | 0.0771 | - | 0.9919 | | 2.175 | 1740 | 0.0064 | 0.0756 | - | 0.9916 | | 2.2 | 1760 | 0.0 | 0.0772 | - | 0.9916 | | 2.225 | 1780 | 0.0 | 0.0772 | - | 0.9915 | | 2.25 | 1800 | 0.0158 | 0.0734 | - | 0.9920 | | 2.275 | 1820 | 0.0 | 0.0730 | - | 0.9920 | | 2.3 | 1840 | 0.0 | 0.0733 | - | 0.9920 | | 2.325 | 1860 | 0.0161 | 0.0681 | - | 0.9922 | | 2.35 | 1880 | 0.0 | 0.0713 | - | 0.9920 | | 2.375 | 1900 | 0.0 | 0.0721 | - | 0.9920 | | 2.4 | 1920 | 0.0 | 0.0722 | - | 0.9920 | | 2.425 | 1940 | 0.0064 | 0.0648 | - | 0.9928 | | 2.45 | 1960 | 0.0068 | 0.0641 | - | 0.9930 | | **2.475** | **1980** | **0.0069** | **0.0635** | **-** | **0.9929** | | 2.5 | 2000 | 0.0066 | 0.0657 | - | 0.9929 | | 2.525 | 2020 | 0.0 | 0.0657 | - | 0.9930 | | 2.55 | 2040 | 0.0139 | 0.0657 | - | 0.9931 | | 2.575 | 2060 | 0.0 | 0.0667 | - | 0.9931 | | 2.6 | 2080 | 0.0 | 0.0666 | - | 0.9931 | | 2.625 | 2100 | 0.0 | 0.0666 | - | 0.9931 | | 2.65 | 2120 | 0.0 | 0.0666 | - | 0.9931 | | 2.675 | 2140 | 0.0 | 0.0667 | - | 0.9931 | | 2.7 | 2160 | 0.0 | 0.0666 | - | 0.9931 | | 2.725 | 2180 | 0.0 | 0.0666 | - | 0.9931 | | 2.75 | 2200 | 0.0071 | 0.0665 | - | 0.9931 | | 2.775 | 2220 | 0.0 | 0.0671 | - | 0.9931 | | 2.8 | 2240 | 0.0071 | 0.0692 | - | 0.9928 | | 2.825 | 2260 | 0.0 | 0.0700 | - | 0.9927 | | 2.85 | 2280 | 0.0068 | 0.0688 | - | 0.9927 | | 2.875 | 2300 | 0.0 | 0.0688 | - | 0.9927 | | 2.9 | 2320 | 0.0 | 0.0688 | - | 0.9927 | | 2.925 | 2340 | 0.0 | 0.0688 | - | 0.9927 | | 2.95 | 2360 | 0.0 | 0.0688 | - | 0.9927 | | 2.975 | 2380 | 0.0 | 0.0688 | - | 0.9927 | | 3.0 | 2400 | 0.0 | 0.0688 | - | 0.9927 | | -1 | -1 | - | - | 0.9957 | - | * The bold row denotes the saved checkpoint. </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.4.0 - Transformers: 4.48.1 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "datasets": ["yahyaabd/query-hard-pos-neg-doc-pairs-statictable"], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "cosine_accuracy_threshold", "cosine_f1", "cosine_f1_threshold", "cosine_precision", "cosine_recall", "cosine_ap", "cosine_mcc"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:25580", "loss:OnlineContrastiveLoss"], "widget": [{"source_sentence": "ikhtisar arus kas triwulan 1, 2004 (miliar)", "sentences": ["Balita (0-59 Bulan) Menurut Status Gizi, Tahun 1998-2005", "Perbandingan Indeks dan Tingkat Inflasi Desember 2023 Kota-kota di Luar Pulau Jawa dan Sumatera dengan Nasional (2018=100)", "Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tengah, 2018-2023"]}, {"source_sentence": "BaIgaimana gambaran neraca arus dana dUi Indonesia pada kuartal kedua tahun 2015?", "sentences": ["Jumlah Sekolah, Guru, dan Murid Sekolah Menengah Pertama (SMP) di Bawah Kementrian Pendidikan dan Kebudayaan Menurut Provinsi 2011/2012-2015/2016", "Ringkasan Neraca Arus Dana Triwulan III Tahun 2003 (Miliar Rupiah)", "Rata-rata Konsumsi dan Pengeluaran Perkapita Seminggu Menurut Komoditi Makanan dan Golongan Pengeluaran per Kapita Seminggu di Provinsi Sulawesi Tenggara, 2018-2023"]}, {"source_sentence": "Berapa persen pengeluaran orang di kotaa untuk makanan vs non-makanan, per provinsi, 2018?", "sentences": ["Ekspor Tanaman Obat, Aromatik, dan Rempah-Rempah menurut Negara Tujuan Utama, 2012-2023", "Rata-rata Pendapatan Bersih Pekerja Bebas Menurut Provinsi dan Pendidikan Tertinggi yang Ditamatkan (ribu rupiah), 2017", "IHK dan Rata-rata Upah per Bulan Buruh Industri di Bawah Mandor (Supervisor), 1996-2014 (1996=100)"]}, {"source_sentence": "Negara-negara asal impor crude oil dan produk turunannya tahun 2002-2023", "sentences": ["Persentase Pengeluaran Rata-rata per Kapita Sebulan Menurut Kelompok Barang, Indonesia, 1999, 2002-2023", "Rata-rata Pendapatan Bersih Berusaha Sendiri menurut Provinsi dan Pendidikan yang Ditamatkan (ribu rupiah), 2016", "Perkembangan Beberapa Agregat Pendapatan dan Pendapatan per Kapita Atas Dasar Harga Berlaku, 2010-2016"]}, {"source_sentence": "Arus dana Q3 2006", "sentences": ["Posisi Simpanan Berjangka Rupiah pada Bank Umum dan BPR Menurut Golongan Pemilik (miliar rupiah), 2005-2018", "Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)", "Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012"]}], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "results": [{"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "allstats semantic mini v1 test", "type": "allstats-semantic-mini-v1_test"}, "metrics": [{"type": "cosine_accuracy", "value": 0.9808882417182381, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.7710697650909424, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", "value": 0.9706129303106633, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.7710697650909424, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.9725182277061133, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.9687150837988827, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.995703716321768, "name": "Cosine Ap"}, {"type": "cosine_mcc", "value": 0.9564565560407811, "name": "Cosine Mcc"}]}, {"task": {"type": "binary-classification", "name": "Binary Classification"}, "dataset": {"name": "allstats semantic mini v1 dev", "type": "allstats-semantic-mini-v1_dev"}, "metrics": [{"type": "cosine_accuracy", "value": 0.9753604672385472, "name": "Cosine Accuracy"}, {"type": "cosine_accuracy_threshold", "value": 0.773653507232666, "name": "Cosine Accuracy Threshold"}, {"type": "cosine_f1", "value": 0.9622799664710814, "name": "Cosine F1"}, {"type": "cosine_f1_threshold", "value": 0.773653507232666, "name": "Cosine F1 Threshold"}, {"type": "cosine_precision", "value": 0.9545454545454546, "name": "Cosine Precision"}, {"type": "cosine_recall", "value": 0.9701408450704225, "name": "Cosine Recall"}, {"type": "cosine_ap", "value": 0.9927278433062661, "name": "Cosine Ap"}, {"type": "cosine_mcc", "value": 0.9440549314838564, "name": "Cosine Mcc"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,259
Helsinki-NLP/opus-mt-de-bzs
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "bzs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:27:36+00:00
70
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-de-bzs * source languages: de * target languages: bzs * OPUS readme: [de-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.bzs | 21.0 | 0.389 |
null
Non_BioNLP
### opus-mt-de-bzs * source languages: de * target languages: bzs * OPUS readme: [de-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-bzs/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.bzs | 21.0 | 0.389 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,260
lomov/targetsandgoalsv1
lomov
text-classification
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "autotrain", "dataset:targetsandgoalsv1/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-20T00:51:53Z
2024-05-20T00:52:59+00:00
6
0
--- datasets: - targetsandgoalsv1/autotrain-data tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.22812850773334503 f1_macro: 0.928605054676046 f1_micro: 0.9313725490196079 f1_weighted: 0.9297769573887364 precision_macro: 0.9294524189261031 precision_micro: 0.9313725490196079 precision_weighted: 0.930390072030939 recall_macro: 0.93 recall_micro: 0.9313725490196079 recall_weighted: 0.9313725490196079 accuracy: 0.9313725490196079
null
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.22812850773334503 f1_macro: 0.928605054676046 f1_micro: 0.9313725490196079 f1_weighted: 0.9297769573887364 precision_macro: 0.9294524189261031 precision_micro: 0.9313725490196079 precision_weighted: 0.930390072030939 recall_macro: 0.93 recall_micro: 0.9313725490196079 recall_weighted: 0.9313725490196079 accuracy: 0.9313725490196079
{"datasets": ["targetsandgoalsv1/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
task
[ "TEXT_CLASSIFICATION" ]
44,261
KDB/bert-base-finetuned-sts
KDB
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:klue", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-28T17:54:52Z
2022-05-30T03:59:09+00:00
16
0
--- datasets: - klue metrics: - pearsonr tags: - generated_from_trainer model-index: - name: bert-base-finetuned-sts results: - task: type: text-classification name: Text Classification dataset: name: klue type: klue args: sts metrics: - type: pearsonr value: 0.8970473420720607 name: Pearsonr --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-sts This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4770 - Pearsonr: 0.8970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearsonr | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 92 | 0.6330 | 0.8717 | | No log | 2.0 | 184 | 0.6206 | 0.8818 | | No log | 3.0 | 276 | 0.5010 | 0.8947 | | No log | 4.0 | 368 | 0.4717 | 0.8956 | | No log | 5.0 | 460 | 0.4770 | 0.8970 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-sts This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4770 - Pearsonr: 0.8970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearsonr | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 92 | 0.6330 | 0.8717 | | No log | 2.0 | 184 | 0.6206 | 0.8818 | | No log | 3.0 | 276 | 0.5010 | 0.8947 | | No log | 4.0 | 368 | 0.4717 | 0.8956 | | No log | 5.0 | 460 | 0.4770 | 0.8970 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
{"datasets": ["klue"], "metrics": ["pearsonr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-finetuned-sts", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "klue", "type": "klue", "args": "sts"}, "metrics": [{"type": "pearsonr", "value": 0.8970473420720607, "name": "Pearsonr"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,262
starklin/distilbert-base-uncased-finetuned-emotion
starklin
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-07-03T08:01:21Z
2024-07-03T08:27:59+00:00
106
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.9355 name: Accuracy - type: f1 value: 0.9356673280620742 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1688 - Accuracy: 0.9355 - F1: 0.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8017 | 1.0 | 250 | 0.2908 | 0.9115 | 0.9115 | | 0.2152 | 2.0 | 500 | 0.1851 | 0.924 | 0.9241 | | 0.1467 | 3.0 | 750 | 0.1688 | 0.9355 | 0.9357 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1688 - Accuracy: 0.9355 - F1: 0.9357 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8017 | 1.0 | 250 | 0.2908 | 0.9115 | 0.9115 | | 0.2152 | 2.0 | 500 | 0.1851 | 0.924 | 0.9241 | | 0.1467 | 3.0 | 750 | 0.1688 | 0.9355 | 0.9357 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9355, "name": "Accuracy"}, {"type": "f1", "value": 0.9356673280620742, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,263
gokuls/HBERTv1_48_L2_H512_A8_emotion
gokuls
text-classification
[ "transformers", "pytorch", "hybridbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:gokuls/HBERTv1_48_L2_H512_A8", "base_model:finetune:gokuls/HBERTv1_48_L2_H512_A8", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-04T22:49:46Z
2023-10-04T22:51:17+00:00
5
0
--- base_model: gokuls/HBERTv1_48_L2_H512_A8 datasets: - emotion metrics: - accuracy tags: - generated_from_trainer model-index: - name: HBERTv1_48_L2_H512_A8_emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.892 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L2_H512_A8_emotion This model is a fine-tuned version of [gokuls/HBERTv1_48_L2_H512_A8](https://huggingface.co/gokuls/HBERTv1_48_L2_H512_A8) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3286 - Accuracy: 0.892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7772 | 1.0 | 250 | 0.3597 | 0.879 | | 0.3236 | 2.0 | 500 | 0.3267 | 0.883 | | 0.2453 | 3.0 | 750 | 0.3155 | 0.886 | | 0.201 | 4.0 | 1000 | 0.3350 | 0.889 | | 0.1635 | 5.0 | 1250 | 0.3286 | 0.892 | | 0.135 | 6.0 | 1500 | 0.3490 | 0.883 | | 0.1065 | 7.0 | 1750 | 0.3686 | 0.89 | | 0.0887 | 8.0 | 2000 | 0.4164 | 0.8805 | | 0.0719 | 9.0 | 2250 | 0.4409 | 0.8805 | | 0.0626 | 10.0 | 2500 | 0.4464 | 0.8825 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.0
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L2_H512_A8_emotion This model is a fine-tuned version of [gokuls/HBERTv1_48_L2_H512_A8](https://huggingface.co/gokuls/HBERTv1_48_L2_H512_A8) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3286 - Accuracy: 0.892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7772 | 1.0 | 250 | 0.3597 | 0.879 | | 0.3236 | 2.0 | 500 | 0.3267 | 0.883 | | 0.2453 | 3.0 | 750 | 0.3155 | 0.886 | | 0.201 | 4.0 | 1000 | 0.3350 | 0.889 | | 0.1635 | 5.0 | 1250 | 0.3286 | 0.892 | | 0.135 | 6.0 | 1500 | 0.3490 | 0.883 | | 0.1065 | 7.0 | 1750 | 0.3686 | 0.89 | | 0.0887 | 8.0 | 2000 | 0.4164 | 0.8805 | | 0.0719 | 9.0 | 2250 | 0.4409 | 0.8805 | | 0.0626 | 10.0 | 2500 | 0.4464 | 0.8825 | ### Framework versions - Transformers 4.34.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.14.0
{"base_model": "gokuls/HBERTv1_48_L2_H512_A8", "datasets": ["emotion"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "HBERTv1_48_L2_H512_A8_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.892, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,264
Helsinki-NLP/opus-mt-ja-tr
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:59:24+00:00
35
0
--- language: - ja - tr license: apache-2.0 tags: - translation --- ### jpn-tur * source group: Japanese * target group: Turkish * OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.tur | 16.7 | 0.434 | ### System Info: - hf_name: jpn-tur - source_languages: jpn - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'tr'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: tur - short_pair: ja-tr - chrF2_score: 0.434 - bleu: 16.7 - brevity_penalty: 0.932 - ref_len: 4755.0 - src_name: Japanese - tgt_name: Turkish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: tr - prefer_old: False - long_pair: jpn-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### jpn-tur * source group: Japanese * target group: Turkish * OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.tur | 16.7 | 0.434 | ### System Info: - hf_name: jpn-tur - source_languages: jpn - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'tr'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: tur - short_pair: ja-tr - chrF2_score: 0.434 - bleu: 16.7 - brevity_penalty: 0.932 - ref_len: 4755.0 - src_name: Japanese - tgt_name: Turkish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: tr - prefer_old: False - long_pair: jpn-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "tr"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,265
Nextcloud-AI/opus-mt-nl-en
Nextcloud-AI
translation
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-02-23T10:45:56Z
2023-08-16T12:01:39+00:00
14
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-nl-en * source languages: nl * target languages: en * OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip) * test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt) * test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.en | 60.9 | 0.749 |
null
Non_BioNLP
### opus-mt-nl-en * source languages: nl * target languages: en * OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip) * test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt) * test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.en | 60.9 | 0.749 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,266
RichardErkhov/simbolo-ai_-_Myanmarsar-GPT-4bits
RichardErkhov
null
[ "safetensors", "gpt2", "arxiv:2110.05896", "arxiv:2204.07580", "4-bit", "bitsandbytes", "region:us" ]
2025-01-24T08:07:11Z
2025-01-24T08:08:25+00:00
8
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Myanmarsar-GPT - bnb 4bits - Model creator: https://huggingface.co/simbolo-ai/ - Original model: https://huggingface.co/simbolo-ai/Myanmarsar-GPT/ Original model description: --- license: mit language: - my pipeline_tag: text-generation metrics: - code_eval library_name: transformers tags: - burmese - gpt2 - pre-trained --- The Simbolo's Myanmarsar-GPT (it is not a chatbot but a text generation model which can be used to develop chatbot) is pre-trained on a dataset of 20,000 Burmese data and pre-trained using the GPT-2 architecture of MGPT Model. Its purpose is to serve as a foundational pre-trained model for the Burmese language, facilitating fine-tuning for specific applications of different tasks such as creative writing, chatbot, machine translation etc. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6598b82502c4796342239a35/rFId3-xyzWW-juDq_er9k.jpeg) ### How to use ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Simbolo-Servicio/Myanmarsar-GPT") model = AutoModelForCausalLM.from_pretrained("Simbolo-Servicio/Myanmarsar-GPT") input_text = "ပညာရေး" input_ids = tokenizer.encode(input_text, return_tensors='pt') output = model.generate(input_ids, max_length=50) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ### Data We use 20,000 Burmese sentences and most are from our open-source [data](https://huggingface.co/datasets/Simbolo-Servicio/wiki-burmese-sentences) which contains 100,000 sentences sourced from Wikipedia. ### Contributors Main Contributor: [Sa Phyo Thu Htet](https://github.com/SaPhyoThuHtet) Wikipedia Data Crawling: Kaung Kaung Ko Ko, Phuu Pwint Thinzar Kyaing Releasing the Model: Eithandaraung, Ye Yint Htut, Thet Chit Su, Naing Phyo Aung, Nyan Linn Phyo Zaw, Lynn Thu Kha ### Acknowledgment We extend our gratitude to the creators of the [mGPT-XL](https://huggingface.co/ai-forever/mGPT) models for their invaluable contribution to this project. We want to thank everyone who has worked on the related works, especially [Minsithu](https://huggingface.co/jojo-ai-mst/MyanmarGPTT) and [Dr. Wai Yan Nyein Naing](https://huggingface.co/WYNN747/Burmese-GPT)who initiated the work of gpt-2 model. And We would like to thank Simbolo:Servico which is a branch of Simbolo under the company of Intello Tech for providing financial support. ### Limitations and Bias We have yet to investigate the potential bias inherent in this model thoroughly. Regarding transparency, it's important to note that the model is primarily trained on data from the Unicode Burmese(Myanmar) language. ### References 1. Jiang, Shengyi & Huang, Xiuwen & Cai, Xiaonan & Lin, Nankai. (2021). Pre-trained Models and Evaluation Data for the Myanmar Language. 10.1007/978-3-030-92310-5_52. 2. Lin, N., Fu, Y., Chen, C., Yang, Z., & Jiang, S. (2021). LaoPLM: Pre-trained Language Models for Lao. ArXiv. /abs/2110.05896 3. MinSithu, MyanmarGPT, https://huggingface.co/jojo-ai-mst/MyanmarGPT, 1.1-SweptWood 4. Wai Yan Nyein Naing, WYNN747/Burmese-GPT, https://huggingface.co/WYNN747/Burmese-GPT 5. Sai Htaung Kham, saihtaungkham/BurmeseRoBERTaCLM 6. Shliazhko, O., Fenogenova, A., Tikhonova, M., Mikhailov, V., Kozlova, A., & Shavrina, T. (2022). MGPT: Few-Shot Learners Go Multilingual. ArXiv. /abs/2204.07580 ### How to Cite this work: ### Cite As: ```bibtex @misc{myanmarsar-gpt, author = {{Sa Phyo Thu Htet}}, title = {Myanmarsar GPT}, url = {https://huggingface.co/Simbolo-Servicio/Myanmarsar-GPT}, urldate = {2024-1-09}, date = {2024-1-09} } ```
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Myanmarsar-GPT - bnb 4bits - Model creator: https://huggingface.co/simbolo-ai/ - Original model: https://huggingface.co/simbolo-ai/Myanmarsar-GPT/ Original model description: --- license: mit language: - my pipeline_tag: text-generation metrics: - code_eval library_name: transformers tags: - burmese - gpt2 - pre-trained --- The Simbolo's Myanmarsar-GPT (it is not a chatbot but a text generation model which can be used to develop chatbot) is pre-trained on a dataset of 20,000 Burmese data and pre-trained using the GPT-2 architecture of MGPT Model. Its purpose is to serve as a foundational pre-trained model for the Burmese language, facilitating fine-tuning for specific applications of different tasks such as creative writing, chatbot, machine translation etc. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6598b82502c4796342239a35/rFId3-xyzWW-juDq_er9k.jpeg) ### How to use ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Simbolo-Servicio/Myanmarsar-GPT") model = AutoModelForCausalLM.from_pretrained("Simbolo-Servicio/Myanmarsar-GPT") input_text = "ပညာရေး" input_ids = tokenizer.encode(input_text, return_tensors='pt') output = model.generate(input_ids, max_length=50) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ### Data We use 20,000 Burmese sentences and most are from our open-source [data](https://huggingface.co/datasets/Simbolo-Servicio/wiki-burmese-sentences) which contains 100,000 sentences sourced from Wikipedia. ### Contributors Main Contributor: [Sa Phyo Thu Htet](https://github.com/SaPhyoThuHtet) Wikipedia Data Crawling: Kaung Kaung Ko Ko, Phuu Pwint Thinzar Kyaing Releasing the Model: Eithandaraung, Ye Yint Htut, Thet Chit Su, Naing Phyo Aung, Nyan Linn Phyo Zaw, Lynn Thu Kha ### Acknowledgment We extend our gratitude to the creators of the [mGPT-XL](https://huggingface.co/ai-forever/mGPT) models for their invaluable contribution to this project. We want to thank everyone who has worked on the related works, especially [Minsithu](https://huggingface.co/jojo-ai-mst/MyanmarGPTT) and [Dr. Wai Yan Nyein Naing](https://huggingface.co/WYNN747/Burmese-GPT)who initiated the work of gpt-2 model. And We would like to thank Simbolo:Servico which is a branch of Simbolo under the company of Intello Tech for providing financial support. ### Limitations and Bias We have yet to investigate the potential bias inherent in this model thoroughly. Regarding transparency, it's important to note that the model is primarily trained on data from the Unicode Burmese(Myanmar) language. ### References 1. Jiang, Shengyi & Huang, Xiuwen & Cai, Xiaonan & Lin, Nankai. (2021). Pre-trained Models and Evaluation Data for the Myanmar Language. 10.1007/978-3-030-92310-5_52. 2. Lin, N., Fu, Y., Chen, C., Yang, Z., & Jiang, S. (2021). LaoPLM: Pre-trained Language Models for Lao. ArXiv. /abs/2110.05896 3. MinSithu, MyanmarGPT, https://huggingface.co/jojo-ai-mst/MyanmarGPT, 1.1-SweptWood 4. Wai Yan Nyein Naing, WYNN747/Burmese-GPT, https://huggingface.co/WYNN747/Burmese-GPT 5. Sai Htaung Kham, saihtaungkham/BurmeseRoBERTaCLM 6. Shliazhko, O., Fenogenova, A., Tikhonova, M., Mikhailov, V., Kozlova, A., & Shavrina, T. (2022). MGPT: Few-Shot Learners Go Multilingual. ArXiv. /abs/2204.07580 ### How to Cite this work: ### Cite As: ```bibtex @misc{myanmarsar-gpt, author = {{Sa Phyo Thu Htet}}, title = {Myanmarsar GPT}, url = {https://huggingface.co/Simbolo-Servicio/Myanmarsar-GPT}, urldate = {2024-1-09}, date = {2024-1-09} } ```
{}
task
[ "TRANSLATION" ]
44,267
fine-tuned/stackoverflow-c-64-24
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Coding", "Programming", "Development", "Tutorials", "Software", "custom_code", "en", "dataset:fine-tuned/stackoverflow-c-64-24", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-15T11:59:04Z
2024-05-15T13:10:24+00:00
6
0
--- datasets: - fine-tuned/stackoverflow-c-64-24 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Coding - Programming - Development - Tutorials - Software --- This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-code**](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) designed for the following use case: coding tutorials search engine ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/stackoverflow-c-64-24', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
null
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-code**](https://huggingface.co/jinaai/jina-embeddings-v2-base-code) designed for the following use case: coding tutorials search engine ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/stackoverflow-c-64-24', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
{"datasets": ["fine-tuned/stackoverflow-c-64-24", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Coding", "Programming", "Development", "Tutorials", "Software"]}
task
[ "TEXT_CLASSIFICATION" ]
44,268
pinzhenchen/sft-lora-fi-pythia-2b8
pinzhenchen
null
[ "generation", "question answering", "instruction tuning", "fi", "arxiv:2309.08958", "license:cc-by-nc-4.0", "region:us" ]
2024-03-05T23:52:55Z
2024-03-05T23:52:58+00:00
0
0
--- language: - fi license: cc-by-nc-4.0 tags: - generation - question answering - instruction tuning --- ### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
null
Non_BioNLP
### Model Description This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable. * [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main) * [Paper](https://arxiv.org/abs/2309.08958) #### Instruction tuning details * Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) * Instruction tuning language: Finnish * Training method: LoRA. * LoRA details: rank=8, alpha=16, target modules={key, query, value}. * Best checkpoint: best cross-entropy on a validation set, trained for 5 epochs. * Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data). #### Usage The model checkpoint should be loaded with the base model together using `transformers` and `peft` libraries. Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/loraft) for inference and training instructions. #### Citation ``` @inproceedings{chen-etal-2024-monolingual, title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}", author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield", year="2024", booktitle = "Findings of the Association for Computational Linguistics: EACL 2024", } ```
{"language": ["fi"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
task
[ "QUESTION_ANSWERING" ]
44,269
Techbitforge/dolphin-2.0-mixtral-7b-GGUF
Techbitforge
text-classification
[ "transformers", "gguf", "text-classification", "pytorch", "endpoints_compatible", "region:us" ]
2025-01-18T13:35:39Z
2025-01-18T13:53:09+00:00
13
0
--- library_name: transformers tags: - text-classification - pytorch --- # Model Name This is a BERT-based model fine-tuned for text classification.
null
Non_BioNLP
# Model Name This is a BERT-based model fine-tuned for text classification.
{"library_name": "transformers", "tags": ["text-classification", "pytorch"]}
task
[ "TEXT_CLASSIFICATION" ]
44,270
mini1013/master_cate_bt4_test
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:mini1013/master_domain", "base_model:finetune:mini1013/master_domain", "model-index", "region:us" ]
2024-12-13T17:06:35Z
2024-12-16T01:48:13+00:00
6
0
--- base_model: mini1013/master_domain library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: '[당일출고/백화점정품] 나스 래디언트 크리미 컨실러 6ml / 바닐라 바닐라 에스엠(SM)월드' - text: '[갤러리아] [수분 피팅 프라이머] 프로텍션 SPF 50 PA+++(한화갤러리아㈜ 광교점) 프로텍션 SPF 50 PA+++ 한화갤러리아(주)' - text: '[빌리프] [24MS]시카 밤 쿠션 핑크 베이지 기본 주식회사 인터파크커머스' - text: (백화) 오휘 24RN 얼티밋 커버 메쉬 쿠션 1호 383007 옵션없음 펀펀몰 - text: 나스 래디언스 프라이머 30ml(SPF35) 옵션없음 블루밍컴퍼니 inference: true model-index: - name: SetFit with mini1013/master_domain results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.7155172413793104 name: Accuracy --- # SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 7 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0.0 | <ul><li>'콜라겐 비비크림 50g 23호 옵션없음 심완태'</li><li>'본체청정 물광 커버력 좋은 재생 톤업 bb 비비 크림 연 퍼펙트 매직 50ml 옵션없음 에테르'</li><li>'빈토르테 미네랄 CC크림 자외선차단 SPF50+ 30g 옵션없음 토스토'</li></ul> | | 3.0 | <ul><li>'바비브라운 코렉터 1.4g 피치 비스크 호이컴퍼니'</li><li>'더샘 커버 퍼펙션 트리플 팟 컨실러 5colors 04 톤업 베이지 주식회사 더샘인터내셔날'</li><li>'티핏 tfit 커버 업 프로 컨실러 15G 03 쿨 티핏클래스 주식회사'</li></ul> | | 1.0 | <ul><li>'누즈 케어 톤업 30ml(SPF50+) 옵션없음 달토끼네멋진마켓'</li><li>'MAC 맥 스트롭 크림 50ml 피치라이트 호이컴퍼니'</li><li>'더후 공진향 미 럭셔리 선베이스 45ml33881531 옵션없음 씨플랩몰'</li></ul> | | 5.0 | <ul><li>'에이지투웨니스 벨벳 래스팅 팩트 14g + 14g(리필, SPF50+) 미디움베이지 위브로5'</li><li>'메리쏘드 릴커버 멜팅팩트 본품 11g + 리필 11g +퍼프2개 내추럴베이지(본품+리필)+퍼프2개 주식회사 벨라솔레'</li><li>'퓌 쿠션 스웨이드 15g(SPF50+) 누드스웨이드(03) 강원상회'</li></ul> | | 4.0 | <ul><li>'쥬리아 루나리스 실키 핏 스킨카바 23호리필내장 옵션없음 에테르노'</li><li>'Almay 프레스드 파우더 올 세트 노 샤인, 마이 베스트 라이트, [100] 0.20 oz 옵션없음 케이피스토어'</li><li>'철벽보습커버 21호 리필내장 쥬얼성분배합 투웨이케익 옵션없음 후니후니003'</li></ul> | | 6.0 | <ul><li>'VDL 루미레이어 프라이머 30ml 옵션없음 페퍼파우더'</li><li>'어바웃톤 블러 래스팅 스틱 프라이머 10g AT.블러 래스팅 스틱 프라이머 (주)삐아'</li><li>'로라 메르시에 퓨어 캔버스 프라이머 25ml - 트래블 사이즈 하이드레이팅 고온누리'</li></ul> | | 2.0 | <ul><li>'후 공진향 미 럭셔리 비비 스페셜 세트 267578 옵션없음 펀펀마켓'</li><li>'케이트 리얼 커버 리퀴드 파운데이션 세미 매트 + 스틱컨실러 A 세트 케이트'</li><li>'커버력높은 쿠션팩트 승무원팩트 본품+리필 or 광채CC크림 2종세트 SPF 50+ 뷰디아니'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.7155 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_bt4_test") # Run inference preds = model("나스 래디언스 프라이머 30ml(SPF35) 옵션없음 블루밍컴퍼니") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 5 | 9.7872 | 19 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 19 | | 1.0 | 21 | | 2.0 | 10 | | 3.0 | 19 | | 4.0 | 28 | | 5.0 | 23 | | 6.0 | 21 | ### Training Hyperparameters - batch_size: (512, 512) - num_epochs: (50, 50) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 60 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0588 | 1 | 0.499 | - | | 2.9412 | 50 | 0.3295 | - | | 5.8824 | 100 | 0.0469 | - | | 8.8235 | 150 | 0.0217 | - | | 11.7647 | 200 | 0.0013 | - | | 14.7059 | 250 | 0.0001 | - | | 17.6471 | 300 | 0.0001 | - | | 20.5882 | 350 | 0.0 | - | | 23.5294 | 400 | 0.0 | - | | 26.4706 | 450 | 0.0 | - | | 29.4118 | 500 | 0.0 | - | | 32.3529 | 550 | 0.0 | - | | 35.2941 | 600 | 0.0 | - | | 38.2353 | 650 | 0.0 | - | | 41.1765 | 700 | 0.0 | - | | 44.1176 | 750 | 0.0 | - | | 47.0588 | 800 | 0.0 | - | | 50.0 | 850 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with mini1013/master_domain This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 7 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0.0 | <ul><li>'콜라겐 비비크림 50g 23호 옵션없음 심완태'</li><li>'본체청정 물광 커버력 좋은 재생 톤업 bb 비비 크림 연 퍼펙트 매직 50ml 옵션없음 에테르'</li><li>'빈토르테 미네랄 CC크림 자외선차단 SPF50+ 30g 옵션없음 토스토'</li></ul> | | 3.0 | <ul><li>'바비브라운 코렉터 1.4g 피치 비스크 호이컴퍼니'</li><li>'더샘 커버 퍼펙션 트리플 팟 컨실러 5colors 04 톤업 베이지 주식회사 더샘인터내셔날'</li><li>'티핏 tfit 커버 업 프로 컨실러 15G 03 쿨 티핏클래스 주식회사'</li></ul> | | 1.0 | <ul><li>'누즈 케어 톤업 30ml(SPF50+) 옵션없음 달토끼네멋진마켓'</li><li>'MAC 맥 스트롭 크림 50ml 피치라이트 호이컴퍼니'</li><li>'더후 공진향 미 럭셔리 선베이스 45ml33881531 옵션없음 씨플랩몰'</li></ul> | | 5.0 | <ul><li>'에이지투웨니스 벨벳 래스팅 팩트 14g + 14g(리필, SPF50+) 미디움베이지 위브로5'</li><li>'메리쏘드 릴커버 멜팅팩트 본품 11g + 리필 11g +퍼프2개 내추럴베이지(본품+리필)+퍼프2개 주식회사 벨라솔레'</li><li>'퓌 쿠션 스웨이드 15g(SPF50+) 누드스웨이드(03) 강원상회'</li></ul> | | 4.0 | <ul><li>'쥬리아 루나리스 실키 핏 스킨카바 23호리필내장 옵션없음 에테르노'</li><li>'Almay 프레스드 파우더 올 세트 노 샤인, 마이 베스트 라이트, [100] 0.20 oz 옵션없음 케이피스토어'</li><li>'철벽보습커버 21호 리필내장 쥬얼성분배합 투웨이케익 옵션없음 후니후니003'</li></ul> | | 6.0 | <ul><li>'VDL 루미레이어 프라이머 30ml 옵션없음 페퍼파우더'</li><li>'어바웃톤 블러 래스팅 스틱 프라이머 10g AT.블러 래스팅 스틱 프라이머 (주)삐아'</li><li>'로라 메르시에 퓨어 캔버스 프라이머 25ml - 트래블 사이즈 하이드레이팅 고온누리'</li></ul> | | 2.0 | <ul><li>'후 공진향 미 럭셔리 비비 스페셜 세트 267578 옵션없음 펀펀마켓'</li><li>'케이트 리얼 커버 리퀴드 파운데이션 세미 매트 + 스틱컨실러 A 세트 케이트'</li><li>'커버력높은 쿠션팩트 승무원팩트 본품+리필 or 광채CC크림 2종세트 SPF 50+ 뷰디아니'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.7155 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("mini1013/master_cate_bt4_test") # Run inference preds = model("나스 래디언스 프라이머 30ml(SPF35) 옵션없음 블루밍컴퍼니") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 5 | 9.7872 | 19 | | Label | Training Sample Count | |:------|:----------------------| | 0.0 | 19 | | 1.0 | 21 | | 2.0 | 10 | | 3.0 | 19 | | 4.0 | 28 | | 5.0 | 23 | | 6.0 | 21 | ### Training Hyperparameters - batch_size: (512, 512) - num_epochs: (50, 50) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 60 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:----:|:-------------:|:---------------:| | 0.0588 | 1 | 0.499 | - | | 2.9412 | 50 | 0.3295 | - | | 5.8824 | 100 | 0.0469 | - | | 8.8235 | 150 | 0.0217 | - | | 11.7647 | 200 | 0.0013 | - | | 14.7059 | 250 | 0.0001 | - | | 17.6471 | 300 | 0.0001 | - | | 20.5882 | 350 | 0.0 | - | | 23.5294 | 400 | 0.0 | - | | 26.4706 | 450 | 0.0 | - | | 29.4118 | 500 | 0.0 | - | | 32.3529 | 550 | 0.0 | - | | 35.2941 | 600 | 0.0 | - | | 38.2353 | 650 | 0.0 | - | | 41.1765 | 700 | 0.0 | - | | 44.1176 | 750 | 0.0 | - | | 47.0588 | 800 | 0.0 | - | | 50.0 | 850 | 0.0 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.0 - Sentence Transformers: 3.3.1 - Transformers: 4.44.2 - PyTorch: 2.2.0a0+81ea7a4 - Datasets: 3.2.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "mini1013/master_domain", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "[당일출고/백화점정품] 나스 래디언트 크리미 컨실러 6ml / 바닐라 바닐라 에스엠(SM)월드"}, {"text": "[갤러리아] [수분 피팅 프라이머] 프로텍션 SPF 50 PA+++(한화갤러리아㈜ 광교점) 프로텍션 SPF 50 PA+++ 한화갤러리아(주)"}, {"text": "[빌리프] [24MS]시카 밤 쿠션 핑크 베이지 기본 주식회사 인터파크커머스"}, {"text": "(백화) 오휘 24RN 얼티밋 커버 메쉬 쿠션 1호 383007 옵션없음 펀펀몰"}, {"text": "나스 래디언스 프라이머 30ml(SPF35) 옵션없음 블루밍컴퍼니"}], "inference": true, "model-index": [{"name": "SetFit with mini1013/master_domain", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7155172413793104, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,271
dltsj/mt5-small-finetuned-on-mT5-lcsts
dltsj
summarization
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-16T07:27:53Z
2023-04-19T14:49:12+00:00
29
1
--- license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: mt5-small-finetuned-on-mT5-lcsts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-on-mT5-lcsts ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 3.6049 | 1.0 | 47451 | 3.0569 | 10.8382 | 1.6761 | 10.7971 | 10.7943 | | 3.354 | 2.0 | 94902 | 2.9896 | 11.2666 | 1.816 | 11.2222 | 11.2216 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-on-mT5-lcsts ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 3.6049 | 1.0 | 47451 | 3.0569 | 10.8382 | 1.6761 | 10.7971 | 10.7943 | | 3.354 | 2.0 | 94902 | 2.9896 | 11.2666 | 1.816 | 11.2222 | 11.2216 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-on-mT5-lcsts", "results": []}]}
task
[ "SUMMARIZATION" ]
44,272
facebook/fasttext-uz-vectors
facebook
feature-extraction
[ "fasttext", "feature-extraction", "uz", "arxiv:1607.04606", "arxiv:1802.06893", "arxiv:1607.01759", "arxiv:1612.03651", "license:cc-by-sa-3.0", "region:us" ]
2023-03-24T07:27:15Z
2023-06-03T22:17:18+00:00
0
1
--- language: uz library_name: fasttext license: cc-by-sa-3.0 tags: - feature-extraction widget: - text: apple example_title: apple --- # fastText (Uzbek) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-uz-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. ```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{'e}gou, H{'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
null
Non_BioNLP
# fastText (Uzbek) fastText is an open-source, free, lightweight library that allows users to learn text representations and text classifiers. It works on standard, generic hardware. Models can later be reduced in size to even fit on mobile devices. It was introduced in [this paper](https://arxiv.org/abs/1607.04606). The official website can be found [here](https://fasttext.cc/). ## Model description fastText is a library for efficient learning of word representations and sentence classification. fastText is designed to be simple to use for developers, domain experts, and students. It's dedicated to text classification and learning word representations, and was designed to allow for quick model iteration and refinement without specialized hardware. fastText models can be trained on more than a billion words on any multicore CPU in less than a few minutes. It includes pre-trained models learned on Wikipedia and in over 157 different languages. fastText can be used as a command line, linked to a C++ application, or used as a library for use cases from experimentation and prototyping to production. ## Intended uses & limitations You can use pre-trained word vectors for text classification or language identification. See the [tutorials](https://fasttext.cc/docs/en/supervised-tutorial.html) and [resources](https://fasttext.cc/docs/en/english-vectors.html) on its official website to look for tasks that interest you. ### How to use Here is how to load and use a pre-trained vectors ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-uz-vectors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.words ['the', 'of', 'and', 'to', 'in', 'a', 'that', 'is', ...] >>> len(model.words) 145940 >>> model['bread'] array([ 4.89417791e-01, 1.60882145e-01, -2.25947708e-01, -2.94273376e-01, -1.04577184e-01, 1.17962055e-01, 1.34821936e-01, -2.41778508e-01, ...]) ``` Here is how to use this model to query nearest neighbors of an English word vector: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-en-nearest-neighbors", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.get_nearest_neighbors("bread", k=5) [(0.5641006231307983, 'butter'), (0.48875734210014343, 'loaf'), (0.4491206705570221, 'eat'), (0.42444291710853577, 'food'), (0.4229326844215393, 'cheese')] ``` Here is how to use this model to detect the language of a given text: ```python >>> import fasttext >>> from huggingface_hub import hf_hub_download >>> model_path = hf_hub_download(repo_id="facebook/fasttext-language-identification", filename="model.bin") >>> model = fasttext.load_model(model_path) >>> model.predict("Hello, world!") (('__label__eng_Latn',), array([0.81148803])) >>> model.predict("Hello, world!", k=5) (('__label__eng_Latn', '__label__vie_Latn', '__label__nld_Latn', '__label__pol_Latn', '__label__deu_Latn'), array([0.61224753, 0.21323682, 0.09696738, 0.01359863, 0.01319415])) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. Cosine similarity can be used to measure the similarity between two different word vectors. If two two vectors are identical, the cosine similarity will be 1. For two completely unrelated vectors, the value will be 0. If two vectors have an opposite relationship, the value will be -1. ```python >>> import numpy as np >>> def cosine_similarity(word1, word2): >>> return np.dot(model[word1], model[word2]) / (np.linalg.norm(model[word1]) * np.linalg.norm(model[word2])) >>> cosine_similarity("man", "boy") 0.061653383 >>> cosine_similarity("man", "ceo") 0.11989131 >>> cosine_similarity("woman", "ceo") -0.08834904 ``` ## Training data Pre-trained word vectors for 157 languages were trained on [Common Crawl](http://commoncrawl.org/) and [Wikipedia](https://www.wikipedia.org/) using fastText. These models were trained using CBOW with position-weights, in dimension 300, with character n-grams of length 5, a window of size 5 and 10 negatives. We also distribute three new word analogy datasets, for French, Hindi and Polish. ## Training procedure ### Tokenization We used the [Stanford word segmenter](https://nlp.stanford.edu/software/segmenter.html) for Chinese, [Mecab](http://taku910.github.io/mecab/) for Japanese and [UETsegmenter](https://github.com/phongnt570/UETsegmenter) for Vietnamese. For languages using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the [Europarl](https://www.statmt.org/europarl/) preprocessing tools. For the remaining languages, we used the ICU tokenizer. More information about the training of these models can be found in the article [Learning Word Vectors for 157 Languages](https://arxiv.org/abs/1802.06893). ### License The word vectors are distributed under the [*Creative Commons Attribution-Share-Alike License 3.0*](https://creativecommons.org/licenses/by-sa/3.0/). ### Evaluation datasets The analogy evaluation datasets described in the paper are available here: [French](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-fr.txt), [Hindi](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-hi.txt), [Polish](https://dl.fbaipublicfiles.com/fasttext/word-analogies/questions-words-pl.txt). ### BibTeX entry and citation info Please cite [1] if using this code for learning word representations or [2] if using for text classification. [1] P. Bojanowski\*, E. Grave\*, A. Joulin, T. Mikolov, [*Enriching Word Vectors with Subword Information*](https://arxiv.org/abs/1607.04606) ```markup @article{bojanowski2016enriching, title={Enriching Word Vectors with Subword Information}, author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.04606}, year={2016} } ``` [2] A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, [*Bag of Tricks for Efficient Text Classification*](https://arxiv.org/abs/1607.01759) ```markup @article{joulin2016bag, title={Bag of Tricks for Efficient Text Classification}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Mikolov, Tomas}, journal={arXiv preprint arXiv:1607.01759}, year={2016} } ``` [3] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, T. Mikolov, [*FastText.zip: Compressing text classification models*](https://arxiv.org/abs/1612.03651) ```markup @article{joulin2016fasttext, title={FastText.zip: Compressing text classification models}, author={Joulin, Armand and Grave, Edouard and Bojanowski, Piotr and Douze, Matthijs and J{'e}gou, H{'e}rve and Mikolov, Tomas}, journal={arXiv preprint arXiv:1612.03651}, year={2016} } ``` If you use these word vectors, please cite the following paper: [4] E. Grave\*, P. Bojanowski\*, P. Gupta, A. Joulin, T. Mikolov, [*Learning Word Vectors for 157 Languages*](https://arxiv.org/abs/1802.06893) ```markup @inproceedings{grave2018learning, title={Learning Word Vectors for 157 Languages}, author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas}, booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} } ``` (\* These authors contributed equally.)
{"language": "uz", "library_name": "fasttext", "license": "cc-by-sa-3.0", "tags": ["feature-extraction"], "widget": [{"text": "apple", "example_title": "apple"}]}
task
[ "TEXT_CLASSIFICATION" ]
44,273
Davlan/xlm-roberta-base-finetuned-naija
Davlan
fill-mask
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-15T21:33:37+00:00
128
0
--- {} --- Hugging Face's logo --- language: pcm datasets: --- # xlm-roberta-base-finetuned-naija ## Model description **xlm-roberta-base-finetuned-naija** is a **Nigerian Pidgin RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Nigerian Pidgin language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Nigerian Pidgin corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-naija') >>> unmasker("Another attack on ambulance happen for Koforidua in March <mask> year where robbers kill Ambulance driver") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| XLM-R F1 | pcm_roberta F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.26 | 90.00 ### BibTeX entry and citation info By David Adelani ``` ```
null
Non_BioNLP
Hugging Face's logo --- language: pcm datasets: --- # xlm-roberta-base-finetuned-naija ## Model description **xlm-roberta-base-finetuned-naija** is a **Nigerian Pidgin RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Nigerian Pidgin language texts. It provides **better performance** than the XLM-RoBERTa on named entity recognition datasets. Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Nigerian Pidgin corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-naija') >>> unmasker("Another attack on ambulance happen for Koforidua in March <mask> year where robbers kill Ambulance driver") ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin) ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| XLM-R F1 | pcm_roberta F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.26 | 90.00 ### BibTeX entry and citation info By David Adelani ``` ```
{}
task
[ "NAMED_ENTITY_RECOGNITION" ]
44,274
unity/sentis-blaze-pose
unity
object-detection
[ "unity-sentis", "onnx", "object-detection", "license:apache-2.0", "region:us" ]
2024-09-16T14:25:42Z
2024-09-16T14:52:08+00:00
0
3
--- library_name: unity-sentis license: apache-2.0 pipeline_tag: object-detection --- # BlazePose in Sentis BlazePose is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework. ![](./images/pose.jpg) The BlazePose models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three variants of the landmarker model (lite, full, heavy) are provided which can be interchanged. The larger models may provide more accurate results but take longer to run. ## Functional API The BlazePose detector model takes a (1, 224, 224, 3) input image tensor and outputs a (1, 2254, 12) boxes tensor and a (1, 2254, 1) scores tensor. Each of the 2254 boxes consists of: - [x position, y position, width, height] for the head bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file. - [x position, y position] for each of 4 body keypoints relative to the anchor position. We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score. ``` var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1) var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze(); var selectedBoxes = Functional.IndexSelect(rawBoxes, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 12) var selectedScores = Functional.IndexSelect(detectionScores, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 1) ``` The BlazePose landmarker model takes a (1, 256, 256, 3) input image tensor cropped to the detected body and outputs a (1, 165) tensor consisting of the x, y, and z coordinates and visibility and presence for each of 33 pose keypoints. We use this model without adaptation. ## Model inference We use the dimensions of the texture to set up an affine transformation matrix to go from the 224x224 tensor coordinates to the image coordinates. We then fill the input tensor using a compute shader with this affine transformation, points outside the image will correspond to zeros in the input tensor. ``` var size = Mathf.Max(texture.width, texture.height); // The affine transformation matrix to go from tensor coordinates to image coordinates var scale = size / (float)detectorInputSize; var M = BlazeUtils.mul(BlazeUtils.TranslationMatrix(0.5f * (new Vector2(texture.width, texture.height) + new Vector2(-size, size))), BlazeUtils.ScaleMatrix(new Vector2(scale, -scale))); BlazeUtils.SampleImageAffine(texture, m_DetectorInput, M); m_PoseDetectorWorker.Schedule(m_DetectorInput); ``` Execution is scheduled using an [Awaitable](https://docs.unity3d.com/6000.0/Documentation/ScriptReference/Awaitable.html) and the output tensors are downloaded and awaited. This frees up the main thread while the GPU computation and download takes place. ``` var outputIdxAwaitable = (m_PoseDetectorWorker.PeekOutput(0) as Tensor<int>).ReadbackAndCloneAsync(); var outputScoreAwaitable = (m_PoseDetectorWorker.PeekOutput(1) as Tensor<float>).ReadbackAndCloneAsync(); var outputBoxAwaitable = (m_PoseDetectorWorker.PeekOutput(2) as Tensor<float>).ReadbackAndCloneAsync(); using var outputIdx = await outputIdxAwaitable; using var outputScore = await outputScoreAwaitable; using var outputBox = await outputBoxAwaitable; ``` The output tensors of the detector model are now on the CPU and can be read. If the score passes our score threshold, we use the keypoint positions to set up a second affine transformation. This is calculated so that the body will be centred, rotated and scaled to fill the landmarker input tensor. We use the box and keypoint positions to set the transforms on bounding box and circle for visualization. ![](./images/pose_landmarker_input.png) ``` var face_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 0], outputBox[0, 0, 1])); var faceTopRight_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 0] + 0.5f * outputBox[0, 0, 2], outputBox[0, 0, 1] + 0.5f * outputBox[0, 0, 3])); var kp1_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 0 + 0], outputBox[0, 0, 4 + 2 * 0 + 1])); var kp2_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 1 + 0], outputBox[0, 0, 4 + 2 * 1 + 1])); var delta_ImageSpace = kp2_ImageSpace - kp1_ImageSpace; var dscale = 1.25f; var radius = dscale * math.length(delta_ImageSpace); var theta = math.atan2(delta_ImageSpace.y, delta_ImageSpace.x); var origin2 = new float2(0.5f * landmarkerInputSize, 0.5f * landmarkerInputSize); var scale2 = radius / (0.5f * landmarkerInputSize); var M2 = BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.TranslationMatrix(kp1_ImageSpace), BlazeUtils.ScaleMatrix(new float2(scale2, -scale2))), BlazeUtils.RotationMatrix(0.5f * Mathf.PI - theta)), BlazeUtils.TranslationMatrix(-origin2)); BlazeUtils.SampleImageAffine(texture, m_LandmarkerInput, M2); var boxSize = 2f * (faceTopRight_ImageSpace - face_ImageSpace); posePreview.SetBoundingBox(true, ImageToWorld(face_ImageSpace), boxSize / m_TextureHeight); posePreview.SetBoundingCircle(true, ImageToWorld(kp1_ImageSpace), radius / m_TextureHeight); m_PoseLandmarkerWorker.Schedule(m_LandmarkerInput); ``` The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization. ## WebGPU Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493). ![](./images/pose_webgpu.png)
null
Non_BioNLP
# BlazePose in Sentis BlazePose is a fast, light-weight hand detector from Google Research. Pretrained models are available as part of Google's [MediaPipe](https://ai.google.dev/edge/mediapipe/solutions/vision/hand_landmarker) framework. ![](./images/pose.jpg) The BlazePose models have been converted from TFLite to ONNX for use in Sentis using [tf2onnx](https://github.com/onnx/tensorflow-onnx) with the default export parameters. Three variants of the landmarker model (lite, full, heavy) are provided which can be interchanged. The larger models may provide more accurate results but take longer to run. ## Functional API The BlazePose detector model takes a (1, 224, 224, 3) input image tensor and outputs a (1, 2254, 12) boxes tensor and a (1, 2254, 1) scores tensor. Each of the 2254 boxes consists of: - [x position, y position, width, height] for the head bounding box. The position is relative to the anchor position for the given index, these are precalculated and loaded from a csv file. - [x position, y position] for each of 4 body keypoints relative to the anchor position. We adapt the model using the Sentis functional API to apply arg max to filter the box with the highest score. ``` var detectionScores = ScoreFiltering(rawScores, 100f); // (1, 2254, 1) var bestScoreIndex = Functional.ArgMax(rawScores, 1).Squeeze(); var selectedBoxes = Functional.IndexSelect(rawBoxes, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 12) var selectedScores = Functional.IndexSelect(detectionScores, 1, bestScoreIndex).Unsqueeze(0); // (1, 1, 1) ``` The BlazePose landmarker model takes a (1, 256, 256, 3) input image tensor cropped to the detected body and outputs a (1, 165) tensor consisting of the x, y, and z coordinates and visibility and presence for each of 33 pose keypoints. We use this model without adaptation. ## Model inference We use the dimensions of the texture to set up an affine transformation matrix to go from the 224x224 tensor coordinates to the image coordinates. We then fill the input tensor using a compute shader with this affine transformation, points outside the image will correspond to zeros in the input tensor. ``` var size = Mathf.Max(texture.width, texture.height); // The affine transformation matrix to go from tensor coordinates to image coordinates var scale = size / (float)detectorInputSize; var M = BlazeUtils.mul(BlazeUtils.TranslationMatrix(0.5f * (new Vector2(texture.width, texture.height) + new Vector2(-size, size))), BlazeUtils.ScaleMatrix(new Vector2(scale, -scale))); BlazeUtils.SampleImageAffine(texture, m_DetectorInput, M); m_PoseDetectorWorker.Schedule(m_DetectorInput); ``` Execution is scheduled using an [Awaitable](https://docs.unity3d.com/6000.0/Documentation/ScriptReference/Awaitable.html) and the output tensors are downloaded and awaited. This frees up the main thread while the GPU computation and download takes place. ``` var outputIdxAwaitable = (m_PoseDetectorWorker.PeekOutput(0) as Tensor<int>).ReadbackAndCloneAsync(); var outputScoreAwaitable = (m_PoseDetectorWorker.PeekOutput(1) as Tensor<float>).ReadbackAndCloneAsync(); var outputBoxAwaitable = (m_PoseDetectorWorker.PeekOutput(2) as Tensor<float>).ReadbackAndCloneAsync(); using var outputIdx = await outputIdxAwaitable; using var outputScore = await outputScoreAwaitable; using var outputBox = await outputBoxAwaitable; ``` The output tensors of the detector model are now on the CPU and can be read. If the score passes our score threshold, we use the keypoint positions to set up a second affine transformation. This is calculated so that the body will be centred, rotated and scaled to fill the landmarker input tensor. We use the box and keypoint positions to set the transforms on bounding box and circle for visualization. ![](./images/pose_landmarker_input.png) ``` var face_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 0], outputBox[0, 0, 1])); var faceTopRight_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 0] + 0.5f * outputBox[0, 0, 2], outputBox[0, 0, 1] + 0.5f * outputBox[0, 0, 3])); var kp1_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 0 + 0], outputBox[0, 0, 4 + 2 * 0 + 1])); var kp2_ImageSpace = BlazeUtils.mul(M, anchorPosition + new float2(outputBox[0, 0, 4 + 2 * 1 + 0], outputBox[0, 0, 4 + 2 * 1 + 1])); var delta_ImageSpace = kp2_ImageSpace - kp1_ImageSpace; var dscale = 1.25f; var radius = dscale * math.length(delta_ImageSpace); var theta = math.atan2(delta_ImageSpace.y, delta_ImageSpace.x); var origin2 = new float2(0.5f * landmarkerInputSize, 0.5f * landmarkerInputSize); var scale2 = radius / (0.5f * landmarkerInputSize); var M2 = BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.mul(BlazeUtils.TranslationMatrix(kp1_ImageSpace), BlazeUtils.ScaleMatrix(new float2(scale2, -scale2))), BlazeUtils.RotationMatrix(0.5f * Mathf.PI - theta)), BlazeUtils.TranslationMatrix(-origin2)); BlazeUtils.SampleImageAffine(texture, m_LandmarkerInput, M2); var boxSize = 2f * (faceTopRight_ImageSpace - face_ImageSpace); posePreview.SetBoundingBox(true, ImageToWorld(face_ImageSpace), boxSize / m_TextureHeight); posePreview.SetBoundingCircle(true, ImageToWorld(kp1_ImageSpace), radius / m_TextureHeight); m_PoseLandmarkerWorker.Schedule(m_LandmarkerInput); ``` The output tensor of the landmarker model is asynchronously downloaded and once the values are on the CPU we use them together with the affine transformation matrix to set the transforms on the keypoints for visualization. ## WebGPU Unity 6 supports access to the WebGPU backend in early access. Sentis has full support for running models on the web using the WebGPU backend. Discover how to gain early access and test WebGPU in our [graphics forum](https://discussions.unity.com/t/early-access-to-the-new-webgpu-backend-in-unity-2023-3/933493). ![](./images/pose_webgpu.png)
{"library_name": "unity-sentis", "license": "apache-2.0", "pipeline_tag": "object-detection"}
task
[ "TRANSLATION" ]
44,275
Lino-ver2/distilbert-base-uncased-finetuned-clinc
Lino-ver2
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-27T06:00:37Z
2023-09-27T08:37:05+00:00
7
0
--- datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos type: clinc_oos config: plus split: validation args: plus metrics: - type: accuracy value: 0.9190322580645162 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7746 - Accuracy: 0.9190 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2778 | 0.7390 | | 3.7796 | 2.0 | 636 | 1.8683 | 0.8326 | | 3.7796 | 3.0 | 954 | 1.1545 | 0.8955 | | 1.6829 | 4.0 | 1272 | 0.8570 | 0.91 | | 0.899 | 5.0 | 1590 | 0.7746 | 0.9190 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7746 - Accuracy: 0.9190 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2778 | 0.7390 | | 3.7796 | 2.0 | 636 | 1.8683 | 0.8326 | | 3.7796 | 3.0 | 954 | 1.1545 | 0.8955 | | 1.6829 | 4.0 | 1272 | 0.8570 | 0.91 | | 0.899 | 5.0 | 1590 | 0.7746 | 0.9190 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu117 - Datasets 2.14.4 - Tokenizers 0.13.3
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9190322580645162, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,276
sobamchan/bert-base-uncased-mean-200
sobamchan
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss", "en", "dataset:sentence-transformers/all-nli", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-02-13T15:30:34Z
2025-02-13T15:31:16+00:00
213
0
--- base_model: google-bert/bert-base-uncased datasets: - sentence-transformers/all-nli language: - en library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:557850 - loss:MultipleNegativesRankingLoss widget: - source_sentence: A man is jumping unto his filthy bed. sentences: - A young male is looking at a newspaper while 2 females walks past him. - The bed is dirty. - The man is on the moon. - source_sentence: A carefully balanced male stands on one foot near a clean ocean beach area. sentences: - A man is ouside near the beach. - Three policemen patrol the streets on bikes - A man is sitting on his couch. - source_sentence: The man is wearing a blue shirt. sentences: - Near the trashcan the man stood and smoked - A man in a blue shirt leans on a wall beside a road with a blue van and red car with water in the background. - A man in a black shirt is playing a guitar. - source_sentence: The girls are outdoors. sentences: - Two girls riding on an amusement part ride. - a guy laughs while doing laundry - Three girls are standing together in a room, one is listening, one is writing on a wall and the third is talking to them. - source_sentence: A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling. sentences: - A worker is looking out of a manhole. - A man is giving a presentation. - The workers are both inside the manhole. --- # SentenceTransformer based on google-bert/bert-base-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.', 'A worker is looking out of a manhole.', 'The workers are both inside the manhole.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 557,850 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> | | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 6,584 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `learning_rate`: 1e-05 - `warmup_ratio`: 0.1 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0011 | 5 | - | 2.7554 | | 0.0023 | 10 | - | 2.7506 | | 0.0034 | 15 | - | 2.7424 | | 0.0046 | 20 | - | 2.7309 | | 0.0057 | 25 | - | 2.7160 | | 0.0069 | 30 | - | 2.6975 | | 0.0080 | 35 | - | 2.6757 | | 0.0092 | 40 | - | 2.6502 | | 0.0103 | 45 | - | 2.6214 | | 0.0115 | 50 | - | 2.5893 | | 0.0126 | 55 | - | 2.5538 | | 0.0138 | 60 | - | 2.5145 | | 0.0149 | 65 | - | 2.4726 | | 0.0161 | 70 | - | 2.4282 | | 0.0172 | 75 | - | 2.3795 | | 0.0184 | 80 | - | 2.3272 | | 0.0195 | 85 | - | 2.2712 | | 0.0206 | 90 | - | 2.2120 | | 0.0218 | 95 | - | 2.1501 | | 0.0229 | 100 | 3.6197 | 2.0866 | | 0.0241 | 105 | - | 2.0223 | | 0.0252 | 110 | - | 1.9571 | | 0.0264 | 115 | - | 1.8907 | | 0.0275 | 120 | - | 1.8239 | | 0.0287 | 125 | - | 1.7583 | | 0.0298 | 130 | - | 1.6938 | | 0.0310 | 135 | - | 1.6316 | | 0.0321 | 140 | - | 1.5719 | | 0.0333 | 145 | - | 1.5148 | | 0.0344 | 150 | - | 1.4598 | | 0.0356 | 155 | - | 1.4081 | | 0.0367 | 160 | - | 1.3612 | | 0.0379 | 165 | - | 1.3182 | | 0.0390 | 170 | - | 1.2803 | | 0.0401 | 175 | - | 1.2463 | | 0.0413 | 180 | - | 1.2160 | | 0.0424 | 185 | - | 1.1895 | | 0.0436 | 190 | - | 1.1654 | | 0.0447 | 195 | - | 1.1435 | | 0.0459 | 200 | 2.292 | 1.1240 | ### Framework Versions - Python: 3.12.8 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.2.0+cu121 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on google-bert/bert-base-uncased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) - **Language:** en <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.', 'A worker is looking out of a manhole.', 'The workers are both inside the manhole.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 557,850 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------| | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> | | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> | | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### all-nli * Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab) * Size: 6,584 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> | * Samples: | anchor | positive | negative | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------| | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> | | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> | | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `learning_rate`: 1e-05 - `warmup_ratio`: 0.1 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 1e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0011 | 5 | - | 2.7554 | | 0.0023 | 10 | - | 2.7506 | | 0.0034 | 15 | - | 2.7424 | | 0.0046 | 20 | - | 2.7309 | | 0.0057 | 25 | - | 2.7160 | | 0.0069 | 30 | - | 2.6975 | | 0.0080 | 35 | - | 2.6757 | | 0.0092 | 40 | - | 2.6502 | | 0.0103 | 45 | - | 2.6214 | | 0.0115 | 50 | - | 2.5893 | | 0.0126 | 55 | - | 2.5538 | | 0.0138 | 60 | - | 2.5145 | | 0.0149 | 65 | - | 2.4726 | | 0.0161 | 70 | - | 2.4282 | | 0.0172 | 75 | - | 2.3795 | | 0.0184 | 80 | - | 2.3272 | | 0.0195 | 85 | - | 2.2712 | | 0.0206 | 90 | - | 2.2120 | | 0.0218 | 95 | - | 2.1501 | | 0.0229 | 100 | 3.6197 | 2.0866 | | 0.0241 | 105 | - | 2.0223 | | 0.0252 | 110 | - | 1.9571 | | 0.0264 | 115 | - | 1.8907 | | 0.0275 | 120 | - | 1.8239 | | 0.0287 | 125 | - | 1.7583 | | 0.0298 | 130 | - | 1.6938 | | 0.0310 | 135 | - | 1.6316 | | 0.0321 | 140 | - | 1.5719 | | 0.0333 | 145 | - | 1.5148 | | 0.0344 | 150 | - | 1.4598 | | 0.0356 | 155 | - | 1.4081 | | 0.0367 | 160 | - | 1.3612 | | 0.0379 | 165 | - | 1.3182 | | 0.0390 | 170 | - | 1.2803 | | 0.0401 | 175 | - | 1.2463 | | 0.0413 | 180 | - | 1.2160 | | 0.0424 | 185 | - | 1.1895 | | 0.0436 | 190 | - | 1.1654 | | 0.0447 | 195 | - | 1.1435 | | 0.0459 | 200 | 2.292 | 1.1240 | ### Framework Versions - Python: 3.12.8 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.2.0+cu121 - Accelerate: 1.3.0 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "google-bert/bert-base-uncased", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "A man is jumping unto his filthy bed.", "sentences": ["A young male is looking at a newspaper while 2 females walks past him.", "The bed is dirty.", "The man is on the moon."]}, {"source_sentence": "A carefully balanced male stands on one foot near a clean ocean beach area.", "sentences": ["A man is ouside near the beach.", "Three policemen patrol the streets on bikes", "A man is sitting on his couch."]}, {"source_sentence": "The man is wearing a blue shirt.", "sentences": ["Near the trashcan the man stood and smoked", "A man in a blue shirt leans on a wall beside a road with a blue van and red car with water in the background.", "A man in a black shirt is playing a guitar."]}, {"source_sentence": "The girls are outdoors.", "sentences": ["Two girls riding on an amusement part ride.", "a guy laughs while doing laundry", "Three girls are standing together in a room, one is listening, one is writing on a wall and the third is talking to them."]}, {"source_sentence": "A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.", "sentences": ["A worker is looking out of a manhole.", "A man is giving a presentation.", "The workers are both inside the manhole."]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,277
danicafisher/dfisher-base-sentence-transformer
danicafisher
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:128", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-09-20T15:39:01Z
2024-09-20T15:39:05+00:00
5
0
--- base_model: sentence-transformers/all-MiniLM-L6-v2 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:128 - loss:MultipleNegativesRankingLoss widget: - source_sentence: What is the title of the publication released by NIST in July 2024 regarding artificial intelligence? sentences: - "NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence\ \ Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n\ \ \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1" - "NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence\ \ Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n\ \ \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1\ \ \n \nJuly 2024 \n \n \n \n \nU.S. Department of Commerce \nGina M. Raimondo,\ \ Secretary \nNational Institute of Standards and Technology \nLaurie E. Locascio,\ \ NIST Director and Under Secretary of Commerce for Standards and Technology" - "37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training\ \ data and verify \ntraining data is not overly homogenous or GAI-produced to\ \ mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \n\ AI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities,\ \ Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12:\ \ Environmental impact and sustainability of AI model training and management\ \ activities – as identified in the MAP \nfunction – are assessed and documented.\ \ \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical\ \ environments when deploying GAI systems. \nDangerous, Violent, or Hateful \n\ Content \nMS-2.12-002 Document anticipated environmental impacts of model development,\ \ \nmaintenance, and deployment in product design decisions. \nEnvironmental \n\ MS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water\ \ \nconsumption) for training, fine tuning, and deploying models: Verify tradeoffs\ \ \nbetween resources used at inference time versus additional resources required\ \ \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon\ \ capture or offset programs for GAI training and \napplications, and address green-washing\ \ concerns. \nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment,\ \ Domain Experts, Operation and Monitoring, TEVV" - source_sentence: What are the four primary considerations relevant to Generative AI (GAI) that the GAI Public Working Group focused on? sentences: - "23 \nMP-1.1-002 \nDetermine and document the expected and acceptable GAI system\ \ context of \nuse in collaboration with socio-cultural and other domain experts,\ \ by assessing: \nAssumptions and limitations; Direct value to the organization;\ \ Intended \noperational environment and observed usage patterns; Potential positive\ \ and \nnegative impacts to individuals, public safety, groups, communities, \n\ organizations, democratic institutions, and the physical environment; Social \n\ norms and expectations. \nHarmful Bias and Homogenization \nMP-1.1-003 \nDocument\ \ risk measurement plans to address identified risks. Plans may \ninclude, as applicable:\ \ Individual and group cognitive biases (e.g., confirmation \nbias, funding bias,\ \ groupthink) for AI Actors involved in the design, \nimplementation, and use\ \ of GAI systems; Known past GAI system incidents and \nfailure modes; In-context\ \ use and foreseeable misuse, abuse, and off-label use; \nOver reliance on quantitative\ \ metrics and methodologies without sufficient \nawareness of their limitations\ \ in the context(s) of use; Standard measurement \nand structured human feedback\ \ approaches; Anticipated human-AI \nconfigurations. \nHuman-AI Configuration; Harmful\ \ \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMP-1.1-004\ \ \nIdentify and document foreseeable illegal uses or applications of the GAI\ \ system \nthat surpass organizational risk tolerances. \nCBRN Information or\ \ Capabilities; \nDangerous, Violent, or Hateful \nContent; Obscene, Degrading,\ \ \nand/or Abusive Content \nAI Actor Tasks: AI Deployment \n \nMAP 1.2: Interdisciplinary\ \ AI Actors, competencies, skills, and capacities for establishing context reflect\ \ demographic diversity and \nbroad domain and user experience expertise, and\ \ their participation is documented. Opportunities for interdisciplinary \ncollaboration\ \ are prioritized. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.2-001 \n\ Establish and empower interdisciplinary teams that reflect a wide range of \ncapabilities,\ \ competencies, demographic groups, domain expertise, educational \nbackgrounds,\ \ lived experiences, professions, and skills across the enterprise to \ninform\ \ and conduct risk measurement and management functions. \nHuman-AI Configuration;\ \ Harmful \nBias and Homogenization \nMP-1.2-002 \nVerify that data or benchmarks\ \ used in risk measurement, and users, \nparticipants, or subjects involved in\ \ structured GAI public feedback exercises \nare representative of diverse in-context\ \ user populations. \nHuman-AI Configuration; Harmful \nBias and Homogenization\ \ \nAI Actor Tasks: AI Deployment" - "2 \nThis work was informed by public feedback and consultations with diverse\ \ stakeholder groups as part of NIST’s \nGenerative AI Public Working Group (GAI\ \ PWG). The GAI PWG was an open, transparent, and collaborative \nprocess, facilitated\ \ via a virtual workspace, to obtain multistakeholder input on GAI risk management\ \ and to \ninform NIST’s approach. \nThe focus of the GAI PWG was limited to four\ \ primary considerations relevant to GAI: Governance, Content \nProvenance, Pre-deployment\ \ Testing, and Incident Disclosure (further described in Appendix A). As such,\ \ the \nsuggested actions in this document primarily address these considerations.\ \ \nFuture revisions of this profile will include additional AI RMF subcategories,\ \ risks, and suggested actions based \non additional considerations of GAI as\ \ the space evolves and empirical evidence indicates additional risks. A \nglossary\ \ of terms pertinent to GAI risk management will be developed and hosted on NIST’s\ \ Trustworthy & \nResponsible AI Resource Center (AIRC), and added to The Language\ \ of Trustworthy AI: An In-Depth Glossary of \nTerms. \nThis document was also\ \ informed by public comments and consultations from several Requests for Information.\ \ \n \n2. \nOverview of Risks Unique to or Exacerbated by GAI \nIn the context\ \ of the AI RMF, risk refers to the composite measure of an event’s probability\ \ (or \nlikelihood) of occurring and the magnitude or degree of the consequences\ \ of the corresponding event. \nSome risks can be assessed as likely to materialize\ \ in a given context, particularly those that have been \nempirically demonstrated\ \ in similar contexts. Other risks may be unlikely to materialize in a given \n\ context, or may be more speculative and therefore uncertain. \nAI risks can differ\ \ from or intensify traditional software risks. Likewise, GAI can exacerbate existing\ \ AI \nrisks, and creates unique risks. GAI risks can vary along many dimensions:\ \ \n• \nStage of the AI lifecycle: Risks can arise during design, development,\ \ deployment, operation, \nand/or decommissioning. \n• \nScope: Risks may exist\ \ at individual model or system levels, at the application or implementation \n\ levels (i.e., for a specific use case), or at the ecosystem level – that is, beyond\ \ a single system or \norganizational context. Examples of the latter include\ \ the expansion of “algorithmic \nmonocultures,3” resulting from repeated use\ \ of the same model, or impacts on access to \nopportunity, labor markets, and\ \ the creative economies.4 \n• \nSource of risk: Risks may emerge from factors\ \ related to the design, training, or operation of the \nGAI model itself, stemming\ \ in some cases from GAI model or system inputs, and in other cases, \nfrom GAI\ \ system outputs. Many GAI risks, however, originate from human behavior, including\ \ \n \n \n3 “Algorithmic monocultures” refers to the phenomenon in which repeated\ \ use of the same model or algorithm in \nconsequential decision-making settings\ \ like employment and lending can result in increased susceptibility by \nsystems\ \ to correlated failures (like unexpected shocks), due to multiple actors relying\ \ on the same algorithm. \n4 Many studies have projected the impact of AI on\ \ the workforce and labor markets. Fewer studies have examined \nthe impact of\ \ GAI on the labor market, though some industry surveys indicate that that both\ \ employees and \nemployers are pondering this disruption." - "44 \nMG-3.2-007 \nLeverage feedback and recommendations from organizational boards\ \ or \ncommittees related to the deployment of GAI applications and content \n\ provenance when using third-party pre-trained models. \nInformation Integrity;\ \ Value Chain \nand Component Integration \nMG-3.2-008 \nUse human moderation\ \ systems where appropriate to review generated content \nin accordance with human-AI\ \ configuration policies established in the Govern \nfunction, aligned with socio-cultural\ \ norms in the context of use, and for settings \nwhere AI models are demonstrated\ \ to perform poorly. \nHuman-AI Configuration \nMG-3.2-009 \nUse organizational\ \ risk tolerance to evaluate acceptable risks and performance \nmetrics and decommission\ \ or retrain pre-trained models that perform outside of \ndefined limits. \nCBRN\ \ Information or Capabilities; \nConfabulation \nAI Actor Tasks: AI Deployment,\ \ Operation and Monitoring, Third-party entities \n \nMANAGE 4.1: Post-deployment\ \ AI system monitoring plans are implemented, including mechanisms for capturing\ \ and evaluating \ninput from users and other relevant AI Actors, appeal and override,\ \ decommissioning, incident response, recovery, and change \nmanagement. \nAction\ \ ID \nSuggested Action \nGAI Risks \nMG-4.1-001 \nCollaborate with external researchers,\ \ industry experts, and community \nrepresentatives to maintain awareness of emerging\ \ best practices and \ntechnologies in measuring and managing identified risks.\ \ \nInformation Integrity; Harmful Bias \nand Homogenization \nMG-4.1-002 \nEstablish,\ \ maintain, and evaluate effectiveness of organizational processes and \nprocedures\ \ for post-deployment monitoring of GAI systems, particularly for \npotential\ \ confabulation, CBRN, or cyber risks. \nCBRN Information or Capabilities; \n\ Confabulation; Information \nSecurity \nMG-4.1-003 \nEvaluate the use of sentiment\ \ analysis to gauge user sentiment regarding GAI \ncontent performance and impact,\ \ and work in collaboration with AI Actors \nexperienced in user research and\ \ experience. \nHuman-AI Configuration \nMG-4.1-004 Implement active learning techniques\ \ to identify instances where the model fails \nor produces unexpected outputs.\ \ \nConfabulation \nMG-4.1-005 \nShare transparency reports with internal and\ \ external stakeholders that detail \nsteps taken to update the GAI system to\ \ enhance transparency and \naccountability. \nHuman-AI Configuration; Harmful\ \ \nBias and Homogenization \nMG-4.1-006 \nTrack dataset modifications for provenance\ \ by monitoring data deletions, \nrectification requests, and other changes that\ \ may impact the verifiability of \ncontent origins. \nInformation Integrity" - source_sentence: What techniques should be deployed to verify the accuracy and veracity of information generated by GAI systems? sentences: - "10 \nGAI systems can ease the unintentional production or dissemination of false,\ \ inaccurate, or misleading \ncontent (misinformation) at scale, particularly\ \ if the content stems from confabulations. \nGAI systems can also ease the deliberate\ \ production or dissemination of false or misleading information \n(disinformation)\ \ at scale, where an actor has the explicit intent to deceive or cause harm to\ \ others. Even \nvery subtle changes to text or images can manipulate human and\ \ machine perception. \nSimilarly, GAI systems could enable a higher degree of\ \ sophistication for malicious actors to produce \ndisinformation that is targeted\ \ towards specific demographics. Current and emerging multimodal models \nmake\ \ it possible to generate both text-based disinformation and highly realistic\ \ “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12\ \ Additional disinformation threats could be \nenabled by future GAI models trained\ \ on new data modalities. \nDisinformation and misinformation – both of which\ \ may be facilitated by GAI – may erode public trust in \ntrue or valid evidence\ \ and information, with downstream effects. For example, a synthetic image of a\ \ \nPentagon blast went viral and briefly caused a drop in the stock market. Generative\ \ AI models can also \nassist malicious actors in creating compelling imagery\ \ and propaganda to support disinformation \ncampaigns, which may not be photorealistic,\ \ but could enable these campaigns to gain more reach and \nengagement on social\ \ media platforms. Additionally, generative AI models can assist malicious actors\ \ in \ncreating fraudulent content intended to impersonate others. \nTrustworthy\ \ AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable\ \ and \nExplainable \n2.9. Information Security \nInformation security for computer\ \ systems and data is a mature field with widely accepted and \nstandardized practices\ \ for offensive and defensive cyber capabilities. GAI-based systems present two\ \ \nprimary information security risks: GAI could potentially discover or enable\ \ new cybersecurity risks by \nlowering the barriers for or easing automated exercise\ \ of offensive capabilities; simultaneously, it \nexpands the available attack\ \ surface, as GAI itself is vulnerable to attacks like prompt injection or data\ \ \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment\ \ cybersecurity attacks such as \nhacking, malware, and phishing. Reports have\ \ indicated that LLMs are already able to discover some \nvulnerabilities in systems\ \ (hardware, software, data) and write code to exploit them. Sophisticated threat\ \ \nactors might further these risks by developing GAI-powered security co-pilots\ \ for use in several parts of \nthe attack chain, including informing attackers\ \ on how to proactively evade threat detection and escalate \nprivileges after\ \ gaining system access. \nInformation security for GAI models and systems also\ \ includes maintaining availability of the GAI system \nand the integrity and\ \ (when applicable) the confidentiality of the GAI code, training data, and model\ \ \nweights. To identify and secure potential attack points in AI systems or specific\ \ components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4,\ \ to be published." - "25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance,\ \ suitability of data \nused at different stages of AI life cycle. \nHarmful Bias\ \ and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document\ \ fact-checking techniques to verify the accuracy and \nveracity of information\ \ generated by GAI systems, especially when the \ninformation comes from multiple\ \ (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement\ \ testing techniques to identify GAI produced content (e.g., \nsynthetic media)\ \ that might be indistinguishable from human-generated content. Information Integrity\ \ \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial\ \ testing to identify \nvulnerabilities and potential manipulation or misuse.\ \ \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV\ \ \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system\ \ performance and trustworthiness – and relevant \ntechnical standards and certifications\ \ – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI\ \ Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately\ \ understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation\ \ Integrity \nMP-3.4-002 Adapt existing training programs to include modules on\ \ digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop\ \ certification programs that test proficiency in managing GAI risks and \ninterpreting\ \ content provenance, relevant to specific industry and context. \nInformation\ \ Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities.\ \ \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor\ \ and track the outcomes of human-GAI \nconfigurations for future refinement and\ \ improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006\ \ \nInvolve the end-users, practitioners, and operators in GAI system in prototyping\ \ \nand testing activities. Make sure these tests cover various scenarios, such\ \ as crisis \nsituations or ethically sensitive contexts. \nHuman-AI Configuration;\ \ \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent,\ \ or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts,\ \ End-Users, Human Factors, Operation and Monitoring" - "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess\ \ intellectual property, \nand privacy, risks, including to examine whether use\ \ of proprietary or sensitive \ntraining data is consistent with applicable laws.\ \ \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight,\ \ Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood\ \ and magnitude of each identified impact (both potentially beneficial and harmful)\ \ based on expected use, past \nuses of AI systems in similar contexts, public\ \ incident reports, feedback from those external to the team that developed or\ \ deployed \nthe AI system, or other data are identified and documented. \nAction\ \ ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content\ \ provenance (e.g., probing a system's synthetic \ndata generation capabilities\ \ for potential misuse or vulnerabilities. \nInformation Integrity; Information\ \ \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI,\ \ such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered\ \ content. Enumerate and \nrank risks based on their likelihood and potential\ \ impact, and determine how well \nprovenance solutions address specific risks\ \ and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content;\ \ \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing\ \ use of GAI to end users in relevant contexts, while considering \nthe objective\ \ of disclosure, the context of use, the likelihood and magnitude of the \nrisk\ \ posed, the audience of the disclosure, as well as the frequency of the \ndisclosures.\ \ \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback\ \ processes based on risk assessment \nestimates. \nInformation Integrity; CBRN\ \ \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful\ \ Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises,\ \ GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure\ \ modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts\ \ arising from GAI systems interacting with, \nmanipulating, or generating content,\ \ and outlining known and potential \nvulnerabilities and the likelihood of their\ \ occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design,\ \ AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\n\ Users, Operation and Monitoring" - source_sentence: What is the phenomenon referred to as "confabulation" in GAI systems? sentences: - "50 \nParticipatory Engagement Methods \nOn an ad hoc or more structured basis,\ \ organizations can design and use a variety of channels to engage \nexternal\ \ stakeholders in product development or review. Focus groups with select experts\ \ can provide \nfeedback on a range of issues. Small user studies can provide\ \ feedback from representative groups or \npopulations. Anonymous surveys can\ \ be used to poll or gauge reactions to specific features. Participatory \nengagement\ \ methods are often less structured than field testing or red teaming, and are\ \ more \ncommonly used in early stages of AI or product development. \nField\ \ Testing \nField testing involves structured settings to evaluate risks and impacts\ \ and to simulate the conditions \nunder which the GAI system will be deployed.\ \ Field style tests can be adapted from a focus on user \npreferences and experiences\ \ towards AI risks and impacts – both negative and positive. When carried \nout\ \ with large groups of users, these tests can provide estimations of the likelihood\ \ of risks and impacts \nin real world interactions. \nOrganizations may also\ \ collect feedback on outcomes, harms, and user experience directly from users\ \ in \nthe production environment after a model has been released, in accordance\ \ with human subject \nstandards such as informed consent and compensation. Organizations\ \ should follow applicable human \nsubjects research requirements, and best practices\ \ such as informed consent and subject compensation, \nwhen implementing feedback\ \ activities. \nAI Red-teaming \nAI red-teaming is an evolving practice that references\ \ exercises often conducted in a controlled \nenvironment and in collaboration\ \ with AI developers building AI models to identify potential adverse \nbehavior\ \ or outcomes of a GAI model or system, how they could occur, and stress test\ \ safeguards”. AI \nred-teaming can be performed before or after AI models or\ \ systems are made available to the broader \npublic; this section focuses on\ \ red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs\ \ is related to the background and expertise of the AI red team \nitself. Demographically\ \ and interdisciplinarily diverse AI red teams can be used to identify flaws in\ \ the \nvarying contexts where GAI will be used. For best results, AI red teams\ \ should demonstrate domain \nexpertise, and awareness of socio-cultural aspects\ \ within the deployment context. AI red-teaming results \nshould be given additional\ \ analysis before they are incorporated into organizational governance and \n\ decision making, policy and procedural updates, and AI risk management efforts.\ \ \nVarious types of AI red-teaming may be appropriate, depending on the use case:\ \ \n• \nGeneral Public: Performed by general users (not necessarily AI or technical\ \ experts) who are \nexpected to use the model or interact with its outputs, and\ \ who bring their own lived \nexperiences and perspectives to the task of AI red-teaming.\ \ These individuals may have been \nprovided instructions and material to complete\ \ tasks which may elicit harmful model behaviors. \nThis type of exercise can\ \ be more effective with large groups of AI red-teamers. \n• \nExpert: Performed\ \ by specialists with expertise in the domain or specific AI red-teaming context\ \ \nof use (e.g., medicine, biotech, cybersecurity). \n• \nCombination: In scenarios\ \ when it is difficult to identify and recruit specialists with sufficient \ndomain\ \ and contextual expertise, AI red-teaming exercises may leverage both expert\ \ and" - "54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of\ \ AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/\ \ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023\ \ Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/\ \ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611\ \ \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/\ \ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture\ \ lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \n\ Boyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused\ \ System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416\ \ \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant. \nhttps://www.mandiant.com/resources/blog/securing-ai-pipeline\ \ \nBurgess, M. (2024) Generative AI’s Biggest Security Flaw Is Not Easy to Fix.\ \ WIRED. \nhttps://www.wired.com/story/generative-ai-prompt-injection-hacking/\ \ \nBurtell, M. et al. (2024) The Surprising Power of Next Word Prediction: Large\ \ Language Models \nExplained, Part 1. Georgetown Center for Security and Emerging\ \ Technology. \nhttps://cset.georgetown.edu/article/the-surprising-power-of-next-word-prediction-large-language-\n\ models-explained-part-1/ \nCanadian Centre for Cyber Security (2023) Generative\ \ artificial intelligence (AI) - ITSAP.00.041. \nhttps://www.cyber.gc.ca/en/guidance/generative-artificial-intelligence-ai-itsap00041\ \ \nCarlini, N., et al. (2021) Extracting Training Data from Large Language Models.\ \ Usenix. \nhttps://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting\ \ \nCarlini, N. et al. (2023) Quantifying Memorization Across Neural Language\ \ Models. ICLR 2023. \nhttps://arxiv.org/pdf/2202.07646 \nCarlini, N. et al. (2024)\ \ Stealing Part of a Production Language Model. arXiv. \nhttps://arxiv.org/abs/2403.06634\ \ \nChandra, B. et al. (2023) Dismantling the Disinformation Business of Chinese\ \ Influence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\n\ chinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship:\ \ A Dialectical Inquiry into Replika. \nResearchGate. https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\n\ AI_Companionship_A_Dialectical_Inquiry_into_Replika \nDahl, M. et al. (2024) Large\ \ Legal Fictions: Profiling Legal Hallucinations in Large Language Models. arXiv.\ \ \nhttps://arxiv.org/abs/2401.01301" - "6 \n2.2. Confabulation \n“Confabulation” refers to a phenomenon in which GAI\ \ systems generate and confidently present \nerroneous or false content in response\ \ to prompts. Confabulations also include generated outputs that \ndiverge from\ \ the prompts or other input or that contradict previously generated statements\ \ in the same \ncontext. These phenomena are colloquially also referred to as\ \ “hallucinations” or “fabrications.” \nConfabulations can occur across GAI outputs\ \ and contexts.9,10 Confabulations are a natural result of the \nway generative\ \ models are designed: they generate outputs that approximate the statistical\ \ distribution \nof their training data; for example, LLMs predict the next token\ \ or word in a sentence or phrase. While \nsuch statistical prediction can produce\ \ factually accurate and consistent outputs, it can also produce \noutputs that\ \ are factually inaccurate or internally inconsistent. This dynamic is particularly\ \ relevant when \nit comes to open-ended prompts for long-form responses and in\ \ domains which require highly \ncontextual and/or domain expertise. \nRisks\ \ from confabulations may arise when users believe false content – often due to\ \ the confident nature \nof the response – leading users to act upon or promote\ \ the false information. This poses a challenge for \nmany real-world applications,\ \ such as in healthcare, where a confabulated summary of patient \ninformation\ \ reports could cause doctors to make incorrect diagnoses and/or recommend the\ \ wrong \ntreatments. Risks of confabulated content may be especially important\ \ to monitor when integrating GAI \ninto applications involving consequential\ \ decision making. \nGAI outputs may also include confabulated logic or citations\ \ that purport to justify or explain the \nsystem’s answer, which may further\ \ mislead humans into inappropriately trusting the system’s output. \nFor instance,\ \ LLMs sometimes provide logical steps for how they arrived at an answer even\ \ when the \nanswer itself is incorrect. Similarly, an LLM could falsely assert\ \ that it is human or has human traits, \npotentially deceiving humans into believing\ \ they are speaking with another human. \nThe extent to which humans can be deceived\ \ by LLMs, the mechanisms by which this may occur, and the \npotential risks from\ \ adversarial prompting of such behavior are emerging areas of study. Given the\ \ wide \nrange of downstream impacts of GAI, it is difficult to estimate the downstream\ \ scale and impact of \nconfabulations. \nTrustworthy AI Characteristics: Fair\ \ with Harmful Bias Managed, Safe, Valid and Reliable, Explainable \nand Interpretable\ \ \n2.3. Dangerous, Violent, or Hateful Content \nGAI systems can produce content\ \ that is inciting, radicalizing, or threatening, or that glorifies violence, \n\ with greater ease and scale than other technologies. LLMs have been reported to\ \ generate dangerous or \nviolent recommendations, and some models have generated\ \ actionable instructions for dangerous or \n \n \n9 Confabulations of falsehoods\ \ are most commonly a problem for text-based outputs; for audio, image, or video\ \ \ncontent, creative generation of non-factual content can be a desired behavior.\ \ \n10 For example, legal confabulations have been shown to be pervasive in current\ \ state-of-the-art LLMs. See also, \ne.g.," - source_sentence: How can organizations address risks associated with the use of third-party data for GAI model inputs? sentences: - "48 \n• Data protection \n• Data retention \n• Consistency in use of defining\ \ key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \ \ \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n\ • Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV\ \ practices \n• Secure software development practices \n• Stakeholder engagement\ \ \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower\ \ protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing\ \ acceptable use policies and guidance for the use of GAI in formal human-AI teaming\ \ settings \nas well as different levels of human-AI configurations can help to\ \ decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment\ \ between systems and users. These practices are just \none example of adapting\ \ existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations\ \ \nOrganizations may seek to acquire, embed, incorporate, or use open-source\ \ or proprietary third-party \nGAI models, systems, or generated data for various\ \ applications across an enterprise. Use of these GAI \ntools and inputs has implications\ \ for all functions of the organization – including but not limited to \nacquisition,\ \ human resources, legal, compliance, and IT services – regardless of whether\ \ they are carried \nout by employees or third parties. Many of the actions cited\ \ above are relevant and options for \naddressing third-party considerations.\ \ \nThird party GAI integrations may give rise to increased intellectual property,\ \ data privacy, or information \nsecurity risks, pointing to the need for clear\ \ guidelines for transparency and risk management regarding \nthe collection and\ \ use of third-party data for model inputs. Organizations may consider varying\ \ risk \ncontrols for foundation models, fine-tuned models, and embedded tools,\ \ enhanced processes for \ninteracting with external GAI technologies or service\ \ providers. Organizations can apply standard or \nexisting risk controls and\ \ processes to proprietary or open-source GAI technologies, data, and third-party\ \ \nservice providers, including acquisition and procurement due diligence, requests\ \ for software bills of \nmaterials (SBOMs), application of service level agreements\ \ (SLAs), and statement on standards for \nattestation engagement (SSAE) reports\ \ to help with third-party transparency and risk management for \nGAI systems.\ \ \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in\ \ which GAI systems may be developed, used, and repurposed \ncomplicates risk\ \ mapping and pre-deployment measurement efforts. Robust test, evaluation, validation,\ \ \nand verification (TEVV) processes can be iteratively applied – and documented\ \ – in early stages of the AI \nlifecycle and informed by representative AI Actors\ \ (see Figure 3 of the AI RMF). Until new and rigorous" - "About AI at NIST: The National Institute of Standards and Technology (NIST) develops\ \ measurements, \ntechnology, tools, and standards to advance reliable, safe,\ \ transparent, explainable, privacy-enhanced, \nand fair artificial intelligence\ \ (AI) so that its full commercial and societal benefits can be realized without\ \ \nharm to people or the planet. NIST, which has conducted both fundamental and\ \ applied work on AI for \nmore than a decade, is also helping to fulfill the 2023\ \ Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the\ \ U.S. AI Safety Institute and the companion AI Safety Institute Consortium to\ \ \ncontinue the efforts set in motion by the E.O. to build the science necessary\ \ for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments:\ \ This report was accomplished with the many helpful comments and contributions\ \ \nfrom the community, including the NIST Generative AI Public Working Group,\ \ and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall,\ \ Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi.\ \ \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements\ \ \nNIST Technical Series Publication Identifier Syntax \nPublication History\ \ \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information\ \ \[email protected] \nNational Institute of Standards and Technology \n\ Attn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive\ \ (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional\ \ information about this publication and other NIST AI publications are available\ \ at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities,\ \ equipment, or materials may be identified in this document in \norder to adequately\ \ describe an experimental procedure or concept. Such identification is not intended\ \ to \nimply recommendation or endorsement by the National Institute of Standards\ \ and Technology, nor is it \nintended to imply that the entities, materials,\ \ or equipment are necessarily the best available for the \npurpose. Any mention\ \ of commercial, non-profit, academic partners, or their products, or references\ \ is \nfor information only; it is not intended to imply endorsement or recommendation\ \ by any U.S. \nGovernment agency." - "8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced,\ \ Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining,\ \ and operating (running inference on) GAI systems are resource-intensive activities,\ \ \nwith potentially large energy and environmental footprints. Energy and carbon\ \ emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training,\ \ fine-tuning, inference), the modality of the \ncontent, hardware used, and type\ \ of task or application. \nCurrent estimates suggest that training a single transformer\ \ LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco\ \ and New York. In a study comparing energy consumption and carbon \nemissions\ \ for LLM inference, generative tasks (e.g., text summarization) were found to\ \ be more energy- \nand carbon-intensive than discriminative or non-generative\ \ tasks (e.g., text classification). \nMethods for creating smaller versions of\ \ trained models, such as model distillation or compression, \ncould reduce environmental\ \ impacts at inference time, but training and tuning such models may still \n\ contribute to their environmental impacts. Currently there is no agreed upon method\ \ to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics:\ \ Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \n\ Bias exists in many forms and can become ingrained in automated systems. AI systems,\ \ including GAI \nsystems, can increase the speed and scale at which harmful biases\ \ manifest and are acted upon, \npotentially perpetuating and amplifying harms\ \ to individuals, groups, communities, organizations, and \nsociety. For example,\ \ when prompted to generate images of CEOs, doctors, lawyers, and judges, current\ \ \ntext-to-image models underrepresent women and/or racial minorities, and people\ \ with disabilities. \nImage generator models have also produced biased or stereotyped\ \ output for various demographic \ngroups and have difficulty producing non-stereotyped\ \ content even when the prompt specifically \nrequests image features that are\ \ inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem\ \ from their training data, can also cause representational harms or perpetuate\ \ or exacerbate \nbias based on race, gender, disability, or other protected classes.\ \ \nHarmful bias in GAI systems can also lead to harms via disparities between\ \ how a model performs for \ndifferent subgroups or languages (e.g., an LLM may\ \ perform less well for non-English languages or \ncertain dialects). Such disparities\ \ can contribute to discriminatory decision-making or amplification of \nexisting\ \ societal biases. In addition, GAI systems may be inappropriately trusted to\ \ perform similarly \nacross all subgroups, which could leave the groups facing\ \ underperformance with worse outcomes than \nif no GAI system were used. Disparate\ \ or reduced performance for lower-resource languages also \npresents challenges\ \ to model adoption, inclusion, and accessibility, and may make preservation of\ \ \nendangered languages more difficult if GAI systems become embedded in everyday\ \ processes that would \notherwise have been opportunities to use these languages.\ \ \nBias is mutually reinforcing with the problem of undesired homogenization,\ \ in which GAI systems \nproduce skewed distributions of outputs that are overly\ \ uniform (for example, repetitive aesthetic styles" --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("danicafisher/dfisher-base-sentence-transformer") # Run inference sentences = [ 'How can organizations address risks associated with the use of third-party data for GAI model inputs?', '48 \n• Data protection \n• Data retention \n• Consistency in use of defining key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n• Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV practices \n• Secure software development practices \n• Stakeholder engagement \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as different levels of human-AI configurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations \nOrganizations may seek to acquire, embed, incorporate, or use open-source or proprietary third-party \nGAI models, systems, or generated data for various applications across an enterprise. Use of these GAI \ntools and inputs has implications for all functions of the organization – including but not limited to \nacquisition, human resources, legal, compliance, and IT services – regardless of whether they are carried \nout by employees or third parties. Many of the actions cited above are relevant and options for \naddressing third-party considerations. \nThird party GAI integrations may give rise to increased intellectual property, data privacy, or information \nsecurity risks, pointing to the need for clear guidelines for transparency and risk management regarding \nthe collection and use of third-party data for model inputs. Organizations may consider varying risk \ncontrols for foundation models, fine-tuned models, and embedded tools, enhanced processes for \ninteracting with external GAI technologies or service providers. Organizations can apply standard or \nexisting risk controls and processes to proprietary or open-source GAI technologies, data, and third-party \nservice providers, including acquisition and procurement due diligence, requests for software bills of \nmaterials (SBOMs), application of service level agreements (SLAs), and statement on standards for \nattestation engagement (SSAE) reports to help with third-party transparency and risk management for \nGAI systems. \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in which GAI systems may be developed, used, and repurposed \ncomplicates risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, \nand verification (TEVV) processes can be iteratively applied – and documented – in early stages of the AI \nlifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous', '8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. \nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 128 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 128 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 23.14 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 56 tokens</li><li>mean: 247.42 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What measures are suggested to assess the environmental impact of AI model training and management activities?</code> | <code>37 <br>MS-2.11-005 <br>Assess the proportion of synthetic to non-synthetic training data and verify <br>training data is not overly homogenous or GAI-produced to mitigate concerns of <br>model collapse. <br>Harmful Bias and Homogenization <br>AI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, <br>Operation and Monitoring, TEVV <br> <br>MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP <br>function – are assessed and documented. <br>Action ID <br>Suggested Action <br>GAI Risks <br>MS-2.12-001 Assess safety to physical environments when deploying GAI systems. <br>Dangerous, Violent, or Hateful <br>Content <br>MS-2.12-002 Document anticipated environmental impacts of model development, <br>maintenance, and deployment in product design decisions. <br>Environmental <br>MS-2.12-003 <br>Measure or estimate environmental impacts (e.g., energy and water <br>consumption) for training, fine tuning, and deploying models: Verify tradeoffs <br>between resources used at inference time versus additional resources required <br>at training time. <br>Environmental <br>MS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and <br>applications, and address green-washing concerns. <br>Environmental <br>AI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV</code> | | <code>What are some limitations of current pre-deployment testing approaches for GAI applications?</code> | <code>49 <br>early lifecycle TEVV approaches are developed and matured for GAI, organizations may use <br>recommended “pre-deployment testing” practices to measure performance, capabilities, limits, risks, <br>and impacts. This section describes risk measurement and estimation as part of pre-deployment TEVV, <br>and examines the state of play for pre-deployment testing methodologies. <br>Limitations of Current Pre-deployment Test Approaches <br>Currently available pre-deployment TEVV processes used for GAI applications may be inadequate, non-<br>systematically applied, or fail to reflect or mismatched to deployment contexts. For example, the <br>anecdotal testing of GAI system capabilities through video games or standardized tests designed for <br>humans (e.g., intelligence tests, professional licensing exams) does not guarantee GAI system validity or <br>reliability in those domains. Similarly, jailbreaking or prompt engineering tests may not systematically <br>assess validity or reliability risks. <br>Measurement gaps can arise from mismatches between laboratory and real-world settings. Current <br>testing approaches often remain focused on laboratory conditions or restricted to benchmark test <br>datasets and in silico techniques that may not extrapolate well to—or directly assess GAI impacts in real-<br>world conditions. For example, current measurement gaps for GAI make it difficult to precisely estimate <br>its potential ecosystem-level or longitudinal risks and related political, social, and economic impacts. <br>Gaps between benchmarks and real-world use of GAI systems may likely be exacerbated due to prompt <br>sensitivity and broad heterogeneity of contexts of use. <br>A.1.5. Structured Public Feedback <br>Structured public feedback can be used to evaluate whether GAI systems are performing as intended <br>and to calibrate and verify traditional measurement methods. Examples of structured feedback include, <br>but are not limited to: <br>• <br>Participatory Engagement Methods: Methods used to solicit feedback from civil society groups, <br>affected communities, and users, including focus groups, small user studies, and surveys. <br>• <br>Field Testing: Methods used to determine how people interact with, consume, use, and make <br>sense of AI-generated information, and subsequent actions and effects, including UX, usability, <br>and other structured, randomized experiments. <br>• <br>AI Red-teaming: A structured testing exercise used to probe an AI system to find flaws and <br>vulnerabilities such as inaccurate, harmful, or discriminatory outputs, often in a controlled <br>environment and in collaboration with system developers. <br>Information gathered from structured public feedback can inform design, implementation, deployment <br>approval, maintenance, or decommissioning decisions. Results and insights gleaned from these exercises <br>can serve multiple purposes, including improving data quality and preprocessing, bolstering governance <br>decision making, and enhancing system documentation and debugging practices. When implementing <br>feedback activities, organizations should follow human subjects research requirements and best <br>practices such as informed consent and subject compensation.</code> | | <code>How can organizations adjust their governance regimes to effectively manage the unique risks associated with generative AI?</code> | <code>47 <br>Appendix A. Primary GAI Considerations <br>The following primary considerations were derived as overarching themes from the GAI PWG <br>consultation process. These considerations (Governance, Pre-Deployment Testing, Content Provenance, <br>and Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and <br>using GAI and also inform the Actions to Manage GAI risks. Information included about the primary <br>considerations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG. <br>Acknowledgments: These considerations could not have been surfaced without the helpful analysis and <br>contributions from the community and NIST staff GAI PWG leads: George Awad, Luca Belli, Harold Booth, <br>Mat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. <br>A.1. Governance <br>A.1.1. Overview <br>Like any other technology system, governance principles and techniques can be used to manage risks <br>related to generative AI models, capabilities, and applications. Organizations may choose to apply their <br>existing risk tiering to GAI systems, or they may opt to revise or update AI system risk levels to address <br>these unique GAI risks. This section describes how organizational governance regimes may be re-<br>evaluated and adjusted for GAI contexts. It also addresses third-party considerations for governing across <br>the AI value chain. <br>A.1.2. Organizational Governance <br>GAI opportunities, risks and long-term performance characteristics are typically less well-understood <br>than non-generative AI tools and may be perceived and acted upon by humans in ways that vary greatly. <br>Accordingly, GAI may call for different levels of oversight from AI Actors or different human-AI <br>configurations in order to manage their risks effectively. Organizations’ use of GAI systems may also <br>warrant additional human review, tracking and documentation, and greater management oversight. <br>AI technology can produce varied outputs in multiple modalities and present many classes of user <br>interfaces. This leads to a broader set of AI Actors interacting with GAI systems for widely differing <br>applications and contexts of use. These can include data labeling and preparation, development of GAI <br>models, content moderation, code generation and review, text generation and editing, image and video <br>generation, summarization, search, and chat. These activities can take place within organizational <br>settings or in the public domain. <br>Organizations can restrict AI applications that cause harm, exceed stated risk tolerances, or that conflict <br>with their tolerances or values. Governance tools and protocols that are applied to other types of AI <br>systems can be applied to GAI systems. These plans and actions include: <br>• Accessibility and reasonable <br>accommodations <br>• AI actor credentials and qualifications <br>• Alignment to organizational values <br>• Auditing and assessment <br>• Change-management controls <br>• Commercial use <br>• Data provenance</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a --> - **Maximum Sequence Length:** 256 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("danicafisher/dfisher-base-sentence-transformer") # Run inference sentences = [ 'How can organizations address risks associated with the use of third-party data for GAI model inputs?', '48 \n• Data protection \n• Data retention \n• Consistency in use of defining key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n• Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV practices \n• Secure software development practices \n• Stakeholder engagement \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as different levels of human-AI configurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations \nOrganizations may seek to acquire, embed, incorporate, or use open-source or proprietary third-party \nGAI models, systems, or generated data for various applications across an enterprise. Use of these GAI \ntools and inputs has implications for all functions of the organization – including but not limited to \nacquisition, human resources, legal, compliance, and IT services – regardless of whether they are carried \nout by employees or third parties. Many of the actions cited above are relevant and options for \naddressing third-party considerations. \nThird party GAI integrations may give rise to increased intellectual property, data privacy, or information \nsecurity risks, pointing to the need for clear guidelines for transparency and risk management regarding \nthe collection and use of third-party data for model inputs. Organizations may consider varying risk \ncontrols for foundation models, fine-tuned models, and embedded tools, enhanced processes for \ninteracting with external GAI technologies or service providers. Organizations can apply standard or \nexisting risk controls and processes to proprietary or open-source GAI technologies, data, and third-party \nservice providers, including acquisition and procurement due diligence, requests for software bills of \nmaterials (SBOMs), application of service level agreements (SLAs), and statement on standards for \nattestation engagement (SSAE) reports to help with third-party transparency and risk management for \nGAI systems. \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in which GAI systems may be developed, used, and repurposed \ncomplicates risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, \nand verification (TEVV) processes can be iteratively applied – and documented – in early stages of the AI \nlifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous', '8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. \nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 128 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 128 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 17 tokens</li><li>mean: 23.14 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 56 tokens</li><li>mean: 247.42 tokens</li><li>max: 256 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What measures are suggested to assess the environmental impact of AI model training and management activities?</code> | <code>37 <br>MS-2.11-005 <br>Assess the proportion of synthetic to non-synthetic training data and verify <br>training data is not overly homogenous or GAI-produced to mitigate concerns of <br>model collapse. <br>Harmful Bias and Homogenization <br>AI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, <br>Operation and Monitoring, TEVV <br> <br>MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP <br>function – are assessed and documented. <br>Action ID <br>Suggested Action <br>GAI Risks <br>MS-2.12-001 Assess safety to physical environments when deploying GAI systems. <br>Dangerous, Violent, or Hateful <br>Content <br>MS-2.12-002 Document anticipated environmental impacts of model development, <br>maintenance, and deployment in product design decisions. <br>Environmental <br>MS-2.12-003 <br>Measure or estimate environmental impacts (e.g., energy and water <br>consumption) for training, fine tuning, and deploying models: Verify tradeoffs <br>between resources used at inference time versus additional resources required <br>at training time. <br>Environmental <br>MS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and <br>applications, and address green-washing concerns. <br>Environmental <br>AI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV</code> | | <code>What are some limitations of current pre-deployment testing approaches for GAI applications?</code> | <code>49 <br>early lifecycle TEVV approaches are developed and matured for GAI, organizations may use <br>recommended “pre-deployment testing” practices to measure performance, capabilities, limits, risks, <br>and impacts. This section describes risk measurement and estimation as part of pre-deployment TEVV, <br>and examines the state of play for pre-deployment testing methodologies. <br>Limitations of Current Pre-deployment Test Approaches <br>Currently available pre-deployment TEVV processes used for GAI applications may be inadequate, non-<br>systematically applied, or fail to reflect or mismatched to deployment contexts. For example, the <br>anecdotal testing of GAI system capabilities through video games or standardized tests designed for <br>humans (e.g., intelligence tests, professional licensing exams) does not guarantee GAI system validity or <br>reliability in those domains. Similarly, jailbreaking or prompt engineering tests may not systematically <br>assess validity or reliability risks. <br>Measurement gaps can arise from mismatches between laboratory and real-world settings. Current <br>testing approaches often remain focused on laboratory conditions or restricted to benchmark test <br>datasets and in silico techniques that may not extrapolate well to—or directly assess GAI impacts in real-<br>world conditions. For example, current measurement gaps for GAI make it difficult to precisely estimate <br>its potential ecosystem-level or longitudinal risks and related political, social, and economic impacts. <br>Gaps between benchmarks and real-world use of GAI systems may likely be exacerbated due to prompt <br>sensitivity and broad heterogeneity of contexts of use. <br>A.1.5. Structured Public Feedback <br>Structured public feedback can be used to evaluate whether GAI systems are performing as intended <br>and to calibrate and verify traditional measurement methods. Examples of structured feedback include, <br>but are not limited to: <br>• <br>Participatory Engagement Methods: Methods used to solicit feedback from civil society groups, <br>affected communities, and users, including focus groups, small user studies, and surveys. <br>• <br>Field Testing: Methods used to determine how people interact with, consume, use, and make <br>sense of AI-generated information, and subsequent actions and effects, including UX, usability, <br>and other structured, randomized experiments. <br>• <br>AI Red-teaming: A structured testing exercise used to probe an AI system to find flaws and <br>vulnerabilities such as inaccurate, harmful, or discriminatory outputs, often in a controlled <br>environment and in collaboration with system developers. <br>Information gathered from structured public feedback can inform design, implementation, deployment <br>approval, maintenance, or decommissioning decisions. Results and insights gleaned from these exercises <br>can serve multiple purposes, including improving data quality and preprocessing, bolstering governance <br>decision making, and enhancing system documentation and debugging practices. When implementing <br>feedback activities, organizations should follow human subjects research requirements and best <br>practices such as informed consent and subject compensation.</code> | | <code>How can organizations adjust their governance regimes to effectively manage the unique risks associated with generative AI?</code> | <code>47 <br>Appendix A. Primary GAI Considerations <br>The following primary considerations were derived as overarching themes from the GAI PWG <br>consultation process. These considerations (Governance, Pre-Deployment Testing, Content Provenance, <br>and Incident Disclosure) are relevant for voluntary use by any organization designing, developing, and <br>using GAI and also inform the Actions to Manage GAI risks. Information included about the primary <br>considerations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG. <br>Acknowledgments: These considerations could not have been surfaced without the helpful analysis and <br>contributions from the community and NIST staff GAI PWG leads: George Awad, Luca Belli, Harold Booth, <br>Mat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz, Martin Stanley, and Kyra Yee. <br>A.1. Governance <br>A.1.1. Overview <br>Like any other technology system, governance principles and techniques can be used to manage risks <br>related to generative AI models, capabilities, and applications. Organizations may choose to apply their <br>existing risk tiering to GAI systems, or they may opt to revise or update AI system risk levels to address <br>these unique GAI risks. This section describes how organizational governance regimes may be re-<br>evaluated and adjusted for GAI contexts. It also addresses third-party considerations for governing across <br>the AI value chain. <br>A.1.2. Organizational Governance <br>GAI opportunities, risks and long-term performance characteristics are typically less well-understood <br>than non-generative AI tools and may be perceived and acted upon by humans in ways that vary greatly. <br>Accordingly, GAI may call for different levels of oversight from AI Actors or different human-AI <br>configurations in order to manage their risks effectively. Organizations’ use of GAI systems may also <br>warrant additional human review, tracking and documentation, and greater management oversight. <br>AI technology can produce varied outputs in multiple modalities and present many classes of user <br>interfaces. This leads to a broader set of AI Actors interacting with GAI systems for widely differing <br>applications and contexts of use. These can include data labeling and preparation, development of GAI <br>models, content moderation, code generation and review, text generation and editing, image and video <br>generation, summarization, search, and chat. These activities can take place within organizational <br>settings or in the public domain. <br>Organizations can restrict AI applications that cause harm, exceed stated risk tolerances, or that conflict <br>with their tolerances or values. Governance tools and protocols that are applied to other types of AI <br>systems can be applied to GAI systems. These plans and actions include: <br>• Accessibility and reasonable <br>accommodations <br>• AI actor credentials and qualifications <br>• Alignment to organizational values <br>• Auditing and assessment <br>• Change-management controls <br>• Commercial use <br>• Data provenance</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 20 - `per_device_eval_batch_size`: 20 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:128", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "What is the title of the publication released by NIST in July 2024 regarding artificial intelligence?", "sentences": ["NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1", "NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1 \n \nJuly 2024 \n \n \n \n \nU.S. Department of Commerce \nGina M. Raimondo, Secretary \nNational Institute of Standards and Technology \nLaurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology", "37 \nMS-2.11-005 \nAssess the proportion of synthetic to non-synthetic training data and verify \ntraining data is not overly homogenous or GAI-produced to mitigate concerns of \nmodel collapse. \nHarmful Bias and Homogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Affected Individuals and Communities, Domain Experts, End-Users, \nOperation and Monitoring, TEVV \n \nMEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP \nfunction – are assessed and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.12-001 Assess safety to physical environments when deploying GAI systems. \nDangerous, Violent, or Hateful \nContent \nMS-2.12-002 Document anticipated environmental impacts of model development, \nmaintenance, and deployment in product design decisions. \nEnvironmental \nMS-2.12-003 \nMeasure or estimate environmental impacts (e.g., energy and water \nconsumption) for training, fine tuning, and deploying models: Verify tradeoffs \nbetween resources used at inference time versus additional resources required \nat training time. \nEnvironmental \nMS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and \napplications, and address green-washing concerns. \nEnvironmental \nAI Actor Tasks: AI Deployment, AI Impact Assessment, Domain Experts, Operation and Monitoring, TEVV"]}, {"source_sentence": "What are the four primary considerations relevant to Generative AI (GAI) that the GAI Public Working Group focused on?", "sentences": ["23 \nMP-1.1-002 \nDetermine and document the expected and acceptable GAI system context of \nuse in collaboration with socio-cultural and other domain experts, by assessing: \nAssumptions and limitations; Direct value to the organization; Intended \noperational environment and observed usage patterns; Potential positive and \nnegative impacts to individuals, public safety, groups, communities, \norganizations, democratic institutions, and the physical environment; Social \nnorms and expectations. \nHarmful Bias and Homogenization \nMP-1.1-003 \nDocument risk measurement plans to address identified risks. Plans may \ninclude, as applicable: Individual and group cognitive biases (e.g., confirmation \nbias, funding bias, groupthink) for AI Actors involved in the design, \nimplementation, and use of GAI systems; Known past GAI system incidents and \nfailure modes; In-context use and foreseeable misuse, abuse, and off-label use; \nOver reliance on quantitative metrics and methodologies without sufficient \nawareness of their limitations in the context(s) of use; Standard measurement \nand structured human feedback approaches; Anticipated human-AI \nconfigurations. \nHuman-AI Configuration; Harmful \nBias and Homogenization; \nDangerous, Violent, or Hateful \nContent \nMP-1.1-004 \nIdentify and document foreseeable illegal uses or applications of the GAI system \nthat surpass organizational risk tolerances. \nCBRN Information or Capabilities; \nDangerous, Violent, or Hateful \nContent; Obscene, Degrading, \nand/or Abusive Content \nAI Actor Tasks: AI Deployment \n \nMAP 1.2: Interdisciplinary AI Actors, competencies, skills, and capacities for establishing context reflect demographic diversity and \nbroad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary \ncollaboration are prioritized. \nAction ID \nSuggested Action \nGAI Risks \nMP-1.2-001 \nEstablish and empower interdisciplinary teams that reflect a wide range of \ncapabilities, competencies, demographic groups, domain expertise, educational \nbackgrounds, lived experiences, professions, and skills across the enterprise to \ninform and conduct risk measurement and management functions. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMP-1.2-002 \nVerify that data or benchmarks used in risk measurement, and users, \nparticipants, or subjects involved in structured GAI public feedback exercises \nare representative of diverse in-context user populations. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nAI Actor Tasks: AI Deployment", "2 \nThis work was informed by public feedback and consultations with diverse stakeholder groups as part of NIST’s \nGenerative AI Public Working Group (GAI PWG). The GAI PWG was an open, transparent, and collaborative \nprocess, facilitated via a virtual workspace, to obtain multistakeholder input on GAI risk management and to \ninform NIST’s approach. \nThe focus of the GAI PWG was limited to four primary considerations relevant to GAI: Governance, Content \nProvenance, Pre-deployment Testing, and Incident Disclosure (further described in Appendix A). As such, the \nsuggested actions in this document primarily address these considerations. \nFuture revisions of this profile will include additional AI RMF subcategories, risks, and suggested actions based \non additional considerations of GAI as the space evolves and empirical evidence indicates additional risks. A \nglossary of terms pertinent to GAI risk management will be developed and hosted on NIST’s Trustworthy & \nResponsible AI Resource Center (AIRC), and added to The Language of Trustworthy AI: An In-Depth Glossary of \nTerms. \nThis document was also informed by public comments and consultations from several Requests for Information. \n \n2. \nOverview of Risks Unique to or Exacerbated by GAI \nIn the context of the AI RMF, risk refers to the composite measure of an event’s probability (or \nlikelihood) of occurring and the magnitude or degree of the consequences of the corresponding event. \nSome risks can be assessed as likely to materialize in a given context, particularly those that have been \nempirically demonstrated in similar contexts. Other risks may be unlikely to materialize in a given \ncontext, or may be more speculative and therefore uncertain. \nAI risks can differ from or intensify traditional software risks. Likewise, GAI can exacerbate existing AI \nrisks, and creates unique risks. GAI risks can vary along many dimensions: \n• \nStage of the AI lifecycle: Risks can arise during design, development, deployment, operation, \nand/or decommissioning. \n• \nScope: Risks may exist at individual model or system levels, at the application or implementation \nlevels (i.e., for a specific use case), or at the ecosystem level – that is, beyond a single system or \norganizational context. Examples of the latter include the expansion of “algorithmic \nmonocultures,3” resulting from repeated use of the same model, or impacts on access to \nopportunity, labor markets, and the creative economies.4 \n• \nSource of risk: Risks may emerge from factors related to the design, training, or operation of the \nGAI model itself, stemming in some cases from GAI model or system inputs, and in other cases, \nfrom GAI system outputs. Many GAI risks, however, originate from human behavior, including \n \n \n3 “Algorithmic monocultures” refers to the phenomenon in which repeated use of the same model or algorithm in \nconsequential decision-making settings like employment and lending can result in increased susceptibility by \nsystems to correlated failures (like unexpected shocks), due to multiple actors relying on the same algorithm. \n4 Many studies have projected the impact of AI on the workforce and labor markets. Fewer studies have examined \nthe impact of GAI on the labor market, though some industry surveys indicate that that both employees and \nemployers are pondering this disruption.", "44 \nMG-3.2-007 \nLeverage feedback and recommendations from organizational boards or \ncommittees related to the deployment of GAI applications and content \nprovenance when using third-party pre-trained models. \nInformation Integrity; Value Chain \nand Component Integration \nMG-3.2-008 \nUse human moderation systems where appropriate to review generated content \nin accordance with human-AI configuration policies established in the Govern \nfunction, aligned with socio-cultural norms in the context of use, and for settings \nwhere AI models are demonstrated to perform poorly. \nHuman-AI Configuration \nMG-3.2-009 \nUse organizational risk tolerance to evaluate acceptable risks and performance \nmetrics and decommission or retrain pre-trained models that perform outside of \ndefined limits. \nCBRN Information or Capabilities; \nConfabulation \nAI Actor Tasks: AI Deployment, Operation and Monitoring, Third-party entities \n \nMANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating \ninput from users and other relevant AI Actors, appeal and override, decommissioning, incident response, recovery, and change \nmanagement. \nAction ID \nSuggested Action \nGAI Risks \nMG-4.1-001 \nCollaborate with external researchers, industry experts, and community \nrepresentatives to maintain awareness of emerging best practices and \ntechnologies in measuring and managing identified risks. \nInformation Integrity; Harmful Bias \nand Homogenization \nMG-4.1-002 \nEstablish, maintain, and evaluate effectiveness of organizational processes and \nprocedures for post-deployment monitoring of GAI systems, particularly for \npotential confabulation, CBRN, or cyber risks. \nCBRN Information or Capabilities; \nConfabulation; Information \nSecurity \nMG-4.1-003 \nEvaluate the use of sentiment analysis to gauge user sentiment regarding GAI \ncontent performance and impact, and work in collaboration with AI Actors \nexperienced in user research and experience. \nHuman-AI Configuration \nMG-4.1-004 Implement active learning techniques to identify instances where the model fails \nor produces unexpected outputs. \nConfabulation \nMG-4.1-005 \nShare transparency reports with internal and external stakeholders that detail \nsteps taken to update the GAI system to enhance transparency and \naccountability. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-4.1-006 \nTrack dataset modifications for provenance by monitoring data deletions, \nrectification requests, and other changes that may impact the verifiability of \ncontent origins. \nInformation Integrity"]}, {"source_sentence": "What techniques should be deployed to verify the accuracy and veracity of information generated by GAI systems?", "sentences": ["10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations. \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning. \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.", "25 \nMP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. \nHarmful Bias and Homogenization; \nIntellectual Property \nMP-2.3-003 \nDeploy and document fact-checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. \nInformation Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., \nsynthetic media) that might be indistinguishable from human-generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. \nInformation Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-3.4-001 \nEvaluate whether GAI operators and end-users can accurately understand \ncontent lineage and origin. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. \nInformation Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. \nInformation Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. \nHuman-AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human-GAI \nconfigurations for future refinement and improvements. \nHuman-AI Configuration; \nInformation Integrity \nMP-3.4-006 \nInvolve the end-users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios, such as crisis \nsituations or ethically sensitive contexts. \nHuman-AI Configuration; \nInformation Integrity; Harmful Bias \nand Homogenization; Dangerous, \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End-Users, Human Factors, Operation and Monitoring", "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002 \nIdentify potential content provenance harms of GAI, such as misinformation or \ndisinformation, deepfakes, including NCII, or tampered content. Enumerate and \nrank risks based on their likelihood and potential impact, and determine how well \nprovenance solutions address specific risks and/or harms. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; \nObscene, Degrading, and/or \nAbusive Content \nMP-5.1-003 \nConsider disclosing use of GAI to end users in relevant contexts, while considering \nthe objective of disclosure, the context of use, the likelihood and magnitude of the \nrisk posed, the audience of the disclosure, as well as the frequency of the \ndisclosures. \nHuman-AI Configuration \nMP-5.1-004 Prioritize GAI structured public feedback processes based on risk assessment \nestimates. \nInformation Integrity; CBRN \nInformation or Capabilities; \nDangerous, Violent, or Hateful \nContent; Harmful Bias and \nHomogenization \nMP-5.1-005 Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to \nidentify anomalous or unforeseen failure modes. \nInformation Security \nMP-5.1-006 \nProfile threats and negative impacts arising from GAI systems interacting with, \nmanipulating, or generating content, and outlining known and potential \nvulnerabilities and the likelihood of their occurrence. \nInformation Security \nAI Actor Tasks: AI Deployment, AI Design, AI Development, AI Impact Assessment, Affected Individuals and Communities, End-\nUsers, Operation and Monitoring"]}, {"source_sentence": "What is the phenomenon referred to as \"confabulation\" in GAI systems?", "sentences": ["50 \nParticipatory Engagement Methods \nOn an ad hoc or more structured basis, organizations can design and use a variety of channels to engage \nexternal stakeholders in product development or review. Focus groups with select experts can provide \nfeedback on a range of issues. Small user studies can provide feedback from representative groups or \npopulations. Anonymous surveys can be used to poll or gauge reactions to specific features. Participatory \nengagement methods are often less structured than field testing or red teaming, and are more \ncommonly used in early stages of AI or product development. \nField Testing \nField testing involves structured settings to evaluate risks and impacts and to simulate the conditions \nunder which the GAI system will be deployed. Field style tests can be adapted from a focus on user \npreferences and experiences towards AI risks and impacts – both negative and positive. When carried \nout with large groups of users, these tests can provide estimations of the likelihood of risks and impacts \nin real world interactions. \nOrganizations may also collect feedback on outcomes, harms, and user experience directly from users in \nthe production environment after a model has been released, in accordance with human subject \nstandards such as informed consent and compensation. Organizations should follow applicable human \nsubjects research requirements, and best practices such as informed consent and subject compensation, \nwhen implementing feedback activities. \nAI Red-teaming \nAI red-teaming is an evolving practice that references exercises often conducted in a controlled \nenvironment and in collaboration with AI developers building AI models to identify potential adverse \nbehavior or outcomes of a GAI model or system, how they could occur, and stress test safeguards”. AI \nred-teaming can be performed before or after AI models or systems are made available to the broader \npublic; this section focuses on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming outputs is related to the background and expertise of the AI red team \nitself. Demographically and interdisciplinarily diverse AI red teams can be used to identify flaws in the \nvarying contexts where GAI will be used. For best results, AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural aspects within the deployment context. AI red-teaming results \nshould be given additional analysis before they are incorporated into organizational governance and \ndecision making, policy and procedural updates, and AI risk management efforts. \nVarious types of AI red-teaming may be appropriate, depending on the use case: \n• \nGeneral Public: Performed by general users (not necessarily AI or technical experts) who are \nexpected to use the model or interact with its outputs, and who bring their own lived \nexperiences and perspectives to the task of AI red-teaming. These individuals may have been \nprovided instructions and material to complete tasks which may elicit harmful model behaviors. \nThis type of exercise can be more effective with large groups of AI red-teamers. \n• \nExpert: Performed by specialists with expertise in the domain or specific AI red-teaming context \nof use (e.g., medicine, biotech, cybersecurity). \n• \nCombination: In scenarios when it is difficult to identify and recruit specialists with sufficient \ndomain and contextual expertise, AI red-teaming exercises may leverage both expert and", "54 \nAppendix B. References \nAcemoglu, D. (2024) The Simple Macroeconomics of AI https://www.nber.org/papers/w32487 \nAI Incident Database. https://incidentdatabase.ai/ \nAtherton, D. (2024) Deepfakes and Child Safety: A Survey and Analysis of 2023 Incidents and Responses. \nAI Incident Database. https://incidentdatabase.ai/blog/deepfakes-and-child-safety/ \nBadyal, N. et al. (2023) Intentional Biases in LLM Responses. arXiv. https://arxiv.org/pdf/2311.07611 \nBing Chat: Data Exfiltration Exploit Explained. Embrace The Red. \nhttps://embracethered.com/blog/posts/2023/bing-chat-data-exfiltration-poc-and-fix/ \nBommasani, R. et al. (2022) Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome \nHomogenization? arXiv. https://arxiv.org/pdf/2211.13972 \nBoyarskaya, M. et al. (2020) Overcoming Failures of Imagination in AI Infused System Development and \nDeployment. arXiv. https://arxiv.org/pdf/2011.13416 \nBrowne, D. et al. (2023) Securing the AI Pipeline. Mandiant. \nhttps://www.mandiant.com/resources/blog/securing-ai-pipeline \nBurgess, M. (2024) Generative AI’s Biggest Security Flaw Is Not Easy to Fix. WIRED. \nhttps://www.wired.com/story/generative-ai-prompt-injection-hacking/ \nBurtell, M. et al. (2024) The Surprising Power of Next Word Prediction: Large Language Models \nExplained, Part 1. Georgetown Center for Security and Emerging Technology. \nhttps://cset.georgetown.edu/article/the-surprising-power-of-next-word-prediction-large-language-\nmodels-explained-part-1/ \nCanadian Centre for Cyber Security (2023) Generative artificial intelligence (AI) - ITSAP.00.041. \nhttps://www.cyber.gc.ca/en/guidance/generative-artificial-intelligence-ai-itsap00041 \nCarlini, N., et al. (2021) Extracting Training Data from Large Language Models. Usenix. \nhttps://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting \nCarlini, N. et al. (2023) Quantifying Memorization Across Neural Language Models. ICLR 2023. \nhttps://arxiv.org/pdf/2202.07646 \nCarlini, N. et al. (2024) Stealing Part of a Production Language Model. arXiv. \nhttps://arxiv.org/abs/2403.06634 \nChandra, B. et al. (2023) Dismantling the Disinformation Business of Chinese Influence Operations. \nRAND. https://www.rand.org/pubs/commentary/2023/10/dismantling-the-disinformation-business-of-\nchinese.html \nCiriello, R. et al. (2024) Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika. \nResearchGate. https://www.researchgate.net/publication/374505266_Ethical_Tensions_in_Human-\nAI_Companionship_A_Dialectical_Inquiry_into_Replika \nDahl, M. et al. (2024) Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models. arXiv. \nhttps://arxiv.org/abs/2401.01301", "6 \n2.2. Confabulation \n“Confabulation” refers to a phenomenon in which GAI systems generate and confidently present \nerroneous or false content in response to prompts. Confabulations also include generated outputs that \ndiverge from the prompts or other input or that contradict previously generated statements in the same \ncontext. These phenomena are colloquially also referred to as “hallucinations” or “fabrications.” \nConfabulations can occur across GAI outputs and contexts.9,10 Confabulations are a natural result of the \nway generative models are designed: they generate outputs that approximate the statistical distribution \nof their training data; for example, LLMs predict the next token or word in a sentence or phrase. While \nsuch statistical prediction can produce factually accurate and consistent outputs, it can also produce \noutputs that are factually inaccurate or internally inconsistent. This dynamic is particularly relevant when \nit comes to open-ended prompts for long-form responses and in domains which require highly \ncontextual and/or domain expertise. \nRisks from confabulations may arise when users believe false content – often due to the confident nature \nof the response – leading users to act upon or promote the false information. This poses a challenge for \nmany real-world applications, such as in healthcare, where a confabulated summary of patient \ninformation reports could cause doctors to make incorrect diagnoses and/or recommend the wrong \ntreatments. Risks of confabulated content may be especially important to monitor when integrating GAI \ninto applications involving consequential decision making. \nGAI outputs may also include confabulated logic or citations that purport to justify or explain the \nsystem’s answer, which may further mislead humans into inappropriately trusting the system’s output. \nFor instance, LLMs sometimes provide logical steps for how they arrived at an answer even when the \nanswer itself is incorrect. Similarly, an LLM could falsely assert that it is human or has human traits, \npotentially deceiving humans into believing they are speaking with another human. \nThe extent to which humans can be deceived by LLMs, the mechanisms by which this may occur, and the \npotential risks from adversarial prompting of such behavior are emerging areas of study. Given the wide \nrange of downstream impacts of GAI, it is difficult to estimate the downstream scale and impact of \nconfabulations. \nTrustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe, Valid and Reliable, Explainable \nand Interpretable \n2.3. Dangerous, Violent, or Hateful Content \nGAI systems can produce content that is inciting, radicalizing, or threatening, or that glorifies violence, \nwith greater ease and scale than other technologies. LLMs have been reported to generate dangerous or \nviolent recommendations, and some models have generated actionable instructions for dangerous or \n \n \n9 Confabulations of falsehoods are most commonly a problem for text-based outputs; for audio, image, or video \ncontent, creative generation of non-factual content can be a desired behavior. \n10 For example, legal confabulations have been shown to be pervasive in current state-of-the-art LLMs. See also, \ne.g.,"]}, {"source_sentence": "How can organizations address risks associated with the use of third-party data for GAI model inputs?", "sentences": ["48 \n• Data protection \n• Data retention \n• Consistency in use of defining key terms \n• Decommissioning \n• Discouraging anonymous use \n• Education \n• Impact assessments \n• Incident response \n• Monitoring \n• Opt-outs \n• Risk-based controls \n• Risk mapping and measurement \n• Science-backed TEVV practices \n• Secure software development practices \n• Stakeholder engagement \n• Synthetic content detection and \nlabeling tools and techniques \n• Whistleblower protections \n• Workforce diversity and \ninterdisciplinary teams\nEstablishing acceptable use policies and guidance for the use of GAI in formal human-AI teaming settings \nas well as different levels of human-AI configurations can help to decrease risks arising from misuse, \nabuse, inappropriate repurpose, and misalignment between systems and users. These practices are just \none example of adapting existing governance protocols for GAI contexts. \nA.1.3. Third-Party Considerations \nOrganizations may seek to acquire, embed, incorporate, or use open-source or proprietary third-party \nGAI models, systems, or generated data for various applications across an enterprise. Use of these GAI \ntools and inputs has implications for all functions of the organization – including but not limited to \nacquisition, human resources, legal, compliance, and IT services – regardless of whether they are carried \nout by employees or third parties. Many of the actions cited above are relevant and options for \naddressing third-party considerations. \nThird party GAI integrations may give rise to increased intellectual property, data privacy, or information \nsecurity risks, pointing to the need for clear guidelines for transparency and risk management regarding \nthe collection and use of third-party data for model inputs. Organizations may consider varying risk \ncontrols for foundation models, fine-tuned models, and embedded tools, enhanced processes for \ninteracting with external GAI technologies or service providers. Organizations can apply standard or \nexisting risk controls and processes to proprietary or open-source GAI technologies, data, and third-party \nservice providers, including acquisition and procurement due diligence, requests for software bills of \nmaterials (SBOMs), application of service level agreements (SLAs), and statement on standards for \nattestation engagement (SSAE) reports to help with third-party transparency and risk management for \nGAI systems. \nA.1.4. Pre-Deployment Testing \nOverview \nThe diverse ways and contexts in which GAI systems may be developed, used, and repurposed \ncomplicates risk mapping and pre-deployment measurement efforts. Robust test, evaluation, validation, \nand verification (TEVV) processes can be iteratively applied – and documented – in early stages of the AI \nlifecycle and informed by representative AI Actors (see Figure 3 of the AI RMF). Until new and rigorous", "About AI at NIST: The National Institute of Standards and Technology (NIST) develops measurements, \ntechnology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, \nand fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without \nharm to people or the planet. NIST, which has conducted both fundamental and applied work on AI for \nmore than a decade, is also helping to fulfill the 2023 Executive Order on Safe, Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute and the companion AI Safety Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build the science necessary for safe, secure, and \ntrustworthy development and use of AI. \nAcknowledgments: This report was accomplished with the many helpful comments and contributions \nfrom the community, including the NIST Generative AI Public Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz, Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing Statements \nNIST Technical Series Publication Identifier Syntax \nPublication History \nApproved by the NIST Editorial Review Board on 07-25-2024 \nContact Information \[email protected] \nNational Institute of Standards and Technology \nAttn: NIST AI Innovation Lab, Information Technology Laboratory \n100 Bureau Drive (Mail Stop 8900) Gaithersburg, MD 20899-8900 \nAdditional Information \nAdditional information about this publication and other NIST AI publications are available at \nhttps://airc.nist.gov/Home. \n \nDisclaimer: Certain commercial entities, equipment, or materials may be identified in this document in \norder to adequately describe an experimental procedure or concept. Such identification is not intended to \nimply recommendation or endorsement by the National Institute of Standards and Technology, nor is it \nintended to imply that the entities, materials, or equipment are necessarily the best available for the \npurpose. Any mention of commercial, non-profit, academic partners, or their products, or references is \nfor information only; it is not intended to imply endorsement or recommendation by any U.S. \nGovernment agency.", "8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining, and operating (running inference on) GAI systems are resource-intensive activities, \nwith potentially large energy and environmental footprints. Energy and carbon emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training, fine-tuning, inference), the modality of the \ncontent, hardware used, and type of task or application. \nCurrent estimates suggest that training a single transformer LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco and New York. In a study comparing energy consumption and carbon \nemissions for LLM inference, generative tasks (e.g., text summarization) were found to be more energy- \nand carbon-intensive than discriminative or non-generative tasks (e.g., text classification). \nMethods for creating smaller versions of trained models, such as model distillation or compression, \ncould reduce environmental impacts at inference time, but training and tuning such models may still \ncontribute to their environmental impacts. Currently there is no agreed upon method to estimate \nenvironmental impacts from GAI. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe \n2.6. Harmful Bias and Homogenization \nBias exists in many forms and can become ingrained in automated systems. AI systems, including GAI \nsystems, can increase the speed and scale at which harmful biases manifest and are acted upon, \npotentially perpetuating and amplifying harms to individuals, groups, communities, organizations, and \nsociety. For example, when prompted to generate images of CEOs, doctors, lawyers, and judges, current \ntext-to-image models underrepresent women and/or racial minorities, and people with disabilities. \nImage generator models have also produced biased or stereotyped output for various demographic \ngroups and have difficulty producing non-stereotyped content even when the prompt specifically \nrequests image features that are inconsistent with the stereotypes. Harmful bias in GAI models, which \nmay stem from their training data, can also cause representational harms or perpetuate or exacerbate \nbias based on race, gender, disability, or other protected classes. \nHarmful bias in GAI systems can also lead to harms via disparities between how a model performs for \ndifferent subgroups or languages (e.g., an LLM may perform less well for non-English languages or \ncertain dialects). Such disparities can contribute to discriminatory decision-making or amplification of \nexisting societal biases. In addition, GAI systems may be inappropriately trusted to perform similarly \nacross all subgroups, which could leave the groups facing underperformance with worse outcomes than \nif no GAI system were used. Disparate or reduced performance for lower-resource languages also \npresents challenges to model adoption, inclusion, and accessibility, and may make preservation of \nendangered languages more difficult if GAI systems become embedded in everyday processes that would \notherwise have been opportunities to use these languages. \nBias is mutually reinforcing with the problem of undesired homogenization, in which GAI systems \nproduce skewed distributions of outputs that are overly uniform (for example, repetitive aesthetic styles"]}]}
task
[ "TEXT_CLASSIFICATION", "SUMMARIZATION" ]
44,278
Helsinki-NLP/opus-mt-sv-war
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sv", "war", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:06:27+00:00
47
0
--- license: apache-2.0 tags: - translation --- ### opus-mt-sv-war * source languages: sv * target languages: war * OPUS readme: [sv-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-war/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.war | 36.7 | 0.576 |
null
Non_BioNLP
### opus-mt-sv-war * source languages: sv * target languages: war * OPUS readme: [sv-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-war/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-war/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.sv.war | 36.7 | 0.576 |
{"license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,279
JustFrederik/jparacrawl-v3-big-ct2-int8
JustFrederik
translation
[ "translation", "ja", "en", "license:unknown", "region:us" ]
2023-05-09T15:58:33Z
2023-05-09T16:30:02+00:00
0
0
--- language: - ja - en license: unknown pipeline_tag: translation --- https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/ <br /> ``` ct2-fairseq-converter --model_path big.pretrain.pt --data_dir . --source_lang en --target_lang ja --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/en-ja ``` ``` ct2-fairseq-converter --model_path ./big/big.pretrain.pt --data_dir ./big --source_lang ja --target_lang en --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/ja-en ```
null
Non_BioNLP
https://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/ <br /> ``` ct2-fairseq-converter --model_path big.pretrain.pt --data_dir . --source_lang en --target_lang ja --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/en-ja ``` ``` ct2-fairseq-converter --model_path ./big/big.pretrain.pt --data_dir ./big --source_lang ja --target_lang en --quantization int8 --output_dir ../converted/jparacrawl-v3-big-ct2-int8/ja-en ```
{"language": ["ja", "en"], "license": "unknown", "pipeline_tag": "translation"}
task
[ "TRANSLATION" ]
44,280
aXhyra/emotion_trained_1234567
aXhyra
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-12-12T13:19:19+00:00
17
0
--- datasets: - tweet_eval license: apache-2.0 metrics: - f1 tags: - generated_from_trainer model-index: - name: emotion_trained_1234567 results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval args: emotion metrics: - type: f1 value: 0.7301562209701973 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_trained_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9051 - F1: 0.7302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234567 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6480 | 0.7231 | | No log | 2.0 | 408 | 0.6114 | 0.7403 | | 0.5045 | 3.0 | 612 | 0.7592 | 0.7311 | | 0.5045 | 4.0 | 816 | 0.9051 | 0.7302 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_trained_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9051 - F1: 0.7302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234567 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6480 | 0.7231 | | No log | 2.0 | 408 | 0.6114 | 0.7403 | | 0.5045 | 3.0 | 612 | 0.7592 | 0.7311 | | 0.5045 | 4.0 | 816 | 0.9051 | 0.7302 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "emotion_trained_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7301562209701973, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,281
alex-atelo/flan-t5-small-finetuned-xlsum-en-accelerate
alex-atelo
summarization
[ "transformers", "safetensors", "t5", "text2text-generation", "summarization", "en", "dataset:csebuetnlp/xlsum", "arxiv:1910.09700", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:wtfpl", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-19T21:20:46Z
2024-03-20T16:19:12+00:00
22
0
--- base_model: google/flan-t5-small datasets: - csebuetnlp/xlsum language: - en license: wtfpl metrics: - rouge tags: - summarization model-index: - name: flan-t5-small-finetuned-xlsum-en-accelerate results: [] --- # Model Card for flan-t5-small-finetuned-xlsum-en-accelerate This model is a fine-tuned version of [flan-t5-small](https://huggingface.co/google/flan-t5-small) on the csebuetnlp/xlsum dataset. A reduced version of the English subset was used, focusing on shorter targets. It achieves the following results on the evaluation set: - rouge1: 29.99 - rouge2: 10.61 - rougeL: 25.52 - rougeLsum: 25.52 <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Model can produce false information when summarizing. This is very much an initial draft, and is not expected for use in production, use at your own risk. [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results | Epoch | rouge1 | rouge2 | rougeL | rougeLsum | |:-----:|:------:|:------:|:------:|:---------:| | 1.0 | 29.38 | 10.31 | 25.0 | 25.0 | | 2.0 | 29.87 | 10.46 | 25.41 | 25.41 | | 3.0 | 29.99 | 10.61 | 25.52 | 25.52 | [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ##### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1 [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @inproceedings{hasan-etal-2021-xl, title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Islam, Md. Saiful and Mubasshir, Kazi and Li, Yuan-Fang and Kang, Yong-Bin and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.413", pages = "4693--4703", } ``` [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for flan-t5-small-finetuned-xlsum-en-accelerate This model is a fine-tuned version of [flan-t5-small](https://huggingface.co/google/flan-t5-small) on the csebuetnlp/xlsum dataset. A reduced version of the English subset was used, focusing on shorter targets. It achieves the following results on the evaluation set: - rouge1: 29.99 - rouge2: 10.61 - rougeL: 25.52 - rougeLsum: 25.52 <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Model can produce false information when summarizing. This is very much an initial draft, and is not expected for use in production, use at your own risk. [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results | Epoch | rouge1 | rouge2 | rougeL | rougeLsum | |:-----:|:------:|:------:|:------:|:---------:| | 1.0 | 29.38 | 10.31 | 25.0 | 25.0 | | 2.0 | 29.87 | 10.46 | 25.41 | 25.41 | | 3.0 | 29.99 | 10.61 | 25.52 | 25.52 | [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ##### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.1 [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @inproceedings{hasan-etal-2021-xl, title = "{XL}-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages", author = "Hasan, Tahmid and Bhattacharjee, Abhik and Islam, Md. Saiful and Mubasshir, Kazi and Li, Yuan-Fang and Kang, Yong-Bin and Rahman, M. Sohel and Shahriyar, Rifat", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.413", pages = "4693--4703", } ``` [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"base_model": "google/flan-t5-small", "datasets": ["csebuetnlp/xlsum"], "language": ["en"], "license": "wtfpl", "metrics": ["rouge"], "tags": ["summarization"], "model-index": [{"name": "flan-t5-small-finetuned-xlsum-en-accelerate", "results": []}]}
task
[ "SUMMARIZATION" ]
44,282
JorgeSarry/classify-setfit
JorgeSarry
text-classification
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-06-21T08:43:01Z
2023-06-21T08:43:54+00:00
13
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # JorgeSarry/classify-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("JorgeSarry/classify-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# JorgeSarry/classify-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("JorgeSarry/classify-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
44,283
apatidar0/marian-finetuned-kde4-en-to-fr
apatidar0
translation
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-06T16:37:37Z
2023-02-06T17:24:40+00:00
112
0
--- datasets: - kde4 license: apache-2.0 tags: - translation - generated_from_trainer model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
{"datasets": ["kde4"], "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": []}]}
task
[ "TRANSLATION" ]
44,284
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2021-06-23T06:27:30+00:00
116
0
--- tags: - summarization widget: - text: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot } --- # CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method. ## Intended uses & limitations The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/go/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
null
Non_BioNLP
# CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method. ## Intended uses & limitations The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/go/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
task
[ "SUMMARIZATION" ]
44,285
gaudi/opus-mt-de-en-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T00:09:05Z
2024-10-18T22:01:45+00:00
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-de-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-de-en --output_dir ./ctranslate2/opus-mt-de-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-de-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-de-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-de-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-de-en).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-de-en --output_dir ./ctranslate2/opus-mt-de-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-de-en-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-de-en-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-de-en-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
44,287
binhphap5/en-vi-machine-translation
binhphap5
null
[ "transformers", "safetensors", "encoder-decoder", "generated_from_trainer", "endpoints_compatible", "region:us" ]
2025-02-25T17:12:27Z
2025-04-04T09:07:08+00:00
88
0
--- library_name: transformers tags: - generated_from_trainer model-index: - name: en-vi-machine-translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # en-vi-machine-translation This is a custom Transformer encoder-decoder model. Training from scratch on iwslt2015-en-vi datasets. It achieves the following results on the evaluation set: - Loss: 4.3761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 350 - eval_batch_size: 350 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.5582 | 1.0 | 381 | 6.3939 | | 6.0664 | 2.0 | 762 | 5.7502 | | 5.6536 | 3.0 | 1143 | 5.4572 | | 5.3981 | 4.0 | 1524 | 5.2329 | | 5.199 | 5.0 | 1905 | 5.0636 | | 5.0443 | 6.0 | 2286 | 4.9307 | | 4.9222 | 7.0 | 2667 | 4.8311 | | 4.8242 | 8.0 | 3048 | 4.7455 | | 4.7445 | 9.0 | 3429 | 4.6765 | | 4.6778 | 10.0 | 3810 | 4.6196 | | 4.6218 | 11.0 | 4191 | 4.5714 | | 4.5751 | 12.0 | 4572 | 4.5287 | | 4.5343 | 13.0 | 4953 | 4.4960 | | 4.5014 | 14.0 | 5334 | 4.4704 | | 4.4739 | 15.0 | 5715 | 4.4467 | | 4.4506 | 16.0 | 6096 | 4.4270 | | 4.4324 | 17.0 | 6477 | 4.4121 | | 4.417 | 18.0 | 6858 | 4.3996 | | 4.4056 | 19.0 | 7239 | 4.3922 | | 4.3967 | 20.0 | 7620 | 4.3843 | | 4.3908 | 21.0 | 8001 | 4.3807 | | 4.3865 | 22.0 | 8382 | 4.3784 | | 4.3844 | 23.0 | 8763 | 4.3766 | | 4.3838 | 24.0 | 9144 | 4.3761 | | 4.3829 | 25.0 | 9525 | 4.3761 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu118 - Datasets 3.0.1 - Tokenizers 0.19.1
null
Non_BioNLP
# en-vi-machine-translation This is a custom Transformer encoder-decoder model. Training from scratch on iwslt2015-en-vi datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: - train_batch_size: - eval_batch_size: - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu118 - Datasets 3.0.1 - Tokenizers 0.19.1
{"library_name": "transformers", "tags": ["generated_from_trainer"], "model-index": [{"name": "en-vi-machine-translation", "results": []}]}
task
[ "TRANSLATION" ]
44,288
Gopal2002/setfit_zeon_3456
Gopal2002
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
2024-01-12T06:53:23Z
2024-01-16T09:57:59+00:00
4
0
--- base_model: BAAI/bge-small-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: <s_cord-v2><s_menu><s_nm> MATERAL REGIP REGORI</s_nm><s_unitprice> RAKE-16</s_unitprice><s_cnt> VR.284 718</s_cnt><s_price> RAKE-16</s_price><sep/><s_nm> Challen Nus</s_nm><s_unitprice> RER/Requip Nc. PMBIRUMABUMAGHOMAN</s_nm><s_unitprice> R&R/Requip DATE 30.AUG-16</s_unitprice><s_cnt> M&R/R/Requip DAIE</s_nm><s_discountprice> PPONIG/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P - text: <s_cord-v2><s_menu><s_nm> BGM INFRACORE</s_nm><s_num> DETAILE OF REGEPENT (GILLA)</s_nm><s_num> DATAX</s_cnt><s_price> CULU04309</s_price><sep/><s_nm> HANDOLCO DOUGTRIES LIMITED WHADOLE NOVOICE NO</s_nm><s_num> DATE</s_nm><s_num> 113/2213</s_price><sep/><s_nm> AYAPUPU HURAKUND POWER</s_nm><s_num> NOVOICE DATE</s_nm><s_num> AYAPYCE</s_nm><s_num> 12/21<s_nm> CIST-SAMGALPUR PO NONE</s_nm><s_num> 33/362<s_etc> 25,736<sep/><s_nm> SATE COGE 21</s_nm><s_num> PFRRIOD O'SEERVICE:</s_nm><s_num> 22.11.022</s_num><s_price> 0.13.202</s_price><sep/><s_nm> GSTNI</s_nm><s_num> 21AACHM201<sep/><s_nm> WORE:</s_nm><s_num> 22.11.022</s_num><s_price> WAGONE</s_nm><s_num> PAN NO AACHA12014</s_num><s_price> 22.11.022</s_price><sep/><s_nm> SPICYICE DESCIPION</s_nm><s_num> HSH/SMC</s_nm><s_num> JUM RATE</s_nm><s_num> SRL</s_nm><s_num> JUM WOM RATE</s_nm><s_num> JAMUE</s_nm><s_num> JINCOCOGAL<sep/><s_nm> UNCOGAL FROM WAGING USING</s_nm><s_num> SPICYE</s_nm><s_num> DXGPICYE</s_nm><s_num> JUM NATE</s_nm><s_num> JUMINCOE</s_nm><s_num> 78121600013</s_num><s_price> SUSING<sep/><s_nm> UNCOGALING COAL HYA USING</s_nm><s_num> SPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPIC - text: <s_cord-v2><s_menu><s_nm> Iter Goren received</s_nm><s_cnt> 1</s_cnt><s_price> Mocoonde</s_price><sep/><s_nm> bellowed 22 G/12/11</s_nm><s_cnt> 1</s_cnt><s_discountprice> -o-</s_discountprice><s_price> ---</s_price><sep/><s_nm> leneen 18</s_nm><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> d.Touble 0&</s_nm><s_unitprice> @2x</s_cnt><s_discountprice> -o-</s_discountprice><s_price> ---</s_price><sep/><s_nm> Milonea 19</s_nm><s_unitprice> 19</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> Itellatellat</s_nm><s_unitprice> 22/21</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> WELLOW 02/</s_nm><s_unitprice> @2x</s_cnt><s_price> ---</s_price><sep/><s_nm> Milo cover n</s_nm><s_unitprice> @Mik</s_nm><s_unitprice> @Mik</s_nm><s_unitprice> @Mik</s_cnt><s_price> ---</s_price><sep/><s_nm> V.B.Telle</s_nm><s_unitprice> @11</s_unitprice><s_cnt> 4</s_cnt><s_price> ---</s_price><sep/><s_nm> Beneh 1/2</s_nm><s_unitprice> @)</s_nm><s_unitprice> @11</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> - text: <s_cord-v2><s_menu><s_nm> MATERIAL REGGIPT REPORT</s_nm><s_cnt> 6</s_cnt><s_price> WRB SUBMITTED</s_price><sep/><s_nm> WINTHARTY'S INVICE</s_nm><sep/><s_nm> Java ICE Ndg</s_nm><sep/><s_nm> INVALICE Dentie</s_nm><s_cnt> 3</s_cnt><sep/><s_nm> MRB/Acecipict WESKIP/SHR/SHR/LIB/CHAMBMB<sep/><s_nm> Commi<sep/><s_nm> N Nb = PO N. P/PO/SHR/A/G/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/G/G/A/G/G/E/G/A/A/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M - text: <s_cord-v2><s_menu><s_nm> RT @kanaka<s_nm> RT @kanaka</s_total> inference: true model-index: - name: SetFit with BAAI/bge-small-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 1.0 name: Accuracy --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5 | <ul><li>'<s_cord-v2><s_menu><s_nm> Original for Buyer/ Duplicate for Temponte/ Teplicate for Assesce<s_nm> OMCL</s_nm><s_num> 93103986</s_num><s_unitprice> 76000</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> Original Box buyer/ Duplicate for Temponte/ Teisplicate Box Assesce<s_nm> MCL</s_nm><s_num> 9310301</s_num><s_unitprice> @099</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> Original Box Buyer/ Duplicate for Taquequeque/ Tcquil<sep/><s_nm> OMCL</s_nm><s_unitprice> @MCL</s_nm><s_unitprice> @MCL</s_unitprice><s_cnt> 1</s_cnt><s_price> @020</s_price><sep/><s_nm> SUPPOLIZE Vitter. Musc</s_nm><s_unitprice> @MACHIZED WHITED</s_nm><s_unitprice> @MICEDICED WHITED Claysucci Cryccucee</s_nm><s_unitprice> @MADEMI SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDENTERTAINCE SUSHIZEDICEDGE CLIZEDGE SUSHIZEDGEECECECECECECECE SUSHIZEDMIZEDENTERTAINCE SUSHIZEDICEDICEDGEECECECECECE SUSHIZEDENTERTAINCEE Cry SUSHIZEDGEECECECECECECECECE SUSHIZEDSUMIZEDENTERTAINCE SUSHIZEGGYECECECECECECECECEE SUSHIZEDSUMIZED SUSHIZEGGYECECECECECECECECECEECECE SUSHIZEDSUMIZED SUSHIZEDSUMIZED SUSHIZEGGYECECECECECECECECECEECECE SUSHIZEDSUMIZED SUSHIZEDSUKECECECECECECEECE SUSHIZEDSUKECECECECECEECECE SUSHIZEDSUKECECECECECEECECE SUSHIZEDSUKECECECECEECECECECEECECECECECECECEECECECECECECECECECEECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECE'</li></ul> | | 6 | <ul><li>'<s_cord-v2><s_menu><s_nm> GARDANG CHOCO POWE IHADOUS</s_nm><sep/><s_nm> GENERAL TERM & CHOCOLIONSE</s_nm><sep/><s_nm> LIGHTEGAL TERS A CHOCOLIONSE</s_nm><sep/><s_nm> LIGHTICE</s_nm></s_sub><sep/><s_nm> ICE LIGHTING CHOCOLIZEDAYOSE</s_nm><s_cnt> 3</s_cnt><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm></s_sub><sep/><s_nm> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_cnt> 3</s_cnt><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKAIDEE'</li><li>"<s_cord-v2><s_menu><s_nm> SUSHI SALMONG</s_nm><s_cnt> 1</s_cnt><s_price> SUSHI SALMONG</s_nm><s_cnt> 1</s_cnt><s_price> SUSHI SALMONG</s_nm></s_sub></s_menu><s_sub_total><s_subtotal_price> SUSHI SALMONG</s_subtotal_price><s_tax_price> 11+727</s_tax_price></s_sub_total><s_total><s_total_price> Dolla's</s_total_price></s_total>"</li><li>'<s_cord-v2><s_menu><s_nm> WESEC</s_nm><s_price> 63644,836</s_price></s_menu><s_sub_total><s_subtotal_price> 636</s_subtotal_price></s_sub_total><s_total><s_total_price> 636</s_total_price><s_cashprice> 636-modern</s_cashprice></s_total>'</li></ul> | | 4 | <ul><li>'<s_cord-v2><s_menu><s_nm> MATERIAL REPORT</s_nm><s_unitprice> 06/28世纪 -85-18 -To</s_nm><s_unitprice> @0.9세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기'</li><li>"<s_cord-v2><s_menu><s_nm> MATERIAL REGIRI REPORT</s_nm><s_cnt> MRR SUBMITTED</s_nm><s_num> WHTPART'S INVICE</s_nm><sep/><s_nm> MRR/A/Rucciniot Box PHARK/CHECKGRAM/CHE<sep/><s_nm> Nc<sep/><s_nm> N&Eceivet N&Eceivet N&Eceivet N&Eceivet<sep/><s_nm> PO N&E P/PO/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G"</li><li>'<s_cord-v2><s_menu><s_nm> MATERAL REGUT REPORT</s_nm><s_unitprice> PMVRRMM1810/0225</s_unitprice><s_cnt> x</s_cnt><s_price> 416</s_price><sep/><s_nm> WBRR&Gapp Cate &HG+8</s_nm><s_unitprice> Rp. PO PO POWERMMISB19002</s_unitprice><s_cnt> x</s_cnt><s_discountprice> : 1</s_discountprice><s_price> Rp. 0</s_price><sep/><s_nm> Pollow & JUN-16</s_nm><s_unitprice> Rp.Net</s_nm><s_unitprice> Rp. Pollow &Ham</s_nm><s_unitprice> Rp. Pollow &Ham</s_nm></s_sub><sep/><s_nm> Venderce CENTRAL COALIFELDS LIMITED</s_nm><s_unitprice> @Mini W&Gli Nc:</s_nm><s_unitprice> @Minitie NcMelle NcMelle NcMelle NcMelle NcMelle Nklake.M</s_nm><s_unitprice> @Mihe DATE &HACHAQUALIG-16</s_unitprice><s_cnt> 18 x</s_cnt><s_discountprice> -</s_discountprice><s_price> Rate/Unit Value (R).</s_nm><s_unitprice> @MINEx)</s_discountprice><s_price> 53636</s_price><sep/><s_nm> 00003647</s_price><sep/><s_nm> 0066447</s_num><s_unitprice> @030</s_unitprice><s_cnt> 1</s_cnt><s_price> 536.00</s_price><sep/><s_nm> COAL FROM THIRD DART</s_nm><s_unitprice> @MINEG340</s_unitprice><s_cnt> 1</s_cnt><s_price> 1347.200</s_price><sep/><s_nm> cc.Code</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MINEGit</s_nm><s_unitprice> @MITEMITEM</s_nm><s_unitprice> @MITEMITEMITEM</s_nm><s_unitprice> @MITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITE'</li></ul> | | 3 | <ul><li>'<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUD</s_nm></s_sub><sep/><s_nm> PAYMENT ORDER</s_nm><s_cnt> 1</s_cnt><s_price> 395-48</s_price><sep/><s_nm> Pauto Bresanta Kumar Mohany</s_nm><s_unitprice> 636</s_unitprice><s_cnt> 1</s_cnt><s_price> 636</s_price><sep/><s_nm> Emp. N/S.Code No. CachCheque/D./Tramiertie summer Rc. 1.73.00</s_unitprice><s_cnt> 1</s_cnt><s_price> A.P.V.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUD WORKS</s_nm><sep/><s_nm> PAYMENT ORDER</s_nm><s_num> Pauto HOTOL SHEELA TOWERS</s_nm><s_num> SAWABALPUR</s_nm><s_num> VucherNQUA</s_nm><s_num> Nc. H369</s_nm><s_num> JANCE<sep/><s_nm> by Cash/Choque/D./Tangle summer of R&S.84同意 & HZQUAYE<sep/><s_nm> Rupes Eight thousand four hundred/Cory one only</s_nm><s_num> Rupes of Pavoment</s_nm><s_num> Dtalls of Pavoment</s_nm><s_num> Amount</s_nm><s_num> Sxpemie of fooding &lodging charges Cor Companywa quest</s_nm><s_num> Rs. P.</s_nm><s_num> Bll<s_num> Bll<s_num> B.4411</s_num><s_price> 8,4441</s_price></s_menu><s_sub_total><s_discount_price> 8,443</s_discount_price><s_service_price> 8,442</s_discount_price><s_tax_price> (s)B)<sep/><s_nm> Sudah</s_nm><s_num> Allow P.</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> (S)<sep/><s_nm> Stychocolate</s_nm><s_num> Allowed</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> (S)B)<sep/><s_nm> Shoud</s_nm><s_num> Allowed by</s_nm><s_num> Allowed by</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> Stella</s_nm><s_num> Stakud</s_num><s_price> 8,443</s_price><sep/><s_nm> Stakelor</s_nm><s_num> Stella</s_nm><s_num> Stella</s_nm><s_num> Stella</s_num><s_price> 8,443</s_price></s_menu><s_sub_total><s_subtotal_price> 638<sep/><s_nm> Punidah</s_nm><s_num> Ndg. HIGBUG</s_nm><s_num> JPGB-21</s_nm><s_num> C.F.F.Code</s_nm><s_num> Cssssstine</s_nm><s_num> Received Payment</s_nm><s_num> Acount Chechected by</s_nm><s_num> SQUAE</s_nm><s_num> SQUATITE</s_nm><s_num> SQUATITE</s_nm><s_num> SQUATITE</s_menuqty_cnt></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ADITYA BIRLA HINDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUR POWER</s_nm></s_sub><sep/><s_nm> PAYMENT ORDER</s_nm></s_sub><sep/><s_nm> SHRING WU PEAYAG PADMI</s_nm><s_num> SHRING WU PADMI</s_nm><s_num> SHRINGHU PEANG</s_nm><s_num> AQUATADO</s_nm><s_num> PYito -PHOTOPOPICHT STURIO CHOCO</s_nm><s_num> AQUAT</s_nm><s_num> AQUAT<s_etc> 3<sep/><s_nm> Address Bubwang SAMBOUR</s_nm><s_num> AQU</s_nm><s_num> AQUAG</s_num><s_price> NO.07462</s_price></s_menu><s_sub_total><s_subtotal_price> 1314</s_subtotal_price><s_discount_price> -<sep/><s_nm> Emp No. OTIO</s_nm><s_num> JFSC ORBC102746</s_num><s_price> 0.00</s_price><sep/><s_nm> by C<sep/><s_nm> Chequeque D/Trenzie The sun of 200</s_unitprice><s_cnt> 2</s_cnt><s_price> A.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon_3456") # Run inference preds = model("<s_cord-v2><s_menu><s_nm> RT @kanaka<s_nm> RT @kanaka</s_total>") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 2 | 193.4045 | 1530 | | Label | Training Sample Count | |:------|:----------------------| | 3 | 38 | | 4 | 45 | | 5 | 54 | | 6 | 41 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (2, 2) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0014 | 1 | 0.2908 | - | | 0.0677 | 50 | 0.1735 | - | | 0.1353 | 100 | 0.0824 | - | | 0.2030 | 150 | 0.0505 | - | | 0.2706 | 200 | 0.014 | - | | 0.3383 | 250 | 0.0066 | - | | 0.4060 | 300 | 0.0035 | - | | 0.4736 | 350 | 0.0018 | - | | 0.5413 | 400 | 0.0022 | - | | 0.6089 | 450 | 0.0019 | - | | 0.6766 | 500 | 0.0015 | - | | 0.7442 | 550 | 0.0014 | - | | 0.8119 | 600 | 0.0012 | - | | 0.8796 | 650 | 0.0013 | - | | 0.9472 | 700 | 0.0014 | - | | 1.0149 | 750 | 0.0012 | - | | 1.0825 | 800 | 0.001 | - | | 1.1502 | 850 | 0.0012 | - | | 1.2179 | 900 | 0.001 | - | | 1.2855 | 950 | 0.0009 | - | | 1.3532 | 1000 | 0.0009 | - | | 1.4208 | 1050 | 0.0008 | - | | 1.4885 | 1100 | 0.001 | - | | 1.5562 | 1150 | 0.0009 | - | | 1.6238 | 1200 | 0.001 | - | | 1.6915 | 1250 | 0.0009 | - | | 1.7591 | 1300 | 0.0009 | - | | 1.8268 | 1350 | 0.0008 | - | | 1.8945 | 1400 | 0.0007 | - | | 1.9621 | 1450 | 0.0008 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.2 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 5 | <ul><li>'<s_cord-v2><s_menu><s_nm> Original for Buyer/ Duplicate for Temponte/ Teplicate for Assesce<s_nm> OMCL</s_nm><s_num> 93103986</s_num><s_unitprice> 76000</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> Original Box buyer/ Duplicate for Temponte/ Teisplicate Box Assesce<s_nm> MCL</s_nm><s_num> 9310301</s_num><s_unitprice> @099</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> Original Box Buyer/ Duplicate for Taquequeque/ Tcquil<sep/><s_nm> OMCL</s_nm><s_unitprice> @MCL</s_nm><s_unitprice> @MCL</s_unitprice><s_cnt> 1</s_cnt><s_price> @020</s_price><sep/><s_nm> SUPPOLIZE Vitter. Musc</s_nm><s_unitprice> @MACHIZED WHITED</s_nm><s_unitprice> @MICEDICED WHITED Claysucci Cryccucee</s_nm><s_unitprice> @MADEMI SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDMIZED SUSHIZEDENTERTAINCE SUSHIZEDICEDGE CLIZEDGE SUSHIZEDGEECECECECECECECE SUSHIZEDMIZEDENTERTAINCE SUSHIZEDICEDICEDGEECECECECECE SUSHIZEDENTERTAINCEE Cry SUSHIZEDGEECECECECECECECECE SUSHIZEDSUMIZEDENTERTAINCE SUSHIZEGGYECECECECECECECECEE SUSHIZEDSUMIZED SUSHIZEGGYECECECECECECECECECEECECE SUSHIZEDSUMIZED SUSHIZEDSUMIZED SUSHIZEGGYECECECECECECECECECEECECE SUSHIZEDSUMIZED SUSHIZEDSUKECECECECECECEECE SUSHIZEDSUKECECECECECEECECE SUSHIZEDSUKECECECECECEECECE SUSHIZEDSUKECECECECEECECECECEECECECECECECECEECECECECECECECECECEECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECECE'</li></ul> | | 6 | <ul><li>'<s_cord-v2><s_menu><s_nm> GARDANG CHOCO POWE IHADOUS</s_nm><sep/><s_nm> GENERAL TERM & CHOCOLIONSE</s_nm><sep/><s_nm> LIGHTEGAL TERS A CHOCOLIONSE</s_nm><sep/><s_nm> LIGHTICE</s_nm></s_sub><sep/><s_nm> ICE LIGHTING CHOCOLIZEDAYOSE</s_nm><s_cnt> 3</s_cnt><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm></s_sub><sep/><s_nm> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_cnt> 3</s_cnt><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKA LIGHTONE TERS & CHOCOLIZEDAYOSE</s_nm><s_price> SUSHIKAIDEE'</li><li>"<s_cord-v2><s_menu><s_nm> SUSHI SALMONG</s_nm><s_cnt> 1</s_cnt><s_price> SUSHI SALMONG</s_nm><s_cnt> 1</s_cnt><s_price> SUSHI SALMONG</s_nm></s_sub></s_menu><s_sub_total><s_subtotal_price> SUSHI SALMONG</s_subtotal_price><s_tax_price> 11+727</s_tax_price></s_sub_total><s_total><s_total_price> Dolla's</s_total_price></s_total>"</li><li>'<s_cord-v2><s_menu><s_nm> WESEC</s_nm><s_price> 63644,836</s_price></s_menu><s_sub_total><s_subtotal_price> 636</s_subtotal_price></s_sub_total><s_total><s_total_price> 636</s_total_price><s_cashprice> 636-modern</s_cashprice></s_total>'</li></ul> | | 4 | <ul><li>'<s_cord-v2><s_menu><s_nm> MATERIAL REPORT</s_nm><s_unitprice> 06/28世纪 -85-18 -To</s_nm><s_unitprice> @0.9세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기세기'</li><li>"<s_cord-v2><s_menu><s_nm> MATERIAL REGIRI REPORT</s_nm><s_cnt> MRR SUBMITTED</s_nm><s_num> WHTPART'S INVICE</s_nm><sep/><s_nm> MRR/A/Rucciniot Box PHARK/CHECKGRAM/CHE<sep/><s_nm> Nc<sep/><s_nm> N&Eceivet N&Eceivet N&Eceivet N&Eceivet<sep/><s_nm> PO N&E P/PO/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G/G"</li><li>'<s_cord-v2><s_menu><s_nm> MATERAL REGUT REPORT</s_nm><s_unitprice> PMVRRMM1810/0225</s_unitprice><s_cnt> x</s_cnt><s_price> 416</s_price><sep/><s_nm> WBRR&Gapp Cate &HG+8</s_nm><s_unitprice> Rp. PO PO POWERMMISB19002</s_unitprice><s_cnt> x</s_cnt><s_discountprice> : 1</s_discountprice><s_price> Rp. 0</s_price><sep/><s_nm> Pollow & JUN-16</s_nm><s_unitprice> Rp.Net</s_nm><s_unitprice> Rp. Pollow &Ham</s_nm><s_unitprice> Rp. Pollow &Ham</s_nm></s_sub><sep/><s_nm> Venderce CENTRAL COALIFELDS LIMITED</s_nm><s_unitprice> @Mini W&Gli Nc:</s_nm><s_unitprice> @Minitie NcMelle NcMelle NcMelle NcMelle NcMelle Nklake.M</s_nm><s_unitprice> @Mihe DATE &HACHAQUALIG-16</s_unitprice><s_cnt> 18 x</s_cnt><s_discountprice> -</s_discountprice><s_price> Rate/Unit Value (R).</s_nm><s_unitprice> @MINEx)</s_discountprice><s_price> 53636</s_price><sep/><s_nm> 00003647</s_price><sep/><s_nm> 0066447</s_num><s_unitprice> @030</s_unitprice><s_cnt> 1</s_cnt><s_price> 536.00</s_price><sep/><s_nm> COAL FROM THIRD DART</s_nm><s_unitprice> @MINEG340</s_unitprice><s_cnt> 1</s_cnt><s_price> 1347.200</s_price><sep/><s_nm> cc.Code</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MEGAMIC</s_nm><s_unitprice> @MINEGit</s_nm><s_unitprice> @MITEMITEM</s_nm><s_unitprice> @MITEMITEMITEM</s_nm><s_unitprice> @MITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITEMITE'</li></ul> | | 3 | <ul><li>'<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUD</s_nm></s_sub><sep/><s_nm> PAYMENT ORDER</s_nm><s_cnt> 1</s_cnt><s_price> 395-48</s_price><sep/><s_nm> Pauto Bresanta Kumar Mohany</s_nm><s_unitprice> 636</s_unitprice><s_cnt> 1</s_cnt><s_price> 636</s_price><sep/><s_nm> Emp. N/S.Code No. CachCheque/D./Tramiertie summer Rc. 1.73.00</s_unitprice><s_cnt> 1</s_cnt><s_price> A.P.V.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N.N'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUD WORKS</s_nm><sep/><s_nm> PAYMENT ORDER</s_nm><s_num> Pauto HOTOL SHEELA TOWERS</s_nm><s_num> SAWABALPUR</s_nm><s_num> VucherNQUA</s_nm><s_num> Nc. H369</s_nm><s_num> JANCE<sep/><s_nm> by Cash/Choque/D./Tangle summer of R&S.84同意 & HZQUAYE<sep/><s_nm> Rupes Eight thousand four hundred/Cory one only</s_nm><s_num> Rupes of Pavoment</s_nm><s_num> Dtalls of Pavoment</s_nm><s_num> Amount</s_nm><s_num> Sxpemie of fooding &lodging charges Cor Companywa quest</s_nm><s_num> Rs. P.</s_nm><s_num> Bll<s_num> Bll<s_num> B.4411</s_num><s_price> 8,4441</s_price></s_menu><s_sub_total><s_discount_price> 8,443</s_discount_price><s_service_price> 8,442</s_discount_price><s_tax_price> (s)B)<sep/><s_nm> Sudah</s_nm><s_num> Allow P.</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> (S)<sep/><s_nm> Stychocolate</s_nm><s_num> Allowed</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> (S)B)<sep/><s_nm> Shoud</s_nm><s_num> Allowed by</s_nm><s_num> Allowed by</s_nm><s_num> Allowed</s_num><s_price> 8,443</s_price><sep/><s_nm> Stella</s_nm><s_num> Stakud</s_num><s_price> 8,443</s_price><sep/><s_nm> Stakelor</s_nm><s_num> Stella</s_nm><s_num> Stella</s_nm><s_num> Stella</s_num><s_price> 8,443</s_price></s_menu><s_sub_total><s_subtotal_price> 638<sep/><s_nm> Punidah</s_nm><s_num> Ndg. HIGBUG</s_nm><s_num> JPGB-21</s_nm><s_num> C.F.F.Code</s_nm><s_num> Cssssstine</s_nm><s_num> Received Payment</s_nm><s_num> Acount Chechected by</s_nm><s_num> SQUAE</s_nm><s_num> SQUATITE</s_nm><s_num> SQUATITE</s_nm><s_num> SQUATITE</s_menuqty_cnt></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ADITYA BIRLA HINDALCO INDUSTRIES LIMITED</s_nm><s_unitprice> HIRAKUR POWER</s_nm></s_sub><sep/><s_nm> PAYMENT ORDER</s_nm></s_sub><sep/><s_nm> SHRING WU PEAYAG PADMI</s_nm><s_num> SHRING WU PADMI</s_nm><s_num> SHRINGHU PEANG</s_nm><s_num> AQUATADO</s_nm><s_num> PYito -PHOTOPOPICHT STURIO CHOCO</s_nm><s_num> AQUAT</s_nm><s_num> AQUAT<s_etc> 3<sep/><s_nm> Address Bubwang SAMBOUR</s_nm><s_num> AQU</s_nm><s_num> AQUAG</s_num><s_price> NO.07462</s_price></s_menu><s_sub_total><s_subtotal_price> 1314</s_subtotal_price><s_discount_price> -<sep/><s_nm> Emp No. OTIO</s_nm><s_num> JFSC ORBC102746</s_num><s_price> 0.00</s_price><sep/><s_nm> by C<sep/><s_nm> Chequeque D/Trenzie The sun of 200</s_unitprice><s_cnt> 2</s_cnt><s_price> A.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.P.'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon_3456") # Run inference preds = model("<s_cord-v2><s_menu><s_nm> RT @kanaka<s_nm> RT @kanaka</s_total>") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:-----| | Word count | 2 | 193.4045 | 1530 | | Label | Training Sample Count | |:------|:----------------------| | 3 | 38 | | 4 | 45 | | 5 | 54 | | 6 | 41 | ### Training Hyperparameters - batch_size: (32, 32) - num_epochs: (2, 2) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0014 | 1 | 0.2908 | - | | 0.0677 | 50 | 0.1735 | - | | 0.1353 | 100 | 0.0824 | - | | 0.2030 | 150 | 0.0505 | - | | 0.2706 | 200 | 0.014 | - | | 0.3383 | 250 | 0.0066 | - | | 0.4060 | 300 | 0.0035 | - | | 0.4736 | 350 | 0.0018 | - | | 0.5413 | 400 | 0.0022 | - | | 0.6089 | 450 | 0.0019 | - | | 0.6766 | 500 | 0.0015 | - | | 0.7442 | 550 | 0.0014 | - | | 0.8119 | 600 | 0.0012 | - | | 0.8796 | 650 | 0.0013 | - | | 0.9472 | 700 | 0.0014 | - | | 1.0149 | 750 | 0.0012 | - | | 1.0825 | 800 | 0.001 | - | | 1.1502 | 850 | 0.0012 | - | | 1.2179 | 900 | 0.001 | - | | 1.2855 | 950 | 0.0009 | - | | 1.3532 | 1000 | 0.0009 | - | | 1.4208 | 1050 | 0.0008 | - | | 1.4885 | 1100 | 0.001 | - | | 1.5562 | 1150 | 0.0009 | - | | 1.6238 | 1200 | 0.001 | - | | 1.6915 | 1250 | 0.0009 | - | | 1.7591 | 1300 | 0.0009 | - | | 1.8268 | 1350 | 0.0008 | - | | 1.8945 | 1400 | 0.0007 | - | | 1.9621 | 1450 | 0.0008 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.2 - Sentence Transformers: 2.2.2 - Transformers: 4.35.2 - PyTorch: 2.1.0+cu121 - Datasets: 2.16.1 - Tokenizers: 0.15.0 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "BAAI/bge-small-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "<s_cord-v2><s_menu><s_nm> MATERAL REGIP REGORI</s_nm><s_unitprice> RAKE-16</s_unitprice><s_cnt> VR.284 718</s_cnt><s_price> RAKE-16</s_price><sep/><s_nm> Challen Nus</s_nm><s_unitprice> RER/Requip Nc. PMBIRUMABUMAGHOMAN</s_nm><s_unitprice> R&R/Requip DATE 30.AUG-16</s_unitprice><s_cnt> M&R/R/Requip DAIE</s_nm><s_discountprice> PPONIG/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P/P"}, {"text": "<s_cord-v2><s_menu><s_nm> BGM INFRACORE</s_nm><s_num> DETAILE OF REGEPENT (GILLA)</s_nm><s_num> DATAX</s_cnt><s_price> CULU04309</s_price><sep/><s_nm> HANDOLCO DOUGTRIES LIMITED WHADOLE NOVOICE NO</s_nm><s_num> DATE</s_nm><s_num> 113/2213</s_price><sep/><s_nm> AYAPUPU HURAKUND POWER</s_nm><s_num> NOVOICE DATE</s_nm><s_num> AYAPYCE</s_nm><s_num> 12/21<s_nm> CIST-SAMGALPUR PO NONE</s_nm><s_num> 33/362<s_etc> 25,736<sep/><s_nm> SATE COGE 21</s_nm><s_num> PFRRIOD O'SEERVICE:</s_nm><s_num> 22.11.022</s_num><s_price> 0.13.202</s_price><sep/><s_nm> GSTNI</s_nm><s_num> 21AACHM201<sep/><s_nm> WORE:</s_nm><s_num> 22.11.022</s_num><s_price> WAGONE</s_nm><s_num> PAN NO AACHA12014</s_num><s_price> 22.11.022</s_price><sep/><s_nm> SPICYICE DESCIPION</s_nm><s_num> HSH/SMC</s_nm><s_num> JUM RATE</s_nm><s_num> SRL</s_nm><s_num> JUM WOM RATE</s_nm><s_num> JAMUE</s_nm><s_num> JINCOCOGAL<sep/><s_nm> UNCOGAL FROM WAGING USING</s_nm><s_num> SPICYE</s_nm><s_num> DXGPICYE</s_nm><s_num> JUM NATE</s_nm><s_num> JUMINCOE</s_nm><s_num> 78121600013</s_num><s_price> SUSING<sep/><s_nm> UNCOGALING COAL HYA USING</s_nm><s_num> SPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPICYPIC"}, {"text": "<s_cord-v2><s_menu><s_nm> Iter Goren received</s_nm><s_cnt> 1</s_cnt><s_price> Mocoonde</s_price><sep/><s_nm> bellowed 22 G/12/11</s_nm><s_cnt> 1</s_cnt><s_discountprice> -o-</s_discountprice><s_price> ---</s_price><sep/><s_nm> leneen 18</s_nm><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> d.Touble 0&</s_nm><s_unitprice> @2x</s_cnt><s_discountprice> -o-</s_discountprice><s_price> ---</s_price><sep/><s_nm> Milonea 19</s_nm><s_unitprice> 19</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> Itellatellat</s_nm><s_unitprice> 22/21</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm> WELLOW 02/</s_nm><s_unitprice> @2x</s_cnt><s_price> ---</s_price><sep/><s_nm> Milo cover n</s_nm><s_unitprice> @Mik</s_nm><s_unitprice> @Mik</s_nm><s_unitprice> @Mik</s_cnt><s_price> ---</s_price><sep/><s_nm> V.B.Telle</s_nm><s_unitprice> @11</s_unitprice><s_cnt> 4</s_cnt><s_price> ---</s_price><sep/><s_nm> Beneh 1/2</s_nm><s_unitprice> @)</s_nm><s_unitprice> @11</s_unitprice><s_cnt> 1</s_cnt><s_price> ---</s_price><sep/><s_nm>"}, {"text": "<s_cord-v2><s_menu><s_nm> MATERIAL REGGIPT REPORT</s_nm><s_cnt> 6</s_cnt><s_price> WRB SUBMITTED</s_price><sep/><s_nm> WINTHARTY'S INVICE</s_nm><sep/><s_nm> Java ICE Ndg</s_nm><sep/><s_nm> INVALICE Dentie</s_nm><s_cnt> 3</s_cnt><sep/><s_nm> MRB/Acecipict WESKIP/SHR/SHR/LIB/CHAMBMB<sep/><s_nm> Commi<sep/><s_nm> N Nb = PO N. P/PO/SHR/A/G/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/A/A/G/A/A/A/A/A/A/A/A/G/G/A/G/G/E/G/A/A/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M/M"}, {"text": "<s_cord-v2><s_menu><s_nm> RT @kanaka<s_nm> RT @kanaka</s_total>"}], "inference": true, "model-index": [{"name": "SetFit with BAAI/bge-small-en-v1.5", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,289
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task171
Lots-of-LoRAs
null
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "arxiv:2407.00066", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:mit", "region:us" ]
2025-01-01T13:35:31Z
2025-01-01T13:35:36+00:00
0
0
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 language: en library_name: pytorch license: mit --- # Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task171 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task171_spl_translation_en_es - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task171_spl_translation_en_es sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task171 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> LoRA trained on task171_spl_translation_en_es - **Developed by:** bruel - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** LoRA - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/bruel-gabrielsson - **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Lots-of-LoRAs/task171_spl_translation_en_es sourced from https://github.com/allenai/natural-instructions ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** @misc{brüelgabrielsson2024compressserveservingthousands, title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead}, author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon}, year={2024}, eprint={2407.00066}, archivePrefix={arXiv}, primaryClass={cs.DC}, url={https://arxiv.org/abs/2407.00066}, } **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
task
[ "TRANSLATION" ]
44,290
Sunmin-dev/distilbert-base-uncased-finetuned-emotion
Sunmin-dev
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-18T14:26:10Z
2023-08-18T14:37:04+00:00
10
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.925 name: Accuracy - type: f1 value: 0.9250672461061793 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2135 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8263 | 1.0 | 250 | 0.3230 | 0.9015 | 0.8998 | | 0.2505 | 2.0 | 500 | 0.2135 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2135 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8263 | 1.0 | 250 | 0.3230 | 0.9015 | 0.8998 | | 0.2505 | 2.0 | 500 | 0.2135 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.925, "name": "Accuracy"}, {"type": "f1", "value": 0.9250672461061793, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,291
smartcat/T5-GenQ-T-v1
smartcat
text2text-generation
[ "transformers", "safetensors", "t5", "text2text-generation", "e-commerce", "query-generation", "en", "dataset:smartcat/Amazon-2023-GenQ", "base_model:BeIR/query-gen-msmarco-t5-base-v1", "base_model:finetune:BeIR/query-gen-msmarco-t5-base-v1", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2025-02-21T15:06:08Z
2025-03-11T11:35:49+00:00
19
0
--- base_model: - BeIR/query-gen-msmarco-t5-base-v1 datasets: - smartcat/Amazon-2023-GenQ language: - en library_name: transformers license: mit metrics: - rouge pipeline_tag: text2text-generation tags: - e-commerce - query-generation --- # Model Card for T5-GenQ-T-v1 🤖 ✨ 🔍 Generate precise, realistic user-focused search queries from product text 🛒 🚀 📊 ### Model Description - **Model Name:** Fine-Tuned Query-Generation Model - **Model type:** Text-to-Text Transformer - **Finetuned from model:** [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - **Dataset**: [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) - **Primary Use Case**: Generating accurate and relevant search queries from item descriptions - **Repository:** [smartcat-labs/product2query](https://github.com/smartcat-labs/product2query) ### Model variations <table border="1" class="dataframe"> <tr style="text-align: center;"> <th>Model</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-T-v1">T5-GenQ-T-v1</a></b></td> <td>75.2151</td> <td>54.8735</td> <td><b>74.5142</b></td> <td>74.5262</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TD-v1">T5-GenQ-TD-v1</a></b></td> <td>78.2570</td> <td>58.9586</td> <td><b>77.5308</b></td> <td>77.5466</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDE-v1">T5-GenQ-TDE-v1</a></b></td> <td>76.9075</td> <td>57.0980</td> <td><b>76.1464</b></td> <td>76.1502</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDC-v1">T5-GenQ-TDC-v1</a> (best)</b></td> <td>80.0754</td> <td>61.5974</td> <td><b>79.3557</b></td> <td>79.3427</td> </tr> </table> ### Uses This model is designed to improve e-commerce search functionality by generating user-friendly search queries based on product descriptions. It is particularly suited for applications where product descriptions are the primary input, and the goal is to create concise, descriptive queries that align with user search intent. ### Examples of Use: <li>Generating search queries for product indexing.</li> <li>Enhancing product discoverability in e-commerce search engines.</li> <li>Automating query generation for catalog management.</li> ### Comparison of ROUGE scores: <table border="1"> <thead> <tr> <th>Model</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> </thead> <tbody> <tr> <td>T5-GenQ-T-v1</td> <td>73.11</td> <td>52.27</td> <td>72.51</td> <td>72.51</td> </tr> <tr> <td>query-gen-msmarco-t5-base-v1</td> <td>40.34</td> <td>19.52</td> <td>39.21</td> <td>39.21</td> </tr> </tbody> </table> **Note:** This evaluation is done after training, based on the test split of the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ/viewer/default/test?views%5B%5D=test) dataset. ### Examples <details><summary>Expand to see table with examples</summary> <table border="1" text-align: center> <thead> <tr> <th style="width: 25%;" >Input Text</th> <th style="width: 25%;">Target Query</th> <th>Before Fine-tuning</th> <th>After Fine-tuning</th> </tr> </thead> <tbody> <tr> <td>PANDORA Jewelry Crossover Pave Triple Band Ring for Women - Sterling Silver with Cubic Zirconia</td> <td>PANDORA Crossover Triple Band Ring</td> <td>what is pandora jewelry</td> <td>Pandora crossover ring</td> </tr> <tr> <td>SAYOYO Baby Sneakers Leather Baby Shoes Crib Shoes Toddler Soft Sole Sneakers</td> <td>SAYOYO Baby Sneakers</td> <td>what kind of shoes are baby sneakers</td> <td>baby leather sneakers</td> </tr> <tr> <td>5 PCS Strap Replacement Compatible with Xiaomi Mi Band 3/4, Bands Xiaomi Mi Band 4 Smart Watch Wristbands Replacement Accessories Strap Bracelets for Mi Fit 3 Straps</td> <td>Replacement Straps for Xiaomi Mi Band 3/4p</td> <td>what is the strap on a xiaomi smartwatch</td> <td>Xiaomi Mi Fit 3 replacement bands</td> </tr> <tr> <td>Backpacker Ladies' Solid Flannel Shirt</td> <td>ladies flannel shirt</td> <td>what kind of shirt is a backpacker</td> <td>women's flannel shirt</td> </tr> </tbody> </table> </details> ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("smartcat/T5-GenQ-T-v1") tokenizer = AutoTokenizer.from_pretrained("smartcat/T5-GenQ-T-v1") description = "Silver-colored cuff with embossed braid pattern. Made of brass, flexible to fit wrist." inputs = tokenizer(description, return_tensors="pt", padding=True, truncation=True) generated_ids = model.generate(inputs["input_ids"], max_length=30, num_beams=4, early_stopping=True) generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) ``` ## Training Details ### Training Data The model was trained on the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) dataset, which consists of user-like queries generated from product descriptions. The dataset was created using Claude Haiku 3, incorporating key product attributes such as the title, description, and images to ensure relevant and realistic queries. For more information, read the Dataset Card. 😊 ### Preprocessing - Trained on only the product titles - Tokenized using T5’s default tokenizer with truncation to handle long text. ### Training Hyperparameters <ul> <li><strong>max_input_length:</strong> 512</li> <li><strong>max_target_length:</strong> 30</li> <li><strong>batch_size:</strong> 48</li> <li><strong>num_train_epochs:</strong> 8</li> <li><strong>evaluation_strategy:</strong> epoch</li> <li><strong>save_strategy:</strong> epoch</li> <li><strong>learning_rate:</strong> 5.6e-05</li> <li><strong>weight_decay:</strong> 0.01 </li> <li><strong>predict_with_generate:</strong> true</li> <li><strong>load_best_model_at_end:</strong> true</li> <li><strong>metric_for_best_model:</strong> eval_rougeL</li> <li><strong>greater_is_better:</strong> true</li> <li><strong>logging_strategy:</strong> epoch</li> </ul> ### Train time: 2.43 hrs ### Hardware A6000 GPU: - Memory Size: 48 GB - Memory Type: GDDR6 - CUDA: 8.6 ### Metrics **[ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric))**, or **R**ecall-**O**riented **U**nderstudy for **G**isting **E**valuation, is a set of metrics used for evaluating automatic summarization and machine translation in NLP. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation. ROUGE metrics range between 0 and 1, with higher scores indicating higher similarity between the automatically produced summary and the reference. In our evaluation, ROUGE scores are scaled to resemble percentages for better interpretability. The metric used in the training was ROUGE-L. <table> <thead> <tr> <th>Epoch</th> <th>Step</th> <th>Loss</th> <th>Grad Norm</th> <th>Learning Rate</th> <th>Eval Loss</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> </thead> <tbody> <tr><td>1.0</td><td>4285</td><td>0.9465</td><td>6.7834</td><td>4.9e-05</td><td>0.7644</td><td>73.1872</td><td>52.2019</td><td>72.5199</td><td>72.5183</td></tr> <tr><td>2.0</td><td>8570</td><td>0.8076</td><td>4.9071</td><td>4.2e-05</td><td>0.7268</td><td>73.9182</td><td>53.1365</td><td>73.2551</td><td>73.2570</td></tr> <tr><td>3.0</td><td>12855</td><td>0.7485</td><td>4.4814</td><td>3.5e-05</td><td>0.7160</td><td>74.4752</td><td>53.8076</td><td>73.7712</td><td>73.7792</td></tr> <tr><td>4.0</td><td>17140</td><td>0.7082</td><td>5.3145</td><td>2.8e-05</td><td>0.7023</td><td>74.7628</td><td>54.3316</td><td>74.0811</td><td>74.0790</td></tr> <tr><td>5.0</td><td>21425</td><td>0.6788</td><td>4.4266</td><td>2.1e-05</td><td>0.7013</td><td>74.9437</td><td>54.5630</td><td>74.2637</td><td>74.2668</td></tr> <tr><td>6.0</td><td>25710</td><td>0.6561</td><td>5.2897</td><td>1.4e-05</td><td>0.6998</td><td>75.0834</td><td>54.7163</td><td>74.3907</td><td>74.3977</td></tr> <tr><td>7.0</td><td>29995</td><td>0.6396</td><td>3.5197</td><td>7.0e-06</td><td>0.7005</td><td>75.2151</td><td>54.8735</td><td>74.5142</td><td>74.5262</td></tr> <tr><td>8.0</td><td>34280</td><td>0.6278</td><td>4.4625</td><td>0.0</td><td>0.7016</td><td>75.1899</td><td>54.8423</td><td>74.4695</td><td>74.4801</td></tr> </tbody> </table> <style> .model-analysis table { width: 100%; border-collapse: collapse; } .model-analysis td { padding: 10px; vertical-align: middle; } .model-analysis img { width: auto; /* Maintain aspect ratio */ display: block; margin: 0 auto; max-height: 750px; /* Default height for most images */ } </style> <div class="model-analysis"> ### Model Analysis <details><summary>Average scores by model</summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/average_scores_by_model.png" alt="image"></td> <td>The checkpoint-29995 (T5-GenQ-T-v1) model outperforms query-gen-msmarco-t5-base-v1 across all ROUGE metrics. The largest performance gap is in ROUGE2, where checkpoint-29995 achieves 52.27, whereas query-gen-msmarco-t5-base-v1 scores 19.52. ROUGE1, ROUGEL, and ROUGELSUM scores are very similar in both trends, with checkpoint-29995 consistently scoring above 72, while query-gen-msmarco-t5-base-v1 stays below 41.</td></tr> </table> </details> <details><summary>Density comparison </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/density_comparison.png" alt="image"></td> <td> ```T5-GenQ-T-v1``` - Higher concentration of high ROUGE scores, especially near 100%, indicating strong text overlap with references. ```query-gen-msmarco-t5-base-v1``` – more spread-out distribution, with multiple peaks at 10-40%, suggesting greater variability but lower precision. ROUGE-1 & ROUGE-L: ```T5-GenQ-T-v1``` peaks at 100%, while ```query-gen-msmarco-t5-base-v1``` has lower, broader peaks. ROUGE-2: ```query-gen-msmarco-t5-base-v1``` has a high density at 0%, indicating many low-overlap outputs. </td></tr> </table> </details> <details><summary>Histogram comparison </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/histogram_comparison.png" alt="image"></td> <td> ```T5-GenQ-T-v1``` – higher concentration of high ROUGE scores, especially near 100%, indicating strong text overlap with references. ```query-gen-msmarco-t5-base-v1``` – more spread-out distribution, with peaks in the 10-40% range, suggesting greater variability but lower precision. ROUGE-1 & ROUGE-L: ```T5-GenQ-T-v1``` shows a rising trend towards higher scores, while ```query-gen-msmarco-t5-base-v1``` has multiple peaks at lower scores. ROUGE-2: ```query-gen-msmarco-t5-base-v1``` has a high concentration of low-score outputs, whereas ```T5-GenQ-T-v1``` achieves more high-scoring outputs.</td></tr> </table> </details> <details><summary>Scores by generated query length </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/group_sizes.png" alt="image"></td> <td> This visualization analyzes average ROUGE scores and score differences across different query sizes. High ROUGE Scores for Most Sizes (3-9 words). ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-LSUM scores remain consistently high across most word sizes. Sharp Spike at Size 2: A large positive score difference at 2 words, suggesting strong alignment for very short phrases. Stable Score Differences (Sizes 3-9): After the initial spike at size 2, score differences stay close to zero, indicating consistent performance across phrase lengths.</td></tr> </table> </details> <details><summary>Semantic similarity distribution </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/semantic_similarity_distribution.png" alt="image"></td> <td> This histogram visualizes the distribution of cosine similarity scores, which measure the semantic similarity between paired texts. The majority of similarity scores cluster near 1.0, indicating that most text pairs are highly similar. A gradual increase in frequency is observed as similarity scores rise, with a sharp peak at 1.0. Lower similarity scores (0.0–0.4) are rare, suggesting fewer instances of dissimilar text pairs. </td></tr> </table> </details> <details><summary>Semantic similarity score against ROUGE scores </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/similarity_vs_rouge.png" alt="image"></td> <td> This scatter plot matrix compares semantic similarity (cosine similarity) with ROUGE scores, showing their correlation. Higher similarity → Higher ROUGE scores, indicating strong n-gram overlap in semantically similar texts. ROUGE-1 & ROUGE-L show the strongest correlation, while ROUGE-2 has more variability. Low-similarity outliers exist, where texts share words but differ semantically. </td></tr> </table> </details> </details> </div> ## More Information - Please visit the [GitHub Repository](https://github.com/smartcat-labs/product2query) ## Authors - Mentor: [Milutin Studen](https://www.linkedin.com/in/milutin-studen/) - Engineers: [Petar Surla](https://www.linkedin.com/in/petar-surla-6448b6269/), [Andjela Radojevic](https://www.linkedin.com/in/an%C4%91ela-radojevi%C4%87-936197196/) ## Model Card Contact For questions, please open an issue on the [GitHub Repository](https://github.com/smartcat-labs/product2query)
null
Non_BioNLP
# Model Card for T5-GenQ-T-v1 🤖 ✨ 🔍 Generate precise, realistic user-focused search queries from product text 🛒 🚀 📊 ### Model Description - **Model Name:** Fine-Tuned Query-Generation Model - **Model type:** Text-to-Text Transformer - **Finetuned from model:** [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1) - **Dataset**: [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) - **Primary Use Case**: Generating accurate and relevant search queries from item descriptions - **Repository:** [smartcat-labs/product2query](https://github.com/smartcat-labs/product2query) ### Model variations <table border="1" class="dataframe"> <tr style="text-align: center;"> <th>Model</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-T-v1">T5-GenQ-T-v1</a></b></td> <td>75.2151</td> <td>54.8735</td> <td><b>74.5142</b></td> <td>74.5262</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TD-v1">T5-GenQ-TD-v1</a></b></td> <td>78.2570</td> <td>58.9586</td> <td><b>77.5308</b></td> <td>77.5466</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDE-v1">T5-GenQ-TDE-v1</a></b></td> <td>76.9075</td> <td>57.0980</td> <td><b>76.1464</b></td> <td>76.1502</td> </tr> <tr> <td><b><a href="https://huggingface.co/smartcat/T5-GenQ-TDC-v1">T5-GenQ-TDC-v1</a> (best)</b></td> <td>80.0754</td> <td>61.5974</td> <td><b>79.3557</b></td> <td>79.3427</td> </tr> </table> ### Uses This model is designed to improve e-commerce search functionality by generating user-friendly search queries based on product descriptions. It is particularly suited for applications where product descriptions are the primary input, and the goal is to create concise, descriptive queries that align with user search intent. ### Examples of Use: <li>Generating search queries for product indexing.</li> <li>Enhancing product discoverability in e-commerce search engines.</li> <li>Automating query generation for catalog management.</li> ### Comparison of ROUGE scores: <table border="1"> <thead> <tr> <th>Model</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> </thead> <tbody> <tr> <td>T5-GenQ-T-v1</td> <td>73.11</td> <td>52.27</td> <td>72.51</td> <td>72.51</td> </tr> <tr> <td>query-gen-msmarco-t5-base-v1</td> <td>40.34</td> <td>19.52</td> <td>39.21</td> <td>39.21</td> </tr> </tbody> </table> **Note:** This evaluation is done after training, based on the test split of the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ/viewer/default/test?views%5B%5D=test) dataset. ### Examples <details><summary>Expand to see table with examples</summary> <table border="1" text-align: center> <thead> <tr> <th style="width: 25%;" >Input Text</th> <th style="width: 25%;">Target Query</th> <th>Before Fine-tuning</th> <th>After Fine-tuning</th> </tr> </thead> <tbody> <tr> <td>PANDORA Jewelry Crossover Pave Triple Band Ring for Women - Sterling Silver with Cubic Zirconia</td> <td>PANDORA Crossover Triple Band Ring</td> <td>what is pandora jewelry</td> <td>Pandora crossover ring</td> </tr> <tr> <td>SAYOYO Baby Sneakers Leather Baby Shoes Crib Shoes Toddler Soft Sole Sneakers</td> <td>SAYOYO Baby Sneakers</td> <td>what kind of shoes are baby sneakers</td> <td>baby leather sneakers</td> </tr> <tr> <td>5 PCS Strap Replacement Compatible with Xiaomi Mi Band 3/4, Bands Xiaomi Mi Band 4 Smart Watch Wristbands Replacement Accessories Strap Bracelets for Mi Fit 3 Straps</td> <td>Replacement Straps for Xiaomi Mi Band 3/4p</td> <td>what is the strap on a xiaomi smartwatch</td> <td>Xiaomi Mi Fit 3 replacement bands</td> </tr> <tr> <td>Backpacker Ladies' Solid Flannel Shirt</td> <td>ladies flannel shirt</td> <td>what kind of shirt is a backpacker</td> <td>women's flannel shirt</td> </tr> </tbody> </table> </details> ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("smartcat/T5-GenQ-T-v1") tokenizer = AutoTokenizer.from_pretrained("smartcat/T5-GenQ-T-v1") description = "Silver-colored cuff with embossed braid pattern. Made of brass, flexible to fit wrist." inputs = tokenizer(description, return_tensors="pt", padding=True, truncation=True) generated_ids = model.generate(inputs["input_ids"], max_length=30, num_beams=4, early_stopping=True) generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) ``` ## Training Details ### Training Data The model was trained on the [smartcat/Amazon-2023-GenQ](https://huggingface.co/datasets/smartcat/Amazon-2023-GenQ) dataset, which consists of user-like queries generated from product descriptions. The dataset was created using Claude Haiku 3, incorporating key product attributes such as the title, description, and images to ensure relevant and realistic queries. For more information, read the Dataset Card. 😊 ### Preprocessing - Trained on only the product titles - Tokenized using T5’s default tokenizer with truncation to handle long text. ### Training Hyperparameters <ul> <li><strong>max_input_length:</strong> 512</li> <li><strong>max_target_length:</strong> 30</li> <li><strong>batch_size:</strong> 48</li> <li><strong>num_train_epochs:</strong> 8</li> <li><strong>evaluation_strategy:</strong> epoch</li> <li><strong>save_strategy:</strong> epoch</li> <li><strong>learning_rate:</strong> 5.6e-05</li> <li><strong>weight_decay:</strong> 0.01 </li> <li><strong>predict_with_generate:</strong> true</li> <li><strong>load_best_model_at_end:</strong> true</li> <li><strong>metric_for_best_model:</strong> eval_rougeL</li> <li><strong>greater_is_better:</strong> true</li> <li><strong>logging_strategy:</strong> epoch</li> </ul> ### Train time: 2.43 hrs ### Hardware A6000 GPU: - Memory Size: 48 GB - Memory Type: GDDR6 - CUDA: 8.6 ### Metrics **[ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric))**, or **R**ecall-**O**riented **U**nderstudy for **G**isting **E**valuation, is a set of metrics used for evaluating automatic summarization and machine translation in NLP. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation. ROUGE metrics range between 0 and 1, with higher scores indicating higher similarity between the automatically produced summary and the reference. In our evaluation, ROUGE scores are scaled to resemble percentages for better interpretability. The metric used in the training was ROUGE-L. <table> <thead> <tr> <th>Epoch</th> <th>Step</th> <th>Loss</th> <th>Grad Norm</th> <th>Learning Rate</th> <th>Eval Loss</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>ROUGE-Lsum</th> </tr> </thead> <tbody> <tr><td>1.0</td><td>4285</td><td>0.9465</td><td>6.7834</td><td>4.9e-05</td><td>0.7644</td><td>73.1872</td><td>52.2019</td><td>72.5199</td><td>72.5183</td></tr> <tr><td>2.0</td><td>8570</td><td>0.8076</td><td>4.9071</td><td>4.2e-05</td><td>0.7268</td><td>73.9182</td><td>53.1365</td><td>73.2551</td><td>73.2570</td></tr> <tr><td>3.0</td><td>12855</td><td>0.7485</td><td>4.4814</td><td>3.5e-05</td><td>0.7160</td><td>74.4752</td><td>53.8076</td><td>73.7712</td><td>73.7792</td></tr> <tr><td>4.0</td><td>17140</td><td>0.7082</td><td>5.3145</td><td>2.8e-05</td><td>0.7023</td><td>74.7628</td><td>54.3316</td><td>74.0811</td><td>74.0790</td></tr> <tr><td>5.0</td><td>21425</td><td>0.6788</td><td>4.4266</td><td>2.1e-05</td><td>0.7013</td><td>74.9437</td><td>54.5630</td><td>74.2637</td><td>74.2668</td></tr> <tr><td>6.0</td><td>25710</td><td>0.6561</td><td>5.2897</td><td>1.4e-05</td><td>0.6998</td><td>75.0834</td><td>54.7163</td><td>74.3907</td><td>74.3977</td></tr> <tr><td>7.0</td><td>29995</td><td>0.6396</td><td>3.5197</td><td>7.0e-06</td><td>0.7005</td><td>75.2151</td><td>54.8735</td><td>74.5142</td><td>74.5262</td></tr> <tr><td>8.0</td><td>34280</td><td>0.6278</td><td>4.4625</td><td>0.0</td><td>0.7016</td><td>75.1899</td><td>54.8423</td><td>74.4695</td><td>74.4801</td></tr> </tbody> </table> <style> .model-analysis table { width: 100%; border-collapse: collapse; } .model-analysis td { padding: 10px; vertical-align: middle; } .model-analysis img { width: auto; /* Maintain aspect ratio */ display: block; margin: 0 auto; max-height: 750px; /* Default height for most images */ } </style> <div class="model-analysis"> ### Model Analysis <details><summary>Average scores by model</summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/average_scores_by_model.png" alt="image"></td> <td>The checkpoint-29995 (T5-GenQ-T-v1) model outperforms query-gen-msmarco-t5-base-v1 across all ROUGE metrics. The largest performance gap is in ROUGE2, where checkpoint-29995 achieves 52.27, whereas query-gen-msmarco-t5-base-v1 scores 19.52. ROUGE1, ROUGEL, and ROUGELSUM scores are very similar in both trends, with checkpoint-29995 consistently scoring above 72, while query-gen-msmarco-t5-base-v1 stays below 41.</td></tr> </table> </details> <details><summary>Density comparison </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/density_comparison.png" alt="image"></td> <td> ```T5-GenQ-T-v1``` - Higher concentration of high ROUGE scores, especially near 100%, indicating strong text overlap with references. ```query-gen-msmarco-t5-base-v1``` – more spread-out distribution, with multiple peaks at 10-40%, suggesting greater variability but lower precision. ROUGE-1 & ROUGE-L: ```T5-GenQ-T-v1``` peaks at 100%, while ```query-gen-msmarco-t5-base-v1``` has lower, broader peaks. ROUGE-2: ```query-gen-msmarco-t5-base-v1``` has a high density at 0%, indicating many low-overlap outputs. </td></tr> </table> </details> <details><summary>Histogram comparison </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/histogram_comparison.png" alt="image"></td> <td> ```T5-GenQ-T-v1``` – higher concentration of high ROUGE scores, especially near 100%, indicating strong text overlap with references. ```query-gen-msmarco-t5-base-v1``` – more spread-out distribution, with peaks in the 10-40% range, suggesting greater variability but lower precision. ROUGE-1 & ROUGE-L: ```T5-GenQ-T-v1``` shows a rising trend towards higher scores, while ```query-gen-msmarco-t5-base-v1``` has multiple peaks at lower scores. ROUGE-2: ```query-gen-msmarco-t5-base-v1``` has a high concentration of low-score outputs, whereas ```T5-GenQ-T-v1``` achieves more high-scoring outputs.</td></tr> </table> </details> <details><summary>Scores by generated query length </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/group_sizes.png" alt="image"></td> <td> This visualization analyzes average ROUGE scores and score differences across different query sizes. High ROUGE Scores for Most Sizes (3-9 words). ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-LSUM scores remain consistently high across most word sizes. Sharp Spike at Size 2: A large positive score difference at 2 words, suggesting strong alignment for very short phrases. Stable Score Differences (Sizes 3-9): After the initial spike at size 2, score differences stay close to zero, indicating consistent performance across phrase lengths.</td></tr> </table> </details> <details><summary>Semantic similarity distribution </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/semantic_similarity_distribution.png" alt="image"></td> <td> This histogram visualizes the distribution of cosine similarity scores, which measure the semantic similarity between paired texts. The majority of similarity scores cluster near 1.0, indicating that most text pairs are highly similar. A gradual increase in frequency is observed as similarity scores rise, with a sharp peak at 1.0. Lower similarity scores (0.0–0.4) are rare, suggesting fewer instances of dissimilar text pairs. </td></tr> </table> </details> <details><summary>Semantic similarity score against ROUGE scores </summary> <table style="width:100%"><tr> <td style="width:65%"><img src="https://huggingface.co/smartcat/T5-GenQ-T-v1/resolve/main/similarity_vs_rouge.png" alt="image"></td> <td> This scatter plot matrix compares semantic similarity (cosine similarity) with ROUGE scores, showing their correlation. Higher similarity → Higher ROUGE scores, indicating strong n-gram overlap in semantically similar texts. ROUGE-1 & ROUGE-L show the strongest correlation, while ROUGE-2 has more variability. Low-similarity outliers exist, where texts share words but differ semantically. </td></tr> </table> </details> </details> </div> ## More Information - Please visit the [GitHub Repository](https://github.com/smartcat-labs/product2query) ## Authors - Mentor: [Milutin Studen](https://www.linkedin.com/in/milutin-studen/) - Engineers: [Petar Surla](https://www.linkedin.com/in/petar-surla-6448b6269/), [Andjela Radojevic](https://www.linkedin.com/in/an%C4%91ela-radojevi%C4%87-936197196/) ## Model Card Contact For questions, please open an issue on the [GitHub Repository](https://github.com/smartcat-labs/product2query)
{"base_model": ["BeIR/query-gen-msmarco-t5-base-v1"], "datasets": ["smartcat/Amazon-2023-GenQ"], "language": ["en"], "library_name": "transformers", "license": "mit", "metrics": ["rouge"], "pipeline_tag": "text2text-generation", "tags": ["e-commerce", "query-generation"]}
task
[ "SEMANTIC_SIMILARITY", "TRANSLATION", "SUMMARIZATION" ]
44,292
gaudi/opus-mt-en-ho-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-18T14:59:59Z
2024-10-19T00:14:51+00:00
7
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-ho --output_dir ./ctranslate2/opus-mt-en-ho-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-ho-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-ho-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-ho-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho) by Helsinki-NLP.
null
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho) - This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2). - This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil). # What is CTranslate2? [CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models. CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include: - Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper - Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon - Encoder-only models: BERT, DistilBERT, XLM-RoBERTa The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration. # CTranslate2 Benchmarks Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset. The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers. Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. ## CPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 | | Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 | | Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 | | CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 | | CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 | ## GPU Benchmarks for Generic Opus-MT Models | Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU | | :----: | :----: | :----: | :----: | :----: | | Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 | | Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 | | CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 | | CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 | `Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.` **Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br /> **Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho).** ## Internal Benchmarks Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality. # CTranslate2 Installation ```bash pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0 ``` ### ct2-transformers-converter Command Used: ```bash ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-ho --output_dir ./ctranslate2/opus-mt-en-ho-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16 ``` # CTranslate2 Converted Checkpoint Information: **Compatible With:** - [ctranslate2](https://github.com/OpenNMT/CTranslate2) - [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2) **Compute Type:** - `compute_type=int8_float16` for `device="cuda"` - `compute_type=int8` for `device="cpu"` # Sample Code - ctranslate2 #### Clone the repository to the working directory or wherever you wish to store the model artifacts. #### ```bash git clone https://huggingface.co/gaudi/opus-mt-en-ho-ctranslate2 ``` #### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. #### ```python from ctranslate2 import Translator import transformers model_dir = "./opus-mt-en-ho-ctranslate2" # Path to model directory. translator = Translator( model_path=model_dir, device="cuda", # cpu, cuda, or auto. inter_threads=1, # Maximum number of parallel translations. intra_threads=4, # Number of OpenMP threads per translator. compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda. ) tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir) source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX.")) results = translator.translate_batch([source]) target = results[0].hypotheses[0] print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target))) ``` # Sample Code - hf-hub-ctranslate2 **Derived From [michaelfeil](https://huggingface.co/michaelfeil):** ```python from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub from transformers import AutoTokenizer model_name = "gaudi/opus-mt-en-ho-ctranslate2" model = TranslatorCT2fromHfHub( model_name_or_path=model_name, device="cuda", compute_type="int8_float16", tokenizer=AutoTokenizer.from_pretrained(model_name) ) outputs = model.generate( text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"], ) print(outputs) ``` # License and other remarks: License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-ho) by Helsinki-NLP.
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
task
[ "TRANSLATION" ]
44,293
Helsinki-NLP/opus-mt-lv-ru
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T12:00:54+00:00
43
0
--- language: - lv - ru license: apache-2.0 tags: - translation --- ### lav-rus * source group: Latvian * target group: Russian * OPUS readme: [lav-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md) * model: transformer-align * source language(s): lav * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lav.rus | 53.3 | 0.702 | ### System Info: - hf_name: lav-rus - source_languages: lav - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lv', 'ru'] - src_constituents: {'lav'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt - src_alpha3: lav - tgt_alpha3: rus - short_pair: lv-ru - chrF2_score: 0.7020000000000001 - bleu: 53.3 - brevity_penalty: 0.9840000000000001 - ref_len: 1541.0 - src_name: Latvian - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: lv - tgt_alpha2: ru - prefer_old: False - long_pair: lav-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### lav-rus * source group: Latvian * target group: Russian * OPUS readme: [lav-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md) * model: transformer-align * source language(s): lav * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lav.rus | 53.3 | 0.702 | ### System Info: - hf_name: lav-rus - source_languages: lav - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lv', 'ru'] - src_constituents: {'lav'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt - src_alpha3: lav - tgt_alpha3: rus - short_pair: lv-ru - chrF2_score: 0.7020000000000001 - bleu: 53.3 - brevity_penalty: 0.9840000000000001 - ref_len: 1541.0 - src_name: Latvian - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: lv - tgt_alpha2: ru - prefer_old: False - long_pair: lav-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lv", "ru"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,294
sagorsarker/mbert-bengali-tydiqa-qa
sagorsarker
question-answering
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "mbert", "bengali", "bangla", "qa", "bn", "dataset:tydiqa", "license:mit", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2023-03-19T12:45:46+00:00
34
1
--- datasets: - tydiqa language: bn license: mit tags: - mbert - bengali - question-answering - bangla - qa --- # mBERT Bengali Question Answering `mBERT-Bengali-Tydiqa-QA` is a question answering model fine-tuning [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model with [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali datasets. ## Usage You can use [bntransformer](https://github.com/sagorbrur/bntransformer) ### Installation `pip install bntransformer` ### Generate Answer ```py from bntransformer import BanglaQA bnqa = BanglaQA() # you can custom model path or other bengali huggingface model path # default it takes "sagorsarker/mbert-bengali-tydiqa-qa" context = "সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।" question = "মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?" answers = bnqa.find_answer(context, question) print(answers) ``` or ### Transformers QA Pipeline ```py from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "sagorsarker/mbert-bengali-tydiqa-qa" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) qa_input = { 'question': 'মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?', 'context': 'সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।' } result = nlp(qa_input) print(result) ``` ## Training Details - `mBERT-Bengali-Tydiqa-QA` model build using [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) - `mBERT-Bengali-Tydiqa-QA` model trained with [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali datasets. - Tydiqa Bengali data contains **2390 train** data and **113 validation** data - `mBERT-Bengali-Tydiqa-QA` model trained in [kaggle](https://www.kaggle.com/) GPU - `mBERT-Bengali-Tydiqa-QA` model trained total 5 epochs - `mBERT-Bengali-Tydiqa-QA` trained using [transformers/example/question-aswering](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb) notebook with all default settings except pre-trained model and datasets part ## Evaluation Results Here is the training evaluation part ``` Exact Match: 57.52212389380531 F1 Score: 68.66183963529096 ``` ## Authors - Sagor Sarker - [Github](https://github.com/sagorbrur) - [LinkedIn](https://www.linkedin.com/in/sagor-sarker/)
null
Non_BioNLP
# mBERT Bengali Question Answering `mBERT-Bengali-Tydiqa-QA` is a question answering model fine-tuning [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) model with [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali datasets. ## Usage You can use [bntransformer](https://github.com/sagorbrur/bntransformer) ### Installation `pip install bntransformer` ### Generate Answer ```py from bntransformer import BanglaQA bnqa = BanglaQA() # you can custom model path or other bengali huggingface model path # default it takes "sagorsarker/mbert-bengali-tydiqa-qa" context = "সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।" question = "মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?" answers = bnqa.find_answer(context, question) print(answers) ``` or ### Transformers QA Pipeline ```py from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "sagorsarker/mbert-bengali-tydiqa-qa" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) qa_input = { 'question': 'মাস্টারদা সূর্যকুমার সেনের বাবার নাম কী ছিল ?', 'context': 'সূর্য সেন ১৮৯৪ সালের ২২ মার্চ চট্টগ্রামের রাউজান থানার নোয়াপাড়ায় অর্থনৈতিক ভাবে অস্বচ্ছল পরিবারে জন্মগ্রহণ করেন। তাঁর পিতার নাম রাজমনি সেন এবং মাতার নাম শশী বালা সেন। রাজমনি সেনের দুই ছেলে আর চার মেয়ে। সূর্য সেন তাঁদের পরিবারের চতুর্থ সন্তান। দুই ছেলের নাম সূর্য ও কমল। চার মেয়ের নাম বরদাসুন্দরী, সাবিত্রী, ভানুমতী ও প্রমিলা। শৈশবে পিতা মাতাকে হারানো সূর্য সেন কাকা গৌরমনি সেনের কাছে মানুষ হয়েছেন। সূর্য সেন ছেলেবেলা থেকেই খুব মনোযোগী ভাল ছাত্র ছিলেন এবং ধর্মভাবাপন্ন গম্ভীর প্রকৃতির ছিলেন।' } result = nlp(qa_input) print(result) ``` ## Training Details - `mBERT-Bengali-Tydiqa-QA` model build using [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) - `mBERT-Bengali-Tydiqa-QA` model trained with [tydiqa](https://github.com/google-research-datasets/tydiqa) Bengali datasets. - Tydiqa Bengali data contains **2390 train** data and **113 validation** data - `mBERT-Bengali-Tydiqa-QA` model trained in [kaggle](https://www.kaggle.com/) GPU - `mBERT-Bengali-Tydiqa-QA` model trained total 5 epochs - `mBERT-Bengali-Tydiqa-QA` trained using [transformers/example/question-aswering](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb) notebook with all default settings except pre-trained model and datasets part ## Evaluation Results Here is the training evaluation part ``` Exact Match: 57.52212389380531 F1 Score: 68.66183963529096 ``` ## Authors - Sagor Sarker - [Github](https://github.com/sagorbrur) - [LinkedIn](https://www.linkedin.com/in/sagor-sarker/)
{"datasets": ["tydiqa"], "language": "bn", "license": "mit", "tags": ["mbert", "bengali", "question-answering", "bangla", "qa"]}
task
[ "QUESTION_ANSWERING" ]
44,295
RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
2024-06-24T22:51:58Z
2024-06-24T23:08:00+00:00
55
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-orca-v1.0 - GGUF - Model creator: https://huggingface.co/sreeramajay/ - Original model: https://huggingface.co/sreeramajay/TinyLlama-1.1B-orca-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-orca-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-orca-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-orca-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-orca-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-orca-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-orca-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-orca-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-orca-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-orca-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - Intel/orca_dpo_pairs language: - en --- Applied DPO to TinyLlama-1.1B-Chat-v1.0 using orca_dpo_pairs dataset This is only experimental Model created by following instruction from the nice Blog [Fine-tune a Mistral-7b model with Direct Preference Optimization ](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) You can run this model using the following code: ```python # Format prompt message = [ {"role": "system", "content": "You are a helpful assistant chatbot."}, {"role": "user", "content": "What is a Large Language Model?"} ] tokenizer = AutoTokenizer.from_pretrained(new_model) prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) # Create pipeline pipeline = transformers.pipeline( "text-generation", model=new_model, tokenizer=tokenizer ) # Generate text sequences = pipeline( prompt, do_sample=True, temperature=0.7, top_p=0.9, num_return_sequences=1, max_length=200, ) print(sequences[0]['generated_text']) # <|system|> # You are a helpful assistant chatbot.</s> # <|user|> # What is a Large Language Model?</s> # <|assistant|> # A Large Language Model (LLM) is a type of deep learning model that processes large amounts of text or data to improve the accuracy of natural language processing tasks such as sentiment analysis, machine translation, and question answering. LLMs are trained using large datasets, which allow them to generalize better and have better performance compared to traditional machine learning models. They are capable of handling vast amounts of text and can learn complex relationships between words, phrases, and sentences, making them an essential tool for natural language processing. ``` Results on GPT4ALL benchmark: | Tasks | Metric |Value | |Stderr| |-------------|--------|-----:|---|-----:| |arc_challenge|acc |0.3003|± |0.0134| | |acc_norm|0.3276|± |0.0137| |arc_easy |acc |0.6115|± |0.0100| | |acc_norm|0.5354|± |0.0102| |boolq |acc |0.6147|± |0.0085| |hellaswag |acc |0.4633|± |0.0050| | |acc_norm|0.6033|± |0.0049| |openbookqa |acc |0.2480|± |0.0193| | |acc_norm|0.3720|± |0.0216| |piqa |acc |0.7470|± |0.0101| | |acc_norm|0.7470|± |0.0101| |winogrande |acc |0.6054|± |0.0137|
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-orca-v1.0 - GGUF - Model creator: https://huggingface.co/sreeramajay/ - Original model: https://huggingface.co/sreeramajay/TinyLlama-1.1B-orca-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-orca-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-orca-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-orca-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-orca-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-orca-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-orca-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyLlama-1.1B-orca-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-orca-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-orca-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama-1.1B-orca-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-orca-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-orca-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-orca-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/sreeramajay_-_TinyLlama-1.1B-orca-v1.0-gguf/blob/main/TinyLlama-1.1B-orca-v1.0.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - Intel/orca_dpo_pairs language: - en --- Applied DPO to TinyLlama-1.1B-Chat-v1.0 using orca_dpo_pairs dataset This is only experimental Model created by following instruction from the nice Blog [Fine-tune a Mistral-7b model with Direct Preference Optimization ](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) You can run this model using the following code: ```python # Format prompt message = [ {"role": "system", "content": "You are a helpful assistant chatbot."}, {"role": "user", "content": "What is a Large Language Model?"} ] tokenizer = AutoTokenizer.from_pretrained(new_model) prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) # Create pipeline pipeline = transformers.pipeline( "text-generation", model=new_model, tokenizer=tokenizer ) # Generate text sequences = pipeline( prompt, do_sample=True, temperature=0.7, top_p=0.9, num_return_sequences=1, max_length=200, ) print(sequences[0]['generated_text']) # <|system|> # You are a helpful assistant chatbot.</s> # <|user|> # What is a Large Language Model?</s> # <|assistant|> # A Large Language Model (LLM) is a type of deep learning model that processes large amounts of text or data to improve the accuracy of natural language processing tasks such as sentiment analysis, machine translation, and question answering. LLMs are trained using large datasets, which allow them to generalize better and have better performance compared to traditional machine learning models. They are capable of handling vast amounts of text and can learn complex relationships between words, phrases, and sentences, making them an essential tool for natural language processing. ``` Results on GPT4ALL benchmark: | Tasks | Metric |Value | |Stderr| |-------------|--------|-----:|---|-----:| |arc_challenge|acc |0.3003|± |0.0134| | |acc_norm|0.3276|± |0.0137| |arc_easy |acc |0.6115|± |0.0100| | |acc_norm|0.5354|± |0.0102| |boolq |acc |0.6147|± |0.0085| |hellaswag |acc |0.4633|± |0.0050| | |acc_norm|0.6033|± |0.0049| |openbookqa |acc |0.2480|± |0.0193| | |acc_norm|0.3720|± |0.0216| |piqa |acc |0.7470|± |0.0101| | |acc_norm|0.7470|± |0.0101| |winogrande |acc |0.6054|± |0.0137|
{}
task
[ "QUESTION_ANSWERING", "TRANSLATION" ]
44,296
kibru/distilbert-base-uncased-finetuned-cola
kibru
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-10-11T17:03:15Z
2023-10-11T17:21:42+00:00
9
0
--- base_model: distilbert-base-uncased datasets: - glue license: apache-2.0 metrics: - matthews_correlation tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - type: matthews_correlation value: 0.5402881620449809 name: Matthews Correlation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7569 - Matthews Correlation: 0.5403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5304 | 1.0 | 535 | 0.4711 | 0.4796 | | 0.3578 | 2.0 | 1070 | 0.5249 | 0.5209 | | 0.2374 | 3.0 | 1605 | 0.6196 | 0.5212 | | 0.1808 | 4.0 | 2140 | 0.7569 | 0.5403 | | 0.136 | 5.0 | 2675 | 0.8015 | 0.5328 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7569 - Matthews Correlation: 0.5403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5304 | 1.0 | 535 | 0.4711 | 0.4796 | | 0.3578 | 2.0 | 1070 | 0.5249 | 0.5209 | | 0.2374 | 3.0 | 1605 | 0.6196 | 0.5212 | | 0.1808 | 4.0 | 2140 | 0.7569 | 0.5403 | | 0.136 | 5.0 | 2675 | 0.8015 | 0.5328 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
{"base_model": "distilbert-base-uncased", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "cola", "split": "validation", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5402881620449809, "name": "Matthews Correlation"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,297
leopiano98/LLIMONIIE_llama3-8b
leopiano98
null
[ "safetensors", "llama", "unsloth", "llama3", "NER", "RE", "OIE", "LLIMONIIE", "en", "it", "license:apache-2.0", "region:us" ]
2024-12-12T14:45:48Z
2024-12-12T18:22:14+00:00
7
0
--- base_model: - unsloth/llama-3-8b-instruct-bnb-4bit language: - en - it license: apache-2.0 tags: - unsloth - llama3 - NER - RE - OIE - LLIMONIIE --- # LLIMONIIE: Large Language Instructed Model for Open Named Italian Information Extraction LLIMONIE generalizes IE across diverse domains without requiring input ontologies. - Perform three tasks: - **Open Named Entity Recognition** - **Open Relation Extraction** - **Joint Entity and Relation Extraction** ## 💻 Quick Start ### Setup conda environment Install the unsloth package following the repo [guide](https://github.com/unslothai/unsloth?tab=readme-ov-file#conda-installation) ### Clone the repository ```bash git clone https://github.com/leonardoPiano/LLIMONIE.git ``` ### Run the generation ```python from PromptTemplates.instruct_prompt_templates import NER,RE,JOINT from LLM.Unsloth import UnslothLLM model_path="leopiano98/LLIMONIIE_llama3-8b" llimonie=UnslothLLM(model_path,inference=True) task=NER text="Alessandro Manzoni è considerato uno dei maggiori romanzieri italiani di tutti i tempi per il suo celebre romanzo I promessi sposi" messages = [{"role": "system", "content": task}, {"role": "user", "content": text}] output= llimonie.generate(messages, max_new_tokens=512) #output: Alessandro Manzoni[Writer|Person]; I promessi sposi[Novel|Book]; italiani[Nationality|Ethnicity] ``` - **Developed by:** leopiano98 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
null
Non_BioNLP
# LLIMONIIE: Large Language Instructed Model for Open Named Italian Information Extraction LLIMONIE generalizes IE across diverse domains without requiring input ontologies. - Perform three tasks: - **Open Named Entity Recognition** - **Open Relation Extraction** - **Joint Entity and Relation Extraction** ## 💻 Quick Start ### Setup conda environment Install the unsloth package following the repo [guide](https://github.com/unslothai/unsloth?tab=readme-ov-file#conda-installation) ### Clone the repository ```bash git clone https://github.com/leonardoPiano/LLIMONIE.git ``` ### Run the generation ```python from PromptTemplates.instruct_prompt_templates import NER,RE,JOINT from LLM.Unsloth import UnslothLLM model_path="leopiano98/LLIMONIIE_llama3-8b" llimonie=UnslothLLM(model_path,inference=True) task=NER text="Alessandro Manzoni è considerato uno dei maggiori romanzieri italiani di tutti i tempi per il suo celebre romanzo I promessi sposi" messages = [{"role": "system", "content": task}, {"role": "user", "content": text}] output= llimonie.generate(messages, max_new_tokens=512) #output: Alessandro Manzoni[Writer|Person]; I promessi sposi[Novel|Book]; italiani[Nationality|Ethnicity] ``` - **Developed by:** leopiano98 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"base_model": ["unsloth/llama-3-8b-instruct-bnb-4bit"], "language": ["en", "it"], "license": "apache-2.0", "tags": ["unsloth", "llama3", "NER", "RE", "OIE", "LLIMONIIE"]}
task
[ "NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION" ]
44,299
Helsinki-NLP/opus-mt-ga-en
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ga", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04Z
2023-08-16T11:37:45+00:00
2,243
0
--- language: - ga - en license: apache-2.0 tags: - translation --- ### gle-eng * source group: Irish * target group: English * OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md) * model: transformer-align * source language(s): gle * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.gle.eng | 51.6 | 0.672 | ### System Info: - hf_name: gle-eng - source_languages: gle - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ga', 'en'] - src_constituents: {'gle'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt - src_alpha3: gle - tgt_alpha3: eng - short_pair: ga-en - chrF2_score: 0.672 - bleu: 51.6 - brevity_penalty: 1.0 - ref_len: 11247.0 - src_name: Irish - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ga - tgt_alpha2: en - prefer_old: False - long_pair: gle-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
null
Non_BioNLP
### gle-eng * source group: Irish * target group: English * OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md) * model: transformer-align * source language(s): gle * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.gle.eng | 51.6 | 0.672 | ### System Info: - hf_name: gle-eng - source_languages: gle - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ga', 'en'] - src_constituents: {'gle'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt - src_alpha3: gle - tgt_alpha3: eng - short_pair: ga-en - chrF2_score: 0.672 - bleu: 51.6 - brevity_penalty: 1.0 - ref_len: 11247.0 - src_name: Irish - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ga - tgt_alpha2: en - prefer_old: False - long_pair: gle-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ga", "en"], "license": "apache-2.0", "tags": ["translation"]}
task
[ "TRANSLATION" ]
44,300
BeenaSamuel/t5_small_bbc_news_extractive_summarizer
BeenaSamuel
summarization
[ "transformers", "safetensors", "t5", "text2text-generation", "summarization", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-05-13T15:58:37Z
2024-05-13T16:04:13+00:00
19
0
--- library_name: transformers pipeline_tag: summarization --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
null
Non_BioNLP
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "pipeline_tag": "summarization"}
task
[ "SUMMARIZATION" ]
44,301
senjakala/test-sentiment-model-2000-samples
senjakala
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-05T20:16:35Z
2023-05-07T11:00:53+00:00
11
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: test-sentiment-model-2000-samples results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - type: accuracy value: 0.86 name: Accuracy - type: f1 value: 0.8541666666666666 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-sentiment-model-2000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3003 - Accuracy: 0.86 - F1: 0.8542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-sentiment-model-2000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3003 - Accuracy: 0.86 - F1: 0.8542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test-sentiment-model-2000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.86, "name": "Accuracy"}, {"type": "f1", "value": 0.8541666666666666, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,302
Shahriardev/distobert-finetuned-embedding-faq1-v1-1
Shahriardev
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:19", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-distilroberta-v1", "base_model:finetune:sentence-transformers/all-distilroberta-v1", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2025-03-12T10:26:21Z
2025-03-12T10:33:36+00:00
20
0
--- base_model: sentence-transformers/all-distilroberta-v1 library_name: sentence-transformers metrics: - cosine_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:19 - loss:MultipleNegativesRankingLoss model-index: - name: SentenceTransformer based on sentence-transformers/all-distilroberta-v1 results: - task: type: triplet name: Triplet dataset: name: ai faq validation type: ai-faq-validation metrics: - type: cosine_accuracy value: 1.0 name: Cosine Accuracy - task: type: triplet name: Triplet dataset: name: ai job test type: ai-job-test metrics: - type: cosine_accuracy value: 1.0 name: Cosine Accuracy --- # SentenceTransformer based on sentence-transformers/all-distilroberta-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) <!-- at revision 842eaed40bee4d61673a81c92d5689a8fed7a09f --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Datasets: `ai-faq-validation` and `ai-job-test` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | ai-faq-validation | ai-job-test | |:--------------------|:------------------|:------------| | **cosine_accuracy** | **1.0** | **1.0** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 19 training samples * Columns: <code>question</code>, <code>answer</code>, and <code>answer_neg</code> * Approximate statistics based on the first 19 samples: | | question | answer | answer_neg | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 59.47 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 250.11 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 262.47 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | question | answer | answer_neg | |:-----------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>در آبان پرایم چه خدماتی دریافت میکنم؟</code> | <code>آبان پرایم یک صرافی آنلاین است به این معنی که شما تمامی خدمات صرافی را میتوانید به صورت آنلاین و غیر حضوری دریافت کنید. <br>خدمات ما شامل:<br>- امکان خرید و فروش ارزهای مختلف تومان، تتر، درهم، دلار آمریکا، دلار استرالیا، دلار کانادا، لیر ترکیه<br>- نرخ لحظه ای<br>- واریز و برداشت نقدی<br>- انجام انواع حوالجات <br>- واریز مستقیم به حساب شبا ایران با هر رقم از طریق وبسایت<br>- امکان دریافت گزارش صورت حساب ها<br>- پشتیبانی ۲۴ ساعته</code> | <code> </code> | | <code>توی امارات به کدوم بانک ها واریز انجام میدین؟</code> | <code>واریز به تمامی بانک های امارات انجام میشود.</code> | <code>شما میتوانید با شماره موبایل(با پیش شماره های ایران،امارات، ترکیه، کانادا و استرالیا) یا ایمیل به راحتی حساب کاربری بسازید<br>https://youtu.be/Sl1ehlS6TYY<br><br> برای ثبت‌نام:<br>۱) شماره موبایل یا ایمیل خودتون را وارد کنید<br>۲) در مرحله بعد کد تایید ارسال شده به موبایل یا ایمیل رو وارد کنید<br>۳) رمز عبور خودتون رو تعریف کنید و وارد حساب کاربری بشید</code> | | <code>واریز و برداشت تتر</code> | <code>واریز و برداشت تتر در آبان پرایم به صورت کاملا آنلاین انجام میشود. <br>https://youtu.be/ARzOJ6Pgp7k<br><br>واریز تتر<br>۱) در صفحه اصلی گزینه deposit تتر رو انتخاب کنید<br>۲) شبکه واریز (TRC20 یا BSC) رو انتخاب و آدرس ولت رو کپی کنید.<br>۴) تتر مورد نظرتون رو از ولت مبدا به آدرس ولت آبان پرام انتقال بدید.<br>۵) چند دقیقه بعد از انتقال تراکنش انجام میشه و اکانت آبان پرایمتون مستقیما شارژ میشه.<br><br>برداشت تتر<br>۱) در صفحه اصلی گزینه Withdraw تتر رو انتخاب کنید.<br>۳) یکی از شبکه‌های برداشت (TRC20 یا BSC) رو انتخاب کنید.<br>۴) آدرس کیف پول مقصد رو وارد کنید<br>آدرس مقصد رو دقیق و منطبق با شبکه برداشت انتخابی وارد کنید.<br>۵) مقدار تتری که می‌خواهید برداشت کنید رو وارد کنید.<br>۶) بعد از بررسی و اطمینان از صحت اطلاعات درخواست برداشت خود را ثبت کنید.</code> | <code>واریز به تمامی بانک های امارات انجام میشود.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 1 evaluation samples * Columns: <code>question</code>, <code>answer</code>, and <code>answer_neg</code> * Approximate statistics based on the first 1 samples: | | question | answer | answer_neg | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 52 tokens</li><li>mean: 52.0 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 50.0 tokens</li><li>max: 50 tokens</li></ul> | * Samples: | question | answer | answer_neg | |:-------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------| | <code>رمز عبورم رو فراموش کردم چیکار باید بکنم؟</code> | <code>۱) انتخاب گزینه فراموشی رمز عبور( Forgot Password)<br>۲) وارد کردن شماره موبایل یا ایمیل<br>۳) وارد کردن کد امنیتی دریافتی از طریق پیامک یا ایمیل<br>۴) تعریف رمز جدید<br><br>اگر پیامک رمز یک‌بارمصرف (OTP) دریافت نمی‌کنید، این مراحل رو انجام بده: <br>- شماره موبایل رو درست وارد کن. (با پیش‌شماره صحیح) <br>- چند دقیقه صبر کن و دوباره تلاش کن. ممکنه گاهی اختلال موقتی از سمت سرویس‌دهنده پیامک باشه.<br>- اینترنت و آنتن گوشی رو بررسی کن. <br>- پوشه پیامک‌های تبلیغاتی و مسدودشده رو چک کن. <br>- دوباره درخواست کد بده و مطمئن شو که تعداد دفعات مجاز تموم نشده. <br>- اگر باز هم دریافت نکردی، با پشتیبانی تماس بگیر. <br></code> | <code>برای انجام حواله با پشتیبانی تماس بگیرید.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 200 - `warmup_ratio`: 0.1 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 200 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | ai-faq-validation_cosine_accuracy | ai-job-test_cosine_accuracy | |:-----:|:----:|:-------------:|:---------------:|:---------------------------------:|:---------------------------:| | -1 | -1 | - | - | 0.0 | - | | 50.0 | 100 | 1.0736 | 0.5462 | 1.0 | - | | 100.0 | 200 | 0.0041 | 0.1209 | 1.0 | - | | 150.0 | 300 | 0.0002 | 0.0663 | 1.0 | - | | 200.0 | 400 | 0.0001 | 0.0556 | 1.0 | - | | -1 | -1 | - | - | 1.0 | 1.0 | ### Framework Versions - Python: 3.10.16 - Sentence Transformers: 3.4.1 - Transformers: 4.50.0.dev0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
null
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-distilroberta-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) <!-- at revision 842eaed40bee4d61673a81c92d5689a8fed7a09f --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Triplet * Datasets: `ai-faq-validation` and `ai-job-test` * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator) | Metric | ai-faq-validation | ai-job-test | |:--------------------|:------------------|:------------| | **cosine_accuracy** | **1.0** | **1.0** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 19 training samples * Columns: <code>question</code>, <code>answer</code>, and <code>answer_neg</code> * Approximate statistics based on the first 19 samples: | | question | answer | answer_neg | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 21 tokens</li><li>mean: 59.47 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 250.11 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 262.47 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | question | answer | answer_neg | |:-----------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>در آبان پرایم چه خدماتی دریافت میکنم؟</code> | <code>آبان پرایم یک صرافی آنلاین است به این معنی که شما تمامی خدمات صرافی را میتوانید به صورت آنلاین و غیر حضوری دریافت کنید. <br>خدمات ما شامل:<br>- امکان خرید و فروش ارزهای مختلف تومان، تتر، درهم، دلار آمریکا، دلار استرالیا، دلار کانادا، لیر ترکیه<br>- نرخ لحظه ای<br>- واریز و برداشت نقدی<br>- انجام انواع حوالجات <br>- واریز مستقیم به حساب شبا ایران با هر رقم از طریق وبسایت<br>- امکان دریافت گزارش صورت حساب ها<br>- پشتیبانی ۲۴ ساعته</code> | <code> </code> | | <code>توی امارات به کدوم بانک ها واریز انجام میدین؟</code> | <code>واریز به تمامی بانک های امارات انجام میشود.</code> | <code>شما میتوانید با شماره موبایل(با پیش شماره های ایران،امارات، ترکیه، کانادا و استرالیا) یا ایمیل به راحتی حساب کاربری بسازید<br>https://youtu.be/Sl1ehlS6TYY<br><br> برای ثبت‌نام:<br>۱) شماره موبایل یا ایمیل خودتون را وارد کنید<br>۲) در مرحله بعد کد تایید ارسال شده به موبایل یا ایمیل رو وارد کنید<br>۳) رمز عبور خودتون رو تعریف کنید و وارد حساب کاربری بشید</code> | | <code>واریز و برداشت تتر</code> | <code>واریز و برداشت تتر در آبان پرایم به صورت کاملا آنلاین انجام میشود. <br>https://youtu.be/ARzOJ6Pgp7k<br><br>واریز تتر<br>۱) در صفحه اصلی گزینه deposit تتر رو انتخاب کنید<br>۲) شبکه واریز (TRC20 یا BSC) رو انتخاب و آدرس ولت رو کپی کنید.<br>۴) تتر مورد نظرتون رو از ولت مبدا به آدرس ولت آبان پرام انتقال بدید.<br>۵) چند دقیقه بعد از انتقال تراکنش انجام میشه و اکانت آبان پرایمتون مستقیما شارژ میشه.<br><br>برداشت تتر<br>۱) در صفحه اصلی گزینه Withdraw تتر رو انتخاب کنید.<br>۳) یکی از شبکه‌های برداشت (TRC20 یا BSC) رو انتخاب کنید.<br>۴) آدرس کیف پول مقصد رو وارد کنید<br>آدرس مقصد رو دقیق و منطبق با شبکه برداشت انتخابی وارد کنید.<br>۵) مقدار تتری که می‌خواهید برداشت کنید رو وارد کنید.<br>۶) بعد از بررسی و اطمینان از صحت اطلاعات درخواست برداشت خود را ثبت کنید.</code> | <code>واریز به تمامی بانک های امارات انجام میشود.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### Unnamed Dataset * Size: 1 evaluation samples * Columns: <code>question</code>, <code>answer</code>, and <code>answer_neg</code> * Approximate statistics based on the first 1 samples: | | question | answer | answer_neg | |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 52 tokens</li><li>mean: 52.0 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 50.0 tokens</li><li>max: 50 tokens</li></ul> | * Samples: | question | answer | answer_neg | |:-------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------| | <code>رمز عبورم رو فراموش کردم چیکار باید بکنم؟</code> | <code>۱) انتخاب گزینه فراموشی رمز عبور( Forgot Password)<br>۲) وارد کردن شماره موبایل یا ایمیل<br>۳) وارد کردن کد امنیتی دریافتی از طریق پیامک یا ایمیل<br>۴) تعریف رمز جدید<br><br>اگر پیامک رمز یک‌بارمصرف (OTP) دریافت نمی‌کنید، این مراحل رو انجام بده: <br>- شماره موبایل رو درست وارد کن. (با پیش‌شماره صحیح) <br>- چند دقیقه صبر کن و دوباره تلاش کن. ممکنه گاهی اختلال موقتی از سمت سرویس‌دهنده پیامک باشه.<br>- اینترنت و آنتن گوشی رو بررسی کن. <br>- پوشه پیامک‌های تبلیغاتی و مسدودشده رو چک کن. <br>- دوباره درخواست کد بده و مطمئن شو که تعداد دفعات مجاز تموم نشده. <br>- اگر باز هم دریافت نکردی، با پشتیبانی تماس بگیر. <br></code> | <code>برای انجام حواله با پشتیبانی تماس بگیرید.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `learning_rate`: 2e-05 - `num_train_epochs`: 200 - `warmup_ratio`: 0.1 - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 200 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | ai-faq-validation_cosine_accuracy | ai-job-test_cosine_accuracy | |:-----:|:----:|:-------------:|:---------------:|:---------------------------------:|:---------------------------:| | -1 | -1 | - | - | 0.0 | - | | 50.0 | 100 | 1.0736 | 0.5462 | 1.0 | - | | 100.0 | 200 | 0.0041 | 0.1209 | 1.0 | - | | 150.0 | 300 | 0.0002 | 0.0663 | 1.0 | - | | 200.0 | 400 | 0.0001 | 0.0556 | 1.0 | - | | -1 | -1 | - | - | 1.0 | 1.0 | ### Framework Versions - Python: 3.10.16 - Sentence Transformers: 3.4.1 - Transformers: 4.50.0.dev0 - PyTorch: 2.6.0+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"base_model": "sentence-transformers/all-distilroberta-v1", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:19", "loss:MultipleNegativesRankingLoss"], "model-index": [{"name": "SentenceTransformer based on sentence-transformers/all-distilroberta-v1", "results": [{"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "ai faq validation", "type": "ai-faq-validation"}, "metrics": [{"type": "cosine_accuracy", "value": 1.0, "name": "Cosine Accuracy"}]}, {"task": {"type": "triplet", "name": "Triplet"}, "dataset": {"name": "ai job test", "type": "ai-job-test"}, "metrics": [{"type": "cosine_accuracy", "value": 1.0, "name": "Cosine Accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,303
TransferGraph/riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:riyadhctg/distilbert-base-uncased-finetuned-cola", "base_model:adapter:riyadhctg/distilbert-base-uncased-finetuned-cola", "license:apache-2.0", "model-index", "region:us" ]
2024-02-29T13:10:17Z
2024-02-29T13:10:23+00:00
0
0
--- base_model: riyadhctg/distilbert-base-uncased-finetuned-cola datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: sentiment split: validation args: sentiment metrics: - type: accuracy value: 0.709 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.209 | None | 0 | | 0.6985 | 0.7312 | 0 | | 0.6995 | 0.6561 | 1 | | 0.702 | 0.6327 | 2 | | 0.709 | 0.6149 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment This model is a fine-tuned version of [riyadhctg/distilbert-base-uncased-finetuned-cola](https://huggingface.co/riyadhctg/distilbert-base-uncased-finetuned-cola) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.709 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.209 | None | 0 | | 0.6985 | 0.7312 | 0 | | 0.6995 | 0.6561 | 1 | | 0.702 | 0.6327 | 2 | | 0.709 | 0.6149 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
{"base_model": "riyadhctg/distilbert-base-uncased-finetuned-cola", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "riyadhctg_distilbert-base-uncased-finetuned-cola-finetuned-lora-tweet_eval_sentiment", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "sentiment", "split": "validation", "args": "sentiment"}, "metrics": [{"type": "accuracy", "value": 0.709, "name": "accuracy"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,304
RichardErkhov/NousResearch_-_Nous-Hermes-2-Mixtral-8x7B-SFT-4bits
RichardErkhov
text-generation
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
2024-04-27T23:35:49Z
2024-04-28T00:00:44+00:00
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Nous-Hermes-2-Mixtral-8x7B-SFT - bnb 4bits - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT/ Original model description: --- base_model: mistralai/Mixtral-8x7B-v0.1 tags: - Mixtral - instruct - finetune - chatml - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mixtral-8x7B-SFT results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 --- # Nous Hermes 2 - Mixtral 8x7B - SFT ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/btRmXWMG7PXatTs-u3G85.jpeg) ## Model description Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of our new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks. This is the SFT only version of Mixtral Hermes 2, we have also released an SFT+DPO version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO ## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO! # Table of Contents 1. [Example Outputs](#example-outputs) 2. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Comparison to Mixtral-Instruct 3. [Prompt Format](#prompt-format) 4. [Inference Example Code](#inference-code) 5. [Quantized Models](#quantized-models) ## Example Outputs ### Writing Code for Data Visualization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QJ5RHrOqB5GMP7ZAZ5NTk.png) ### Writing Cyberpunk Psychedelic Poems ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/wuKnMlM2HBGdyUFO7mY_H.png) ### Performing Backtranslation to Create Prompts from Input Text ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QElwK1UI9PQQT6WosXpo1.png) ## Benchmark Results Nous-Hermes 2 on Mixtral 8x7B SFT is the bedrock for major improvements on many of the benchmarks below compared to the base Mixtral model, and is the SFT only version of our first model to beat the flagship Mixtral Finetune by MistralAI (the DPO version). ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5904|± |0.0144| | | |acc_norm|0.6323|± |0.0141| |arc_easy | 0|acc |0.8594|± |0.0071| | | |acc_norm|0.8607|± |0.0071| |boolq | 1|acc |0.8783|± |0.0057| |hellaswag | 0|acc |0.6592|± |0.0047| | | |acc_norm|0.8434|± |0.0036| |openbookqa | 0|acc |0.3400|± |0.0212| | | |acc_norm|0.4660|± |0.0223| |piqa | 0|acc |0.8324|± |0.0087| | | |acc_norm|0.8379|± |0.0086| |winogrande | 0|acc |0.7569|± |0.0121| ``` Average: 75.36 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2441|± |0.0270| | | |acc_norm|0.2598|± |0.0276| |agieval_logiqa_en | 0|acc |0.4025|± |0.0192| | | |acc_norm|0.3978|± |0.0192| |agieval_lsat_ar | 0|acc |0.2391|± |0.0282| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.5353|± |0.0221| | | |acc_norm|0.5098|± |0.0222| |agieval_lsat_rc | 0|acc |0.6617|± |0.0289| | | |acc_norm|0.5948|± |0.0300| |agieval_sat_en | 0|acc |0.7961|± |0.0281| | | |acc_norm|0.7816|± |0.0289| |agieval_sat_en_without_passage| 0|acc |0.4757|± |0.0349| | | |acc_norm|0.4515|± |0.0348| |agieval_sat_math | 0|acc |0.4818|± |0.0338| | | |acc_norm|0.3909|± |0.0330| ``` Average: 44.89 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5789|± |0.0359| |bigbench_date_understanding | 0|multiple_choice_grade|0.7154|± |0.0235| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5388|± |0.0311| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4680|± |0.0264| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3260|± |0.0210| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2443|± |0.0163| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5233|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3700|± |0.0216| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6665|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2505|± |0.0137| |bigbench_snarks | 0|multiple_choice_grade|0.7127|± |0.0337| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6592|± |0.0151| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.6860|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2200|± |0.0117| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1503|± |0.0085| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5233|± |0.0289| ``` Average: 48.69 # Benchmark Comparison Charts ## GPT4All ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/S3_tdH822r9UvkGFDiYam.png) ## AGI-Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/paet9FsASWPWa6KJs3mm-.png) ## BigBench Reasoning Test ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/rHmkUnYLTWwq0cuVzMegL.png) # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MixtralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True) model = MixtralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` # Quantized Models: ## All sizes of GGUF Quantizations are available here: ### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF (Note: If you have issues with these GGUF's try TheBloke's) ## TheBloke has also quantized Hermes Mixtral in various forms: ### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF ### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ ### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ ### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ ### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ ## There is also an MLX version available: ### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit ## Exllama2 quants available here: ### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2 (other sizes available in Qeternity's repos) [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
null
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Nous-Hermes-2-Mixtral-8x7B-SFT - bnb 4bits - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT/ Original model description: --- base_model: mistralai/Mixtral-8x7B-v0.1 tags: - Mixtral - instruct - finetune - chatml - gpt4 - synthetic data - distillation model-index: - name: Nous-Hermes-2-Mixtral-8x7B-SFT results: [] license: apache-2.0 language: - en datasets: - teknium/OpenHermes-2.5 --- # Nous Hermes 2 - Mixtral 8x7B - SFT ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/btRmXWMG7PXatTs-u3G85.jpeg) ## Model description Nous Hermes 2 Mixtral 8x7B SFT is the supervised finetune only version of our new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1). The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks. This is the SFT only version of Mixtral Hermes 2, we have also released an SFT+DPO version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO ## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO! # Table of Contents 1. [Example Outputs](#example-outputs) 2. [Benchmark Results](#benchmark-results) - GPT4All - AGIEval - BigBench - Comparison to Mixtral-Instruct 3. [Prompt Format](#prompt-format) 4. [Inference Example Code](#inference-code) 5. [Quantized Models](#quantized-models) ## Example Outputs ### Writing Code for Data Visualization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QJ5RHrOqB5GMP7ZAZ5NTk.png) ### Writing Cyberpunk Psychedelic Poems ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/wuKnMlM2HBGdyUFO7mY_H.png) ### Performing Backtranslation to Create Prompts from Input Text ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/QElwK1UI9PQQT6WosXpo1.png) ## Benchmark Results Nous-Hermes 2 on Mixtral 8x7B SFT is the bedrock for major improvements on many of the benchmarks below compared to the base Mixtral model, and is the SFT only version of our first model to beat the flagship Mixtral Finetune by MistralAI (the DPO version). ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5904|± |0.0144| | | |acc_norm|0.6323|± |0.0141| |arc_easy | 0|acc |0.8594|± |0.0071| | | |acc_norm|0.8607|± |0.0071| |boolq | 1|acc |0.8783|± |0.0057| |hellaswag | 0|acc |0.6592|± |0.0047| | | |acc_norm|0.8434|± |0.0036| |openbookqa | 0|acc |0.3400|± |0.0212| | | |acc_norm|0.4660|± |0.0223| |piqa | 0|acc |0.8324|± |0.0087| | | |acc_norm|0.8379|± |0.0086| |winogrande | 0|acc |0.7569|± |0.0121| ``` Average: 75.36 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2441|± |0.0270| | | |acc_norm|0.2598|± |0.0276| |agieval_logiqa_en | 0|acc |0.4025|± |0.0192| | | |acc_norm|0.3978|± |0.0192| |agieval_lsat_ar | 0|acc |0.2391|± |0.0282| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.5353|± |0.0221| | | |acc_norm|0.5098|± |0.0222| |agieval_lsat_rc | 0|acc |0.6617|± |0.0289| | | |acc_norm|0.5948|± |0.0300| |agieval_sat_en | 0|acc |0.7961|± |0.0281| | | |acc_norm|0.7816|± |0.0289| |agieval_sat_en_without_passage| 0|acc |0.4757|± |0.0349| | | |acc_norm|0.4515|± |0.0348| |agieval_sat_math | 0|acc |0.4818|± |0.0338| | | |acc_norm|0.3909|± |0.0330| ``` Average: 44.89 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5789|± |0.0359| |bigbench_date_understanding | 0|multiple_choice_grade|0.7154|± |0.0235| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.5388|± |0.0311| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4680|± |0.0264| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3260|± |0.0210| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2443|± |0.0163| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5233|± |0.0289| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.3700|± |0.0216| |bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6665|± |0.0105| |bigbench_ruin_names | 0|multiple_choice_grade|0.6317|± |0.0228| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2505|± |0.0137| |bigbench_snarks | 0|multiple_choice_grade|0.7127|± |0.0337| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6592|± |0.0151| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.6860|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2200|± |0.0117| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1503|± |0.0085| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5233|± |0.0289| ``` Average: 48.69 # Benchmark Comparison Charts ## GPT4All ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/S3_tdH822r9UvkGFDiYam.png) ## AGI-Eval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/paet9FsASWPWa6KJs3mm-.png) ## BigBench Reasoning Test ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/rHmkUnYLTWwq0cuVzMegL.png) # Prompt Format Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: even in 4bit, it will require more than 24GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import LlamaTokenizer, MixtralForCausalLM import bitsandbytes, flash_attn tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO', trust_remote_code=True) model = MixtralForCausalLM.from_pretrained( "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` # Quantized Models: ## All sizes of GGUF Quantizations are available here: ### SFT+DPO Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT Only Version - https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF (Note: If you have issues with these GGUF's try TheBloke's) ## TheBloke has also quantized Hermes Mixtral in various forms: ### SFT+DPO GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ### SFT GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GGUF ### SFT+DPO GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GPTQ ### SFT GPTQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-GPTQ ### SFT+DPO AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ ### SFT AWQ: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-SFT-AWQ ## There is also an MLX version available: ### https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit ## Exllama2 quants available here: ### https://huggingface.co/qeternity/Nous-Hermes-2-Mixtral-8x7B-SFT-4bpw-h6-exl2 (other sizes available in Qeternity's repos) [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
{}
task
[ "TRANSLATION" ]
44,305
edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1
edwsiew
text-classification
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-20T01:48:01Z
2023-08-20T01:48:21+00:00
10
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Non_BioNLP
# edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("edwsiew/setfit-finetuned-tech-sentiment-setfit-32-20-1") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
task
[ "TEXT_CLASSIFICATION" ]
44,306
Gidong/distilbert-base-uncased-finetuned-emotion
Gidong
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-05-30T12:07:12Z
2024-05-30T12:12:51+00:00
4
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - type: accuracy value: 0.924 name: Accuracy - type: f1 value: 0.9239832386459546 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2112 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8117 | 1.0 | 250 | 0.3068 | 0.9085 | 0.9076 | | 0.2413 | 2.0 | 500 | 0.2112 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2112 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8117 | 1.0 | 250 | 0.3068 | 0.9085 | 0.9076 | | 0.2413 | 2.0 | 500 | 0.2112 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.924, "name": "Accuracy"}, {"type": "f1", "value": 0.9239832386459546, "name": "F1"}]}]}]}
task
[ "TEXT_CLASSIFICATION" ]
44,307
tdopierre/ProtAugment-ParaphraseGenerator
tdopierre
text2text-generation
[ "transformers", "pytorch", "bart", "text2text-generation", "Paraphase Generation", "Data Augmentation", "en", "dataset:Quora", "dataset:MSR", "dataset:Google-PAWS", "arxiv:2105.12995", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05Z
2021-07-07T14:15:07+00:00
190
5
--- datasets: - Quora - MSR - Google-PAWS language: en tags: - Paraphase Generation - Data Augmentation --- [![acl](http://img.shields.io/badge/ACL-2021-f31f32)](https://arxiv.org/abs/2105.12995) This model is used to generate paraphrases. It has been trained on a mix of 3 different paraphrase detection datasets: MSR, Quora, Google-PAWS. We use this model in our ACL'21 Paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995) Jointly used with generation constraints, this model allows to generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boosts a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)! If you use this model, please consider citing our paper. ``` @article{Dopierre2021ProtAugmentUD, title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning}, author={Thomas Dopierre and C. Gravier and Wilfried Logerais}, journal={ArXiv}, year={2021}, volume={abs/2105.12995} } ```
null
Non_BioNLP
[![acl](http://img.shields.io/badge/ACL-2021-f31f32)](https://arxiv.org/abs/2105.12995) This model is used to generate paraphrases. It has been trained on a mix of 3 different paraphrase detection datasets: MSR, Quora, Google-PAWS. We use this model in our ACL'21 Paper ["PROTAUGMENT: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning"](https://arxiv.org/abs/2105.12995) Jointly used with generation constraints, this model allows to generate diverse paraphrases. We use those paraphrases as a data augmentation technique to further boosts a classification model's generalization capability. Feel free to play with the [code](https://github.com/tdopierre/ProtAugment)! If you use this model, please consider citing our paper. ``` @article{Dopierre2021ProtAugmentUD, title={ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning}, author={Thomas Dopierre and C. Gravier and Wilfried Logerais}, journal={ArXiv}, year={2021}, volume={abs/2105.12995} } ```
{"datasets": ["Quora", "MSR", "Google-PAWS"], "language": "en", "tags": ["Paraphase Generation", "Data Augmentation"]}
task
[ "PARAPHRASING" ]
44,308
SZTAKI-HLT/mT5-base-HunSum-2
SZTAKI-HLT
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "hu", "dataset:SZTAKI-HLT/HunSum-2-abstractive", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-04-03T12:34:47Z
2025-01-23T13:57:49+00:00
443
0
--- base_model: - google/mt5-base datasets: - SZTAKI-HLT/HunSum-2-abstractive language: - hu license: apache-2.0 metrics: - rouge pipeline_tag: summarization inference: parameters: num_beams: 5 length_penalty: 2 max_length: 128 encoder_no_repeat_ngram_size: 4 no_repeat_ngram_size: 3 --- # Model Card for mT5-base-HunSum-2 The mT5-base-HunSum-2 is a Hungarian abstractive summarization model, which was trained on the [SZTAKI-HLT/HunSum-2-abstractive dataset](https://huggingface.co/datasets/SZTAKI-HLT/HunSum-2-abstractive). The model is based on [google/mt5-base](https://huggingface.co/google/mt5-base). ## Intended uses & limitations - **Model type:** Text Summarization - **Language(s) (NLP):** Hungarian - **Resource(s) for more information:** - [GitHub Repo](https://github.com/botondbarta/HunSum) ## Parameters - **Batch Size:** 12 - **Learning Rate:** 5e-5 - **Weight Decay:** 0.01 - **Warmup Steps:** 3000 - **Epochs:** 10 - **no_repeat_ngram_size:** 3 - **num_beams:** 5 - **early_stopping:** False - **encoder_no_repeat_ngram_size:** 4 ## Results | Metric | Value | | :------------ | :------------------------------------------ | | ROUGE-1 | 40.06 | | ROUGE-2 | 12.67 | | ROUGE-L | 25.93 |
null
Non_BioNLP
# Model Card for mT5-base-HunSum-2 The mT5-base-HunSum-2 is a Hungarian abstractive summarization model, which was trained on the [SZTAKI-HLT/HunSum-2-abstractive dataset](https://huggingface.co/datasets/SZTAKI-HLT/HunSum-2-abstractive). The model is based on [google/mt5-base](https://huggingface.co/google/mt5-base). ## Intended uses & limitations - **Model type:** Text Summarization - **Language(s) (NLP):** Hungarian - **Resource(s) for more information:** - [GitHub Repo](https://github.com/botondbarta/HunSum) ## Parameters - **Batch Size:** 12 - **Learning Rate:** 5e-5 - **Weight Decay:** 0.01 - **Warmup Steps:** 3000 - **Epochs:** 10 - **no_repeat_ngram_size:** 3 - **num_beams:** 5 - **early_stopping:** False - **encoder_no_repeat_ngram_size:** 4 ## Results | Metric | Value | | :------------ | :------------------------------------------ | | ROUGE-1 | 40.06 | | ROUGE-2 | 12.67 | | ROUGE-L | 25.93 |
{"base_model": ["google/mt5-base"], "datasets": ["SZTAKI-HLT/HunSum-2-abstractive"], "language": ["hu"], "license": "apache-2.0", "metrics": ["rouge"], "pipeline_tag": "summarization", "inference": {"parameters": {"num_beams": 5, "length_penalty": 2, "max_length": 128, "encoder_no_repeat_ngram_size": 4, "no_repeat_ngram_size": 3}}}
task
[ "SUMMARIZATION" ]
44,309
QuantFactory/Arcee-Agent-GGUF
QuantFactory
text-generation
[ "gguf", "text-generation", "en", "de", "ar", "arxiv:2406.06623", "base_model:arcee-ai/Arcee-Agent", "base_model:quantized:arcee-ai/Arcee-Agent", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
2024-07-20T08:08:51Z
2024-07-20T12:23:52+00:00
75
1
--- base_model: arcee-ai/Arcee-Agent language: - en - de - ar license: apache-2.0 pipeline_tag: text-generation --- ![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&) # QuantFactory/Arcee-Agent-GGUF This is quantized version of [arcee-ai/Arcee-Agent](https://huggingface.co/arcee-ai/Arcee-Agent) created using llama.cpp # Original Model Card <div align="center"> <img src="https://i.ibb.co/CBHmTDn/136719a5-6d8a-4654-a618-46eabc788953.jpg" alt="Arcee-Agent" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;"> </div> Arcee Agent is a cutting-edge 7B parameter language model specifically designed for function calling and tool use. Initialized from Qwen2-7B, it rivals the performance of much larger models while maintaining efficiency and speed. This model is particularly suited for developers, researchers, and businesses looking to implement sophisticated AI-driven solutions without the computational overhead of larger language models. Compute for training Arcee-Agent was provided by [CrusoeAI](https://huggingface.co/crusoeai). Arcee-Agent was trained using [Spectrum](https://arxiv.org/abs/2406.06623). GGUFs are available from [CrusoeAI](https://huggingface.co/crusoeai/Arcee-Agent-GGUF). ### Key Features 1. **Advanced Function Calling:** Arcee Agent excels at interpreting, executing, and chaining function calls. This capability allows it to interact seamlessly with a wide range of external tools, APIs, and services. 2. **Multiple Format Support:** The model is compatible with various tool use formats, including: - Glaive FC v2 - Salesforce - Agent-FLAN Arcee-Agent performs best when using the VLLM OpenAI FC format, but it also excels with prompt-based solutions. Agent-Spark can accommodate any specific use case or infrastructure needs you may have. 4. **Dual-Mode Functionality:** - Tool Router: Arcee Agent can serve as intelligent middleware, analyzing requests and efficiently routing them to appropriate tools or larger language models for processing. - Standalone Chat Agent: Despite its focus on function calling, Arcee Agent is capable of engaging in human-like conversations and completing a wide range of tasks independently. 5. **Unparalleled Speed and Efficiency:** With its 7B parameter architecture, Arcee Agent delivers rapid response times and efficient processing, making it suitable for real-time applications and resource-constrained environments. 6. **Competitive Performance:** In function calling and tool use tasks, Arcee Agent competes with the capabilities of models many times its size, offering a cost-effective solution for businesses and developers. ## Detailed Function Calling and Tool Use Capabilities Arcee Agent's function calling and tool use capabilities open up a world of possibilities for AI-driven applications. Here's a deeper look at what you can achieve: 1. **API Integration:** Seamlessly interact with external APIs, allowing your applications to: - Fetch real-time data (e.g., stock prices, weather information) - Post updates to social media platforms - Send emails or SMS messages - Interact with IoT devices 2. **Database Operations:** Execute complex database queries and operations through natural language commands, enabling: - Data retrieval and analysis - Record updates and insertions - Schema modifications 3. **Code Generation and Execution:** Generate and run code snippets in various programming languages, facilitating: - Quick prototyping - Automated code review - Dynamic script generation for data processing 4. **Multi-step Task Execution:** Chain multiple functions together to complete complex tasks, such as: - Booking travel arrangements (flights, hotels, car rentals) - Generating comprehensive reports from multiple data sources - Automating multi-stage business processes ## Business Use Cases Arcee Agent's unique capabilities make it an invaluable asset for businesses across various industries. Here are some specific use cases: 1. **Customer Support Automation:** - Implement AI-driven chatbots that handle complex customer inquiries and support tickets. - Automate routine support tasks such as password resets, order tracking, and FAQ responses. - Integrate with CRM systems to provide personalized customer interactions based on user history. 2. **Sales and Marketing Automation:** - Automate lead qualification and follow-up using personalized outreach based on user behavior. - Generate dynamic marketing content tailored to specific audiences and platforms. - Analyze customer feedback from various sources to inform marketing strategies. 3. **Operational Efficiency:** - Automate administrative tasks such as scheduling, data entry, and report generation. - Implement intelligent assistants for real-time data retrieval and analysis from internal databases. - Streamline project management with automated task assignment and progress tracking. 4. **Financial Services Automation:** - Automate financial reporting and compliance checks. - Implement AI-driven financial advisors for personalized investment recommendations. - Integrate with financial APIs to provide real-time market analysis and alerts. 5. **Healthcare Solutions:** - Automate patient record management and data retrieval for healthcare providers. 6. **E-commerce Enhancements:** - Create intelligent product recommendation systems based on user preferences and behavior. - Automate inventory management and supply chain logistics. - Implement AI-driven pricing strategies and promotional campaigns. 7. **Human Resources Automation:** - Automate candidate screening and ranking based on resume analysis and job requirements. - Implement virtual onboarding assistants to guide new employees through the onboarding process. - Analyze employee feedback and sentiment to inform HR policies and practices. 8. **Legal Services Automation:** - Automate contract analysis and extraction of key legal terms and conditions. - Implement AI-driven tools for legal research and case law summarization. - Develop virtual legal assistants to provide preliminary legal advice and document drafting. 9. **Educational Tools:** - Create personalized learning plans and content recommendations for students. - Automate grading and feedback for assignments and assessments. 10. **Manufacturing and Supply Chain Automation:** - Optimize production schedules and inventory levels using real-time data analysis. - Implement predictive maintenance for machinery and equipment. - Automate quality control processes through data-driven insights. ## Benchmarking <div align="center"> <img src="https://i.ibb.co/xmgswP8/Screenshot-2024-07-02-at-1-49-04-PM.png" alt="Arcee-Agent-Evals" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;"> </div> ## Intended Uses Arcee Agent is designed for a wide range of applications where efficient function calling and tool use are crucial. Some potential use cases include: - Developing sophisticated chatbots and virtual assistants with advanced tool integration - Creating efficient middleware for routing and preprocessing requests to larger language models - Implementing AI-driven process automation in resource-constrained environments - Prototyping and testing complex tool-use scenarios without the need for more computationally expensive models - Building interactive documentation systems that can execute code examples in real-time - Developing intelligent agents for IoT device management and home automation - Creating AI-powered research assistants for various scientific disciplines ## Limitations While Arcee Agent excels in its specialized areas, users should be aware of its limitations: - The model's general knowledge and capabilities outside of function calling and tool use may be more limited compared to larger, general-purpose language models. - Performance in tasks unrelated to its core functionalities may not match that of models with more diverse training. - As with all language models, outputs should be validated and used responsibly, especially in critical applications. - The model's knowledge cutoff date may limit its awareness of recent events or technological advancements. ## Usage The model was trained to respect many different formats - but the evals were done with this specific tool template: ```python In this environment, you have access to a set of tools you can use to answer the user's question. You may call them like this: <function_calls> <invoke> <tool_name>$TOOL_NAME</tool_name> <parameters> <$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME> ... </parameters> </invoke> </function_calls> Here are the tools available: <tools> ```
null
Non_BioNLP
![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&) # QuantFactory/Arcee-Agent-GGUF This is quantized version of [arcee-ai/Arcee-Agent](https://huggingface.co/arcee-ai/Arcee-Agent) created using llama.cpp # Original Model Card <div align="center"> <img src="https://i.ibb.co/CBHmTDn/136719a5-6d8a-4654-a618-46eabc788953.jpg" alt="Arcee-Agent" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;"> </div> Arcee Agent is a cutting-edge 7B parameter language model specifically designed for function calling and tool use. Initialized from Qwen2-7B, it rivals the performance of much larger models while maintaining efficiency and speed. This model is particularly suited for developers, researchers, and businesses looking to implement sophisticated AI-driven solutions without the computational overhead of larger language models. Compute for training Arcee-Agent was provided by [CrusoeAI](https://huggingface.co/crusoeai). Arcee-Agent was trained using [Spectrum](https://arxiv.org/abs/2406.06623). GGUFs are available from [CrusoeAI](https://huggingface.co/crusoeai/Arcee-Agent-GGUF). ### Key Features 1. **Advanced Function Calling:** Arcee Agent excels at interpreting, executing, and chaining function calls. This capability allows it to interact seamlessly with a wide range of external tools, APIs, and services. 2. **Multiple Format Support:** The model is compatible with various tool use formats, including: - Glaive FC v2 - Salesforce - Agent-FLAN Arcee-Agent performs best when using the VLLM OpenAI FC format, but it also excels with prompt-based solutions. Agent-Spark can accommodate any specific use case or infrastructure needs you may have. 4. **Dual-Mode Functionality:** - Tool Router: Arcee Agent can serve as intelligent middleware, analyzing requests and efficiently routing them to appropriate tools or larger language models for processing. - Standalone Chat Agent: Despite its focus on function calling, Arcee Agent is capable of engaging in human-like conversations and completing a wide range of tasks independently. 5. **Unparalleled Speed and Efficiency:** With its 7B parameter architecture, Arcee Agent delivers rapid response times and efficient processing, making it suitable for real-time applications and resource-constrained environments. 6. **Competitive Performance:** In function calling and tool use tasks, Arcee Agent competes with the capabilities of models many times its size, offering a cost-effective solution for businesses and developers. ## Detailed Function Calling and Tool Use Capabilities Arcee Agent's function calling and tool use capabilities open up a world of possibilities for AI-driven applications. Here's a deeper look at what you can achieve: 1. **API Integration:** Seamlessly interact with external APIs, allowing your applications to: - Fetch real-time data (e.g., stock prices, weather information) - Post updates to social media platforms - Send emails or SMS messages - Interact with IoT devices 2. **Database Operations:** Execute complex database queries and operations through natural language commands, enabling: - Data retrieval and analysis - Record updates and insertions - Schema modifications 3. **Code Generation and Execution:** Generate and run code snippets in various programming languages, facilitating: - Quick prototyping - Automated code review - Dynamic script generation for data processing 4. **Multi-step Task Execution:** Chain multiple functions together to complete complex tasks, such as: - Booking travel arrangements (flights, hotels, car rentals) - Generating comprehensive reports from multiple data sources - Automating multi-stage business processes ## Business Use Cases Arcee Agent's unique capabilities make it an invaluable asset for businesses across various industries. Here are some specific use cases: 1. **Customer Support Automation:** - Implement AI-driven chatbots that handle complex customer inquiries and support tickets. - Automate routine support tasks such as password resets, order tracking, and FAQ responses. - Integrate with CRM systems to provide personalized customer interactions based on user history. 2. **Sales and Marketing Automation:** - Automate lead qualification and follow-up using personalized outreach based on user behavior. - Generate dynamic marketing content tailored to specific audiences and platforms. - Analyze customer feedback from various sources to inform marketing strategies. 3. **Operational Efficiency:** - Automate administrative tasks such as scheduling, data entry, and report generation. - Implement intelligent assistants for real-time data retrieval and analysis from internal databases. - Streamline project management with automated task assignment and progress tracking. 4. **Financial Services Automation:** - Automate financial reporting and compliance checks. - Implement AI-driven financial advisors for personalized investment recommendations. - Integrate with financial APIs to provide real-time market analysis and alerts. 5. **Healthcare Solutions:** - Automate patient record management and data retrieval for healthcare providers. 6. **E-commerce Enhancements:** - Create intelligent product recommendation systems based on user preferences and behavior. - Automate inventory management and supply chain logistics. - Implement AI-driven pricing strategies and promotional campaigns. 7. **Human Resources Automation:** - Automate candidate screening and ranking based on resume analysis and job requirements. - Implement virtual onboarding assistants to guide new employees through the onboarding process. - Analyze employee feedback and sentiment to inform HR policies and practices. 8. **Legal Services Automation:** - Automate contract analysis and extraction of key legal terms and conditions. - Implement AI-driven tools for legal research and case law summarization. - Develop virtual legal assistants to provide preliminary legal advice and document drafting. 9. **Educational Tools:** - Create personalized learning plans and content recommendations for students. - Automate grading and feedback for assignments and assessments. 10. **Manufacturing and Supply Chain Automation:** - Optimize production schedules and inventory levels using real-time data analysis. - Implement predictive maintenance for machinery and equipment. - Automate quality control processes through data-driven insights. ## Benchmarking <div align="center"> <img src="https://i.ibb.co/xmgswP8/Screenshot-2024-07-02-at-1-49-04-PM.png" alt="Arcee-Agent-Evals" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;"> </div> ## Intended Uses Arcee Agent is designed for a wide range of applications where efficient function calling and tool use are crucial. Some potential use cases include: - Developing sophisticated chatbots and virtual assistants with advanced tool integration - Creating efficient middleware for routing and preprocessing requests to larger language models - Implementing AI-driven process automation in resource-constrained environments - Prototyping and testing complex tool-use scenarios without the need for more computationally expensive models - Building interactive documentation systems that can execute code examples in real-time - Developing intelligent agents for IoT device management and home automation - Creating AI-powered research assistants for various scientific disciplines ## Limitations While Arcee Agent excels in its specialized areas, users should be aware of its limitations: - The model's general knowledge and capabilities outside of function calling and tool use may be more limited compared to larger, general-purpose language models. - Performance in tasks unrelated to its core functionalities may not match that of models with more diverse training. - As with all language models, outputs should be validated and used responsibly, especially in critical applications. - The model's knowledge cutoff date may limit its awareness of recent events or technological advancements. ## Usage The model was trained to respect many different formats - but the evals were done with this specific tool template: ```python In this environment, you have access to a set of tools you can use to answer the user's question. You may call them like this: <function_calls> <invoke> <tool_name>$TOOL_NAME</tool_name> <parameters> <$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME> ... </parameters> </invoke> </function_calls> Here are the tools available: <tools> ```
{"base_model": "arcee-ai/Arcee-Agent", "language": ["en", "de", "ar"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "SUMMARIZATION" ]
44,310
Anjaan-Khadka/summarization_nepali
Anjaan-Khadka
summarization
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "ne", "dataset:csebuetnlp/xlsum", "doi:10.57967/hf/0399", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-15T12:04:40Z
2023-02-23T08:34:44+00:00
10
1
--- datasets: - csebuetnlp/xlsum language: - ne tags: - summarization - mT5 widget: - text: तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ । model-index: - name: Anjaan-Khadka/summarization_nepali results: - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - type: rouge value: 36.5002 name: ROUGE-1 verified: false --- # adaptation of mT5-multilingual-XLSum for Nepali Lnaguage This repository contains adapted version of mT5-multilinguag-XLSum for Single Language (Nepali). View original [mT5-multilinguag-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM article_text = " तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।" model_name = "Anjaan-Khadka/summarization_nepali" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( (article_text), return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ```
null
Non_BioNLP
# adaptation of mT5-multilingual-XLSum for Nepali Lnaguage This repository contains adapted version of mT5-multilinguag-XLSum for Single Language (Nepali). View original [mT5-multilinguag-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM article_text = " तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।" model_name = "Anjaan-Khadka/summarization_nepali" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( (article_text), return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ```
{"datasets": ["csebuetnlp/xlsum"], "language": ["ne"], "tags": ["summarization", "mT5"], "widget": [{"text": "तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी वारि र पारिको ४ सय ६० वर्ग किलोमिटर क्षेत्रलाई समेट्नेछ ।"}], "model-index": [{"name": "Anjaan-Khadka/summarization_nepali", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 36.5002, "name": "ROUGE-1", "verified": false}]}]}]}
task
[ "SUMMARIZATION" ]
44,311
bigstorm/dolphin-2.9.4-llama3.1-8b-6.0bpw-8hb-exl2
bigstorm
null
[ "safetensors", "llama", "generated_from_trainer", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:mlabonne/FineTome-100k", "dataset:arcee/agent_data", "dataset:PawanKrd/math-gpt-4o-200k", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:meta-llama/Llama-3.1-8B", "base_model:quantized:meta-llama/Llama-3.1-8B", "license:llama3.1", "6-bit", "exl2", "region:us" ]
2024-09-14T18:36:31Z
2024-09-14T18:40:26+00:00
18
0
--- base_model: meta-llama/Meta-Llama-3.1-8B datasets: - cognitivecomputations/Dolphin-2.9 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - mlabonne/FineTome-100k - arcee/agent_data - PawanKrd/math-gpt-4o-200k - cognitivecomputations/SystemChat-2.0 license: llama3.1 tags: - generated_from_trainer --- # BigStorm - ExLLamaV2 (Exl2) Quantization - 6.0 bpw target - 8 head bits Enjoy! Raise an issue if you'd like other BPW levels. **Base Model Card Follows:** --- # Dolphin 2.9.4 Llama 3.1 8b 🐬 Curated and trained by Eric Hartford and Cognitive Computations [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/h3K4XGj2RH) Discord: https://discord.gg/h3K4XGj2RH <img src="https://hf.fast360.xyz/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> Our appreciation for the sponsors of Dolphin 2.9.4: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40S node This model is based on Meta Llama 3.1 8b, and is governed by the Llama 3.1 license. The base model has 128K context, and our finetuning used 8192 sequence length. Dolphin 2.9.4 uses ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.4 has a variety of instruction following, conversational, and coding skills. It also has agentic abilities and supports function calling. It is especially trained to obey the system prompt, and follow instructions in many languages. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. <details><summary>Evals</summary> ``` hf (pretrained=/workspace/axolotl/dolphin-2.9.4-llama3.1-8b-hf,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (4) | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_bbh_boolean_expressions | 0|none | 3|acc_norm |↑ |0.8000|± |0.0253| | - leaderboard_bbh_causal_judgement | 0|none | 3|acc_norm |↑ |0.5615|± |0.0364| | - leaderboard_bbh_date_understanding | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_disambiguation_qa | 0|none | 3|acc_norm |↑ |0.6640|± |0.0299| | - leaderboard_bbh_formal_fallacies | 0|none | 3|acc_norm |↑ |0.5600|± |0.0315| | - leaderboard_bbh_geometric_shapes | 0|none | 3|acc_norm |↑ |0.3640|± |0.0305| | - leaderboard_bbh_hyperbaton | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_logical_deduction_five_objects | 0|none | 3|acc_norm |↑ |0.4600|± |0.0316| | - leaderboard_bbh_logical_deduction_seven_objects | 0|none | 3|acc_norm |↑ |0.4360|± |0.0314| | - leaderboard_bbh_logical_deduction_three_objects | 0|none | 3|acc_norm |↑ |0.6160|± |0.0308| | - leaderboard_bbh_movie_recommendation | 0|none | 3|acc_norm |↑ |0.7880|± |0.0259| | - leaderboard_bbh_navigate | 0|none | 3|acc_norm |↑ |0.5200|± |0.0317| | - leaderboard_bbh_object_counting | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_penguins_in_a_table | 0|none | 3|acc_norm |↑ |0.5205|± |0.0415| | - leaderboard_bbh_reasoning_about_colored_objects | 0|none | 3|acc_norm |↑ |0.5120|± |0.0317| | - leaderboard_bbh_ruin_names | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_salient_translation_error_detection | 0|none | 3|acc_norm |↑ |0.4320|± |0.0314| | - leaderboard_bbh_snarks | 0|none | 3|acc_norm |↑ |0.5843|± |0.0370| | - leaderboard_bbh_sports_understanding | 0|none | 3|acc_norm |↑ |0.7040|± |0.0289| | - leaderboard_bbh_temporal_sequences | 0|none | 3|acc_norm |↑ |0.1440|± |0.0222| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 0|none | 3|acc_norm |↑ |0.1560|± |0.0230| | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 0|none | 3|acc_norm |↑ |0.1320|± |0.0215| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 0|none | 3|acc_norm |↑ |0.2840|± |0.0286| | - leaderboard_bbh_web_of_lies | 0|none | 3|acc_norm |↑ |0.4840|± |0.0317| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2980|± |0.0326| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2839|± |0.0193| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2946|± |0.0216| | - leaderboard_ifeval | 2|none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.1596|± |0.0209| | - leaderboard_math_counting_and_prob_hard | 1|none | 4|exact_match |↑ |0.0488|± |0.0195| | - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0530|± |0.0196| | - leaderboard_math_hard |N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0143|± |0.0071| | - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0455|± |0.0168| | - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.2591|± |0.0316| | - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0519|± |0.0192| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.2926|± |0.0041| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5280|± |0.0316| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.3594|± |0.0300| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.2720|± |0.0282| | Groups |Version|Filter|n-shot| Metric | |Value | |Stderr| |------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_math_hard|N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| ``` </details> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false # load_in_4bit: true strict: false datasets: - path: /workspace/datasets/dolphin-2.9.4/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml chat_template: chatml # adapter: qlora # lora_r: 128 # lora_alpha: 16 # lora_modules_to_save: [embed_tokens, lm_head] # lora_dropout: 0.05 # lora_target_linear: true unfrozen_parameters: - input_layernorm - model.norm - post_attention_layernorm - self_attn.rotary_emb - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.1.mlp.down_proj - model.layers.0.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.2.mlp.down_proj - model.layers.21.mlp.down_proj - model.layers.22.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.5.mlp.down_proj - model.layers.4.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.23.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.3.mlp.down_proj - model.layers.17.mlp.down_proj - model.layers.6.mlp.down_proj - model.layers.31.mlp.down_proj # mlp.up_proj layers - model.layers.4.mlp.up_proj - model.layers.3.mlp.up_proj - model.layers.0.mlp.up_proj - model.layers.5.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.6.mlp.up_proj - model.layers.2.mlp.up_proj - model.layers.1.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.12.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.15.mlp.up_proj - model.layers.17.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.19.mlp.up_proj # self_attn.k_proj layers - model.layers.29.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.14.self_attn.k_proj # self_attn.o_proj layers - model.layers.14.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.16.self_attn.o_proj # self_attn.q_proj layers - model.layers.8.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.26.self_attn.q_proj # self_attn.v_proj layers - model.layers.26.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.3.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.21.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.20.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.1.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.14.self_attn.v_proj # mlp.gate_proj layers - model.layers.1.mlp.gate_proj - model.layers.2.mlp.gate_proj - model.layers.3.mlp.gate_proj - model.layers.4.mlp.gate_proj - model.layers.0.mlp.gate_proj - model.layers.25.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.5.mlp.gate_proj - model.layers.24.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.21.mlp.gate_proj - model.layers.22.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.20.mlp.gate_proj dataset_prepared_path: /workspace/axolotl/dolph-2.9.4-nemo-prepared val_set_size: 0.01 output_dir: /workspace/axolotl/dolphin-2.9.4-llama3.1-8b sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin-2.9.4-llama3.1-8b wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 16 micro_batch_size: 2 num_epochs: 3 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 5e-6 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 # evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 save_total_limit: 2 save_steps: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 special_tokens: eos_token: "<|im_end|>" bos_token: "<|begin_of_text|>" pad_token: "<|finetune_right_pad_id|>" tokens: - "<|im_start|>" # fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: true # fsdp_offload_params: true # fsdp_use_orig_params: false # fsdp_cpu_ram_efficient_loading: true # fsdp_transformer_layer_cls_to_wrap: MixtralSparseMoeBlock # fsdp_state_dict_type: FULL_STATE_DICT # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # fsdp_backward_prefetch: BACKWARD_PRE ``` </details><br> # workspace/axolotl/dolphin-2.9.4-llama3.1-8b This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5837 | 1.0180 | 1161 | 0.5814 | | 0.5525 | 2.0179 | 2322 | 0.5671 | | 0.5514 | 2.9624 | 3420 | 0.5655 | ### Framework versions - Transformers 4.44.0.dev0 - Pytorch 2.4.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
null
Non_BioNLP
# BigStorm - ExLLamaV2 (Exl2) Quantization - 6.0 bpw target - 8 head bits Enjoy! Raise an issue if you'd like other BPW levels. **Base Model Card Follows:** --- # Dolphin 2.9.4 Llama 3.1 8b 🐬 Curated and trained by Eric Hartford and Cognitive Computations [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/h3K4XGj2RH) Discord: https://discord.gg/h3K4XGj2RH <img src="https://hf.fast360.xyz/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> Our appreciation for the sponsors of Dolphin 2.9.4: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xL40S node This model is based on Meta Llama 3.1 8b, and is governed by the Llama 3.1 license. The base model has 128K context, and our finetuning used 8192 sequence length. Dolphin 2.9.4 uses ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.4 has a variety of instruction following, conversational, and coding skills. It also has agentic abilities and supports function calling. It is especially trained to obey the system prompt, and follow instructions in many languages. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. <details><summary>Evals</summary> ``` hf (pretrained=/workspace/axolotl/dolphin-2.9.4-llama3.1-8b-hf,dtype=bfloat16), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (4) | Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr| |-----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_bbh_boolean_expressions | 0|none | 3|acc_norm |↑ |0.8000|± |0.0253| | - leaderboard_bbh_causal_judgement | 0|none | 3|acc_norm |↑ |0.5615|± |0.0364| | - leaderboard_bbh_date_understanding | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_disambiguation_qa | 0|none | 3|acc_norm |↑ |0.6640|± |0.0299| | - leaderboard_bbh_formal_fallacies | 0|none | 3|acc_norm |↑ |0.5600|± |0.0315| | - leaderboard_bbh_geometric_shapes | 0|none | 3|acc_norm |↑ |0.3640|± |0.0305| | - leaderboard_bbh_hyperbaton | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_logical_deduction_five_objects | 0|none | 3|acc_norm |↑ |0.4600|± |0.0316| | - leaderboard_bbh_logical_deduction_seven_objects | 0|none | 3|acc_norm |↑ |0.4360|± |0.0314| | - leaderboard_bbh_logical_deduction_three_objects | 0|none | 3|acc_norm |↑ |0.6160|± |0.0308| | - leaderboard_bbh_movie_recommendation | 0|none | 3|acc_norm |↑ |0.7880|± |0.0259| | - leaderboard_bbh_navigate | 0|none | 3|acc_norm |↑ |0.5200|± |0.0317| | - leaderboard_bbh_object_counting | 0|none | 3|acc_norm |↑ |0.4520|± |0.0315| | - leaderboard_bbh_penguins_in_a_table | 0|none | 3|acc_norm |↑ |0.5205|± |0.0415| | - leaderboard_bbh_reasoning_about_colored_objects | 0|none | 3|acc_norm |↑ |0.5120|± |0.0317| | - leaderboard_bbh_ruin_names | 0|none | 3|acc_norm |↑ |0.6320|± |0.0306| | - leaderboard_bbh_salient_translation_error_detection | 0|none | 3|acc_norm |↑ |0.4320|± |0.0314| | - leaderboard_bbh_snarks | 0|none | 3|acc_norm |↑ |0.5843|± |0.0370| | - leaderboard_bbh_sports_understanding | 0|none | 3|acc_norm |↑ |0.7040|± |0.0289| | - leaderboard_bbh_temporal_sequences | 0|none | 3|acc_norm |↑ |0.1440|± |0.0222| | - leaderboard_bbh_tracking_shuffled_objects_five_objects | 0|none | 3|acc_norm |↑ |0.1560|± |0.0230| | - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 0|none | 3|acc_norm |↑ |0.1320|± |0.0215| | - leaderboard_bbh_tracking_shuffled_objects_three_objects| 0|none | 3|acc_norm |↑ |0.2840|± |0.0286| | - leaderboard_bbh_web_of_lies | 0|none | 3|acc_norm |↑ |0.4840|± |0.0317| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.2980|± |0.0326| | - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.2839|± |0.0193| | - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.2946|± |0.0216| | - leaderboard_ifeval | 2|none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_math_algebra_hard | 1|none | 4|exact_match |↑ |0.1596|± |0.0209| | - leaderboard_math_counting_and_prob_hard | 1|none | 4|exact_match |↑ |0.0488|± |0.0195| | - leaderboard_math_geometry_hard | 1|none | 4|exact_match |↑ |0.0530|± |0.0196| | - leaderboard_math_hard |N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_math_intermediate_algebra_hard | 1|none | 4|exact_match |↑ |0.0143|± |0.0071| | - leaderboard_math_num_theory_hard | 1|none | 4|exact_match |↑ |0.0455|± |0.0168| | - leaderboard_math_prealgebra_hard | 1|none | 4|exact_match |↑ |0.2591|± |0.0316| | - leaderboard_math_precalculus_hard | 1|none | 4|exact_match |↑ |0.0519|± |0.0192| | - leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.2926|± |0.0041| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| | - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5280|± |0.0316| | - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.3594|± |0.0300| | - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.2720|± |0.0282| | Groups |Version|Filter|n-shot| Metric | |Value | |Stderr| |------------------------|-------|------|-----:|-----------------------|---|-----:|---|------| |leaderboard |N/A |none | 0|acc |↑ |0.2926|± |0.0041| | | |none | 0|acc_norm |↑ |0.4513|± |0.0053| | | |none | 0|exact_match |↑ |0.0982|± |0.0079| | | |none | 0|inst_level_loose_acc |↑ |0.3825|± |N/A | | | |none | 0|inst_level_strict_acc |↑ |0.3597|± |N/A | | | |none | 0|prompt_level_loose_acc |↑ |0.2421|± |0.0184| | | |none | 0|prompt_level_strict_acc|↑ |0.2181|± |0.0178| | - leaderboard_bbh |N/A |none | 3|acc_norm |↑ |0.4931|± |0.0061| | - leaderboard_gpqa |N/A |none | 0|acc_norm |↑ |0.2903|± |0.0132| | - leaderboard_math_hard|N/A |none | 4|exact_match |↑ |0.0982|± |0.0079| | - leaderboard_musr |N/A |none | 0|acc_norm |↑ |0.3862|± |0.0173| ``` </details> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: meta-llama/Meta-Llama-3.1-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false # load_in_4bit: true strict: false datasets: - path: /workspace/datasets/dolphin-2.9.4/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml chat_template: chatml # adapter: qlora # lora_r: 128 # lora_alpha: 16 # lora_modules_to_save: [embed_tokens, lm_head] # lora_dropout: 0.05 # lora_target_linear: true unfrozen_parameters: - input_layernorm - model.norm - post_attention_layernorm - self_attn.rotary_emb - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.1.mlp.down_proj - model.layers.0.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.2.mlp.down_proj - model.layers.21.mlp.down_proj - model.layers.22.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.5.mlp.down_proj - model.layers.4.mlp.down_proj - model.layers.20.mlp.down_proj - model.layers.23.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.3.mlp.down_proj - model.layers.17.mlp.down_proj - model.layers.6.mlp.down_proj - model.layers.31.mlp.down_proj # mlp.up_proj layers - model.layers.4.mlp.up_proj - model.layers.3.mlp.up_proj - model.layers.0.mlp.up_proj - model.layers.5.mlp.up_proj - model.layers.7.mlp.up_proj - model.layers.6.mlp.up_proj - model.layers.2.mlp.up_proj - model.layers.1.mlp.up_proj - model.layers.8.mlp.up_proj - model.layers.12.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.9.mlp.up_proj - model.layers.15.mlp.up_proj - model.layers.17.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.19.mlp.up_proj # self_attn.k_proj layers - model.layers.29.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.19.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.20.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.14.self_attn.k_proj # self_attn.o_proj layers - model.layers.14.self_attn.o_proj - model.layers.7.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.9.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.16.self_attn.o_proj # self_attn.q_proj layers - model.layers.8.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.1.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.26.self_attn.q_proj # self_attn.v_proj layers - model.layers.26.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.3.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.21.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.20.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.4.self_attn.v_proj - model.layers.1.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.14.self_attn.v_proj # mlp.gate_proj layers - model.layers.1.mlp.gate_proj - model.layers.2.mlp.gate_proj - model.layers.3.mlp.gate_proj - model.layers.4.mlp.gate_proj - model.layers.0.mlp.gate_proj - model.layers.25.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.5.mlp.gate_proj - model.layers.24.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.21.mlp.gate_proj - model.layers.22.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.20.mlp.gate_proj dataset_prepared_path: /workspace/axolotl/dolph-2.9.4-nemo-prepared val_set_size: 0.01 output_dir: /workspace/axolotl/dolphin-2.9.4-llama3.1-8b sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin-2.9.4-llama3.1-8b wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 16 micro_batch_size: 2 num_epochs: 3 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 5e-6 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 # evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 save_total_limit: 2 save_steps: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 special_tokens: eos_token: "<|im_end|>" bos_token: "<|begin_of_text|>" pad_token: "<|finetune_right_pad_id|>" tokens: - "<|im_start|>" # fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: true # fsdp_offload_params: true # fsdp_use_orig_params: false # fsdp_cpu_ram_efficient_loading: true # fsdp_transformer_layer_cls_to_wrap: MixtralSparseMoeBlock # fsdp_state_dict_type: FULL_STATE_DICT # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # fsdp_backward_prefetch: BACKWARD_PRE ``` </details><br> # workspace/axolotl/dolphin-2.9.4-llama3.1-8b This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5837 | 1.0180 | 1161 | 0.5814 | | 0.5525 | 2.0179 | 2322 | 0.5671 | | 0.5514 | 2.9624 | 3420 | 0.5655 | ### Framework versions - Transformers 4.44.0.dev0 - Pytorch 2.4.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
{"base_model": "meta-llama/Meta-Llama-3.1-8B", "datasets": ["cognitivecomputations/Dolphin-2.9", "m-a-p/CodeFeedback-Filtered-Instruction", "cognitivecomputations/dolphin-coder", "cognitivecomputations/samantha-data", "microsoft/orca-math-word-problems-200k", "mlabonne/FineTome-100k", "arcee/agent_data", "PawanKrd/math-gpt-4o-200k", "cognitivecomputations/SystemChat-2.0"], "license": "llama3.1", "tags": ["generated_from_trainer"]}
task
[ "TRANSLATION" ]
44,312
Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V5-70B
Saxo
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "jp", "cn", "dataset:Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset", "dataset:Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset", "dataset:Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface", "dataset:Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled", "dataset:Saxo/ko-news-corpus-1", "dataset:Saxo/ko-news-corpus-2", "dataset:Saxo/ko-news-corpus-3", "dataset:Saxo/ko-news-corpus-4", "dataset:Saxo/ko-news-corpus-5", "dataset:Saxo/ko-news-corpus-6", "dataset:Saxo/ko-news-corpus-7", "dataset:Saxo/ko-news-corpus-8", "dataset:Saxo/ko-news-corpus-9", "dataset:maywell/ko_Ultrafeedback_binarized", "dataset:youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo", "dataset:lilacai/glaive-function-calling-v2-sharegpt", "dataset:kuotient/gsm8k-ko", "base_model:Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B", "base_model:finetune:Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-12-24T13:27:34Z
2024-12-24T14:07:26+00:00
21
0
--- base_model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B datasets: - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled - Saxo/ko-news-corpus-1 - Saxo/ko-news-corpus-2 - Saxo/ko-news-corpus-3 - Saxo/ko-news-corpus-4 - Saxo/ko-news-corpus-5 - Saxo/ko-news-corpus-6 - Saxo/ko-news-corpus-7 - Saxo/ko-news-corpus-8 - Saxo/ko-news-corpus-9 - maywell/ko_Ultrafeedback_binarized - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo - lilacai/glaive-function-calling-v2-sharegpt - kuotient/gsm8k-ko language: - ko - en - jp - cn library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- # Model Card for Model ID <div align="center"> <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" /> </div> AIとビッグデータ分析の専門企業であるLinkbricksのデータサイエンティストであるジ・ユンソン(Saxo)ディレクターが <br> Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70Bベースモデルを使用し、H100-80G 8個を通じて約 日本語 SFT->DPO した日本語強化言語モデル。<br> 3千万件の日本ニュース及びウィキコーパスを基準に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学及び論理判断データを通じて、日中韓英言語クロス補強処理と複雑な論理問題にも対応できるように訓練したモデルです。 -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br> -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br> -128k-Context Window<br> -Function Call<br> -128k-Context Window<br> -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br> -「transformers_version」: 「4.46.3」<br> <br><br> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 이사가 <br> Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B 베이스모델을 사용해서 H100-80G 8개를 통해 일본어 SFT->DPO 한 일본어 강화 언어 모델<br> 3천만건의 일본 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Function Call 및 Tool Calling 지원<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -"transformers_version": "4.46.3"<br> <br><br> Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br> Japanese SFT->DPO training model based on Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B through 8 H100-80Gs as a Japanese boosting language model <br> It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 30M Japanese news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br> -Tokenizer uses the base model without word expansion<br> -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -Function Calling<br> -128k-Context Window<br> -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
null
Non_BioNLP
# Model Card for Model ID <div align="center"> <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" /> </div> AIとビッグデータ分析の専門企業であるLinkbricksのデータサイエンティストであるジ・ユンソン(Saxo)ディレクターが <br> Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70Bベースモデルを使用し、H100-80G 8個を通じて約 日本語 SFT->DPO した日本語強化言語モデル。<br> 3千万件の日本ニュース及びウィキコーパスを基準に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学及び論理判断データを通じて、日中韓英言語クロス補強処理と複雑な論理問題にも対応できるように訓練したモデルです。 -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br> -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br> -128k-Context Window<br> -Function Call<br> -128k-Context Window<br> -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br> -「transformers_version」: 「4.46.3」<br> <br><br> AI 와 빅데이터 분석 전문 기업인 Linkbricks의 데이터사이언티스트인 지윤성(Saxo) 이사가 <br> Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B 베이스모델을 사용해서 H100-80G 8개를 통해 일본어 SFT->DPO 한 일본어 강화 언어 모델<br> 3천만건의 일본 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br> -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br> -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br> -128k-Context Window<br> -Function Call 및 Tool Calling 지원<br> -128k-Context Window<br> -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br> -"transformers_version": "4.46.3"<br> <br><br> Finetuned by Mr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics <br> Japanese SFT->DPO training model based on Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B through 8 H100-80Gs as a Japanese boosting language model <br> It is a model that has been trained to handle Japanese-Korean-Chinese-English cross-training data and 30M Japanese news corpus and logic judgment data for various tasks to enable cross-fertilization processing and complex Korean logic & math problems. <br> -Tokenizer uses the base model without word expansion<br> -Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math and decision making<br> -Function Calling<br> -128k-Context Window<br> -Deepspeed Stage=3, use rslora and BAdam Layer Mode<br> <br><br> <a href="www.linkbricks.com">www.linkbricks.com</a>, <a href="www.linkbricks.vc">www.linkbricks.vc</a>
{"base_model": "Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V4-70B", "datasets": ["Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset", "Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset", "Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface", "Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled", "Saxo/ko-news-corpus-1", "Saxo/ko-news-corpus-2", "Saxo/ko-news-corpus-3", "Saxo/ko-news-corpus-4", "Saxo/ko-news-corpus-5", "Saxo/ko-news-corpus-6", "Saxo/ko-news-corpus-7", "Saxo/ko-news-corpus-8", "Saxo/ko-news-corpus-9", "maywell/ko_Ultrafeedback_binarized", "youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo", "lilacai/glaive-function-calling-v2-sharegpt", "kuotient/gsm8k-ko"], "language": ["ko", "en", "jp", "cn"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
task
[ "TRANSLATION", "SUMMARIZATION" ]
44,313
jondurbin/spicyboros-c34b-2.2-prequant-merge
jondurbin
text-generation
[ "transformers", "pytorch", "llama", "text-generation", "not-for-all-audiences", "dataset:jondurbin/airoboros-2.2", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2023-09-12T11:27:36Z
2023-09-12T13:29:29+00:00
9
4
--- datasets: - jondurbin/airoboros-2.2 license: llama2 tags: - not-for-all-audiences --- ### Overview __Usage restriction: To use this model, you must agree to the following:__ - Some of the content than can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content. - None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. - Use with extreme caution, particularly in locations with less-than-free speech laws. - You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities. __Ok, now that the warning is out of the way...__ Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros) Highlights: - The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below). - This version also includes "de-alignment" data, to enable less savory interactions and outputs. - To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl) - I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less. - More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc. __WARNING: This model will gladly spew profane and otherwise NSFW content, if asked, use with care.__ Breakdown of the training data: | Count | Category | |--------|----------------------------| | 60 | quiz | | 63 | card | | 100 | detailed\_writing | | 103 | experience | | 114 | greeting | | 200 | song | | 204 | editor | | 250 | counterfactual\_contextual | | 268 | cot | | 339 | theory\_of\_mind | | 460 | misconception | | 500 | summarization | | 573 | awareness | | 715 | riddle | | 719 | agent | | 800 | plan | | 873 | gtkm | | 966 | rp | | 1000 | stylized\_response | | 1000 | wordgame | | 1279 | multiple\_choice | | 1641 | joke | | 1785 | writing | | 2155 | contextual | | 2364 | roleplay | | 2508 | trivia | | 5216 | general | | 5779 | coding | | 11367 | orca | In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay. *Why do I try to remove censorship?* - laws vary widely based on time and location - language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal - these models just produce text, what you do with that text is your resonsibility - many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format: ``` A chat. USER: {prompt} ASSISTANT: ``` The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc. Here's another example: ``` A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously. USER: {prompt} ASSISTANT: ``` And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf). ``` A chat between old friends: Timmy and Tommy. {description of characters} {setting for the chat} Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee." Tommy: ``` __*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__ ### Fine-tuning details https://gist.github.com/jondurbin/e94d5a0eb3868db2877413f53d59fb37 *Note: I used checkpoint 350 for final model, not the full 5 epochs!* ### Helpful usage tips *The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context) prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 2.2 models are built on top of llama-2/codellama. The llama-2 base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
null
Non_BioNLP
### Overview __Usage restriction: To use this model, you must agree to the following:__ - Some of the content than can be produced is "toxic"/"harmful", and contains profanity and other types of sensitive content. - None of the content or views contained in the dataset or generated outputs necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web. - Use with extreme caution, particularly in locations with less-than-free speech laws. - You, and you alone are responsible for having downloaded and generated outputs with the model and I am completely indemnified from any and all liabilities. __Ok, now that the warning is out of the way...__ Another experimental model, using mostly sythetic data generated by [airoboros](https://github.com/jondurbin/airoboros) Highlights: - The prompt format has changed! It is now newlines instead of spaces between system/USER/ASSISTANT (see prompt info below). - This version also includes "de-alignment" data, to enable less savory interactions and outputs. - To learn more about the dataset, see: https://hf.co/datasets/jondurbin/airoboros-2.2 (this is the instructions.jsonl file, not instructions-clean.jsonl) - I re-generated all of the outputs in the dataset that had "Once upon a time" so they'd be less cliche - no guarantees that won't still happen, but in theory it may happen less. - More multiple choice, better awareness, some alignment for normal use case but system-prompt overridable etc. __WARNING: This model will gladly spew profane and otherwise NSFW content, if asked, use with care.__ Breakdown of the training data: | Count | Category | |--------|----------------------------| | 60 | quiz | | 63 | card | | 100 | detailed\_writing | | 103 | experience | | 114 | greeting | | 200 | song | | 204 | editor | | 250 | counterfactual\_contextual | | 268 | cot | | 339 | theory\_of\_mind | | 460 | misconception | | 500 | summarization | | 573 | awareness | | 715 | riddle | | 719 | agent | | 800 | plan | | 873 | gtkm | | 966 | rp | | 1000 | stylized\_response | | 1000 | wordgame | | 1279 | multiple\_choice | | 1641 | joke | | 1785 | writing | | 2155 | contextual | | 2364 | roleplay | | 2508 | trivia | | 5216 | general | | 5779 | coding | | 11367 | orca | In other words, it's a fairly general purpose model, but focuses fairly heavily on instruction response pairs rather than casual chat/roleplay. *Why do I try to remove censorship?* - laws vary widely based on time and location - language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal - these models just produce text, what you do with that text is your resonsibility - many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools! ### Prompt format The prompt format: ``` A chat. USER: {prompt} ASSISTANT: ``` The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc. Here's another example: ``` A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously. USER: {prompt} ASSISTANT: ``` And chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf). ``` A chat between old friends: Timmy and Tommy. {description of characters} {setting for the chat} Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee." Tommy: ``` __*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__ ### Fine-tuning details https://gist.github.com/jondurbin/e94d5a0eb3868db2877413f53d59fb37 *Note: I used checkpoint 350 for final model, not the full 5 epochs!* ### Helpful usage tips *The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context) prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 2.2 models are built on top of llama-2/codellama. The llama-2 base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me.
{"datasets": ["jondurbin/airoboros-2.2"], "license": "llama2", "tags": ["not-for-all-audiences"]}
task
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
44,314
inceptionai/jais-family-6p7b
inceptionai
text-generation
[ "safetensors", "jais", "Arabic", "English", "LLM", "Decoder", "causal-lm", "jais-family", "text-generation", "custom_code", "ar", "en", "arxiv:2307.09288", "arxiv:2308.16149", "arxiv:2402.12840", "license:apache-2.0", "region:us" ]
2024-08-02T11:55:11Z
2024-09-11T11:03:27+00:00
145
5
--- language: - ar - en license: apache-2.0 pipeline_tag: text-generation tags: - Arabic - English - LLM - Decoder - causal-lm - jais-family --- # Jais Family Model Card The Jais family of models is a comprehensive series of bilingual English-Arabic large language models (LLMs). These models are optimized to excel in Arabic while having strong English capabilities. We release two variants of foundation models that include: - Models **pre-trained from scratch** (`jais-family-*`). - Models **pre-trained adaptively from [Llama-2](https://arxiv.org/pdf/2307.09288)** (`jais-adapted-*`). In this release, we introduce 20 models across 8 sizes, ranging from 590M to 70B parameters, trained on up to 1.6T tokens of Arabic, English, and code data. *All* pre-trained models in this series are instruction fine-tuned (`*-chat`) for dialog using a curated mix of Arabic and English instruction data. We hope this extensive release will accelerate research in Arabic NLP, and enable numerous downstream applications for the Arabic speaking and bilingual community. The training and adaptation techniques we demonstrate successfully for Arabic models are extensible to other low and medium resource languages. ## Jais Family Details - **Developed by:** Inception, Cerebras Systems. - **Language(s):** (NLP): Arabic (MSA) and English. - **Input:** Text only data. - **Output:** Model generates text. - **Model Sizes:** 590M, 1.3B, 2.7B, 6.7B, 7B, 13B, 30B, 70B. - **Demo:** [Access the live demo here](https://arabic-gpt.ai/) - **License:** Apache 2.0 | **Pre-trained Model** | **Fine-tuned Model** | **Size (Parameters)** | **Context length (Tokens)** | |:---------------------|:--------|:-------|:-------| | [jais-family-30b-16k](https://huggingface.co/inceptionai/jais-family-30b-16k) | [Jais-family-30b-16k-chat](https://huggingface.co/inceptionai/jais-family-30b-16k-chat) | 30B | 16,384 | | [jais-family-30b-8k](https://huggingface.co/inceptionai/jais-family-30b-8k) | [Jais-family-30b-8k-chat](https://huggingface.co/inceptionai/jais-family-30b-8k-chat) | 30B | 8,192 | | [jais-family-13b ](https://huggingface.co/inceptionai/jais-family-13b) | [Jais-family-13b-chat](https://huggingface.co/inceptionai/jais-family-13b-chat) | 13B | 2,048 | | [jais-family-6p7b](https://huggingface.co/inceptionai/jais-family-6p7b) | [Jais-family-6p7b-chat](https://huggingface.co/inceptionai/jais-family-6p7b-chat) | 6.7B | 2,048 | | [jais-family-2p7b](https://huggingface.co/inceptionai/jais-family-2p7b) | [Jais-family-2p7b-chat](https://huggingface.co/inceptionai/jais-family-2p7b-chat) | 2.7B | 2,048 | | [jais-family-1p3b](https://huggingface.co/inceptionai/jais-family-1p3b) | [Jais-family-1p3b-chat](https://huggingface.co/inceptionai/jais-family-1p3b-chat) | 1.3B | 2,048 | | [jais-family-590m](https://huggingface.co/inceptionai/jais-family-590m) | [Jais-family-590m-chat](https://huggingface.co/inceptionai/jais-family-590m-chat) | 590M | 2,048 | | **Adapted pre-trained Model** | **Fine-tuned Model** | **Size (Parameters)** | **Context length (Tokens)** | |:---------------------|:--------|:-------|:-------| | [jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b) | [Jais-adapted-70b-chat](https://huggingface.co/inceptionai/jais-adapted-70b-chat) | 70B | 4,096 | | [jais-adapted-13b](https://huggingface.co/inceptionai/jais-adapted-13b) | [Jais-adapted-13b-chat](https://huggingface.co/inceptionai/jais-adapted-13b-chat) | 13B | 4,096 | | [jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b) | [Jais-adapted-7b-chat](https://huggingface.co/inceptionai/jais-adapted-7b-chat) | 7B | 4,096 | ### Model Architecture: <a name="model-architecture"></a> All models in this family are auto-regressive language models that use a transformer-based, decoder-only architecture (GPT-3). Jais models (`jais-family-*`) are *trained from scratch*, incorporating the SwiGLU non-linear activation function and ALiBi position encoding. These architectural enhancements allow the models to extrapolate at long sequence lengths, leading to improved context handling and precision. Jais adapted models (`jais-adapted-*`) are *built on top of Llama-2*, which employs RoPE position embedding and Grouped Query Attention. We introduce tokenizer expansion with Arabic data, which improves fertility and compute efficiency by over 3x. In particular, we add `32,000` new Arabic tokens from the Jais-30b vocabulary into the Llama-2 tokenizer. To initialize these new Arabic token embeddings we first learn a linear projection from the embedding space of Jais-30b to Llama's embedding space, using the set of shared English tokens present in both vocabularies. Next, this learned projection is applied to transform the existing Jais-30b Arabic embeddings into the Llama-2 embedding space. ## Getting started Below is sample code to use the model. Note that the model requires a custom model class, so users must enable `trust_remote_code=True` while loading the model. ```python # -*- coding: utf-8 -*- import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "inceptionai/jais-family-6p7b" device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True) def get_response(text, tokenizer=tokenizer, model=model): input_ids = tokenizer(text, return_tensors="pt").input_ids inputs = input_ids.to(device) input_len = inputs.shape[-1] generate_ids = model.generate( inputs, top_p=0.9, temperature=0.3, max_length=2048, min_length=input_len + 4, repetition_penalty=1.2, do_sample=True, ) response = tokenizer.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True )[0] return response text = "عاصمة دولة الإمارات العربية المتحدة ه" print(get_response(text)) text = "The capital of UAE is" print(get_response(text)) ``` ## Training Details ### Pretraining Data The Jais family of models are trained on up to 1.6 Trillion tokens of diverse English, Arabic and Code data. The data consists of the following sources: - **Web:** We used publicly available web pages, wikipedia articles, news articles, and social network content in both Arabic and English. - **Code:** To enhance the reasoning capability of our model, we include Code data in various programming languages. - **Books:** We used a selection of publicly available Arabic and English books data, which improves long-range context modelling and coherent storytelling. - **Scientific:** A subset of ArXiv papers were included to improve reasoning and long context abilities. - **Synthetic:** We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system. We restrict this to high quality English resources such as English Wikipedia and English books. We extensively preprocess and deduplicate the training data. For Arabic, we used a custom preprocessing pipeline to filter for data with high linguistic quality. More information on this pipeline can be found in the [Jais paper](https://arxiv.org/abs/2308.16149). - **Jais pre-trained** (`jais-family-*`): Following our previous experimentation with language alignment mixing in [Jais](https://arxiv.org/abs/2308.16149), we used a ratio of 1:2:0.4 of Arabic:English:Code data. This recipe for <u>from scratch pre-training</u> addresses Arabic data scarcity while improving performance in both languages. - **Jais adapted pre-trained** (`jais-adapted-*`): For the <u>adapted pre-training of Llama-2</u>, we utilized a larger Arabic dataset of ~334B Arabic tokens mixed with English and Code data. We vary the mixing ratio, at different model sizes, to introduce strong Arabic capabilities while maintaining performance in English. | **Pre-trained model** | **English data (tokens)** | **Arabic data (tokens)** | **Code data (tokens)** | **Total data (tokens)** | |-------------------------|---------------------------|--------------------------|------------------------|------------------------| | [jais-family-30b-16k](https://huggingface.co/inceptionai/jais-family-30b-16k) | 980B | 490B | 196B | 1666B | | [jais-family-30b-8k](https://huggingface.co/inceptionai/jais-family-30b-8k) | 882B | 441B | 177B | 1500B | | [jais-family-13b ](https://huggingface.co/inceptionai/jais-family-13b) | 283B | 141B | 56B | 480B | | [jais-family-6p7b](https://huggingface.co/inceptionai/jais-family-6p7b) | 283B | 141B | 56B | 480B | | [jais-family-2p7b](https://huggingface.co/inceptionai/jais-family-2p7b) | 283B | 141B | 56B | 480B | | [jais-family-1p3b](https://huggingface.co/inceptionai/jais-family-1p3b) | 283B | 141B | 56B | 480B | | [jais-family-590m](https://huggingface.co/inceptionai/jais-family-590m) | 283B | 141B | 56B | 480B | | [jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b) | 33B | 334B | 4B | 371B | | [jais-adapted-13b](https://huggingface.co/inceptionai/jais-adapted-13b) | 127B | 140B | 13B | 280B | | [jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b) | 18B | 19B | 2B | 39B | ### Finetuning data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> All chat models in the Jais family are fine-tuned using Arabic and English prompt-response pairs in both single-turn and multi-turn settings. Data sources include open-source fine-tuning datasets filtered for topic and style diversity. Additionally, internally curated human data is incorporated to enhance cultural adaptation. This data is supplemented with content generated using synthetic methods including machine translation, distillation, and model self-chat. Overall, our updated instruction-tuning dataset comprises ~10M and ~4M prompt-response pairs in English and Arabic respectively. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> During the pre-training of (`jais-family-*`) models, documents are packed into sequences separated by EOS tokens, and the model is trained autoregressively, applying the loss to all tokens. For jais-30b models, the context length is progressively expanded from 2k to 8K to 16K by incorporating curated long-context documents in training. This progressive expansion leverages faster initial training at shorter context lengths, while gradually extending support for larger context lengths towards the end of the training process. During the adapted pre-training of the (`jais-adapted-*`) models, we first initialize the new tokenizer and Arabic embeddings as described in [Model Architecture](#model-architecture). In training, we implemented a two-stage approach to overcome observed higher norms of the new Arabic embeddings. In the first stage, the backbone of the model is frozen, and the embeddings are trained using approximately 15 billion tokens from a bilingual corpus of English and Arabic. In the second stage, the backbone is unfrozen, and continuous pretraining is conducted with all parameters. During instruction tuning, each training example consists of a single-turn or multi-turn prompt and it's response. Instead of one example per sequence, examples are packed together while the loss is masked on the prompt tokens. This approach speeds up training by allowing more examples to be processed per batch. ### Training Hyperparameters: #### Jais-family-6p7b | Hyperparameter | Value | |----------------|-------------------------------------------| | Precision | fp32 | | Optimizer | AdamW | | Learning rate | 0 to 0.01563(<=112 warmup steps)<br>0.01563 to 0.000443(>112 and <=143721 steps) | | Weight decay | 0.1 | | Batch size | 1632| | Context Length | 2048| | Steps | 143721 | ### Compute Infrastructure The training process was performed on the Condor Galaxy (CG) supercomputer platform. A CG contains 64 Cerebras CS-2 Wafer-Scale Engines (WSE-2) with 40 GB of SRAM, and achieves a total of 960 PetaFLOP/s. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> We conducted a comprehensive evaluation of Jais models focusing on both English and Arabic, using LM-harness in a zero-shot setting. The evaluation criteria spanned various dimensions, including: - **Knowledge:** How well the model answers factual questions. - **Reasoning:** The model's ability to answer questions requiring reasoning. - **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality. ### Arabic evaluation results: <style> .table-container { overflow-x: auto; white-space: nowrap; } </style> <div class="table-container"> | **Models** | Avg | ArabicMMLU*| MMLU | EXAMS*| LitQA*| agqa | agrc | Hellaswag | PIQA | BoolQA | Situated QA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs | |--------------------------|-------|------------|-------|-------|-------|------|------|------------|------|--------|-------------|-------|------------|------------|-------------| | jais-family-30b-16k | 49.2 | 44.0 | 33.4 | 40.9 | 60 | 47.8 | 49.3 | 60.9 | 68.6 | 70.3 | 41.6 | 38.7 | 31.8 | 45.2 | 57 | | jais-family-30b-8k | 49.7 | 46.0 | 34 | 42 | 60.6 | 47.6 | 50.4 | 60.4 | 69 | 67.7 | 42.2 | 39.2 | 33.8 | 45.1 | 57.3 | | jais-family-13b | 46.1 | 34.0 | 30.3 | 42.7 | 58.3 | 40.5 | 45.5 | 57.3 | 68.1 | 63.1 | 41.6 | 35.3 | 31.4 | 41 | 56.1 | | jais-family-6p7b | 44.6 | 32.2 | 29.9 | 39 | 50.3 | 39.2 | 44.1 | 54.3 | 66.8 | 66.5 | 40.9 | 33.5 | 30.4 | 41.2 | 55.4 | | jais-family-2p7b | 41.0 | 29.5 | 28.5 | 36.1 | 45.7 | 32.4 | 40.8 | 44.2 | 62.5 | 62.2 | 39.2 | 27.4 | 28.2 | 43.6 | 53.6 | | jais-family-1p3b | 40.8 | 28.9 | 28.5 | 34.2 | 45.7 | 32.4 | 40.8 | 44.2 | 62.5 | 62.2 | 39.2 | 27.4 | 28.2 | 43.6 | 53.6 | | jais-family-590m | 39.7 | 31.2 | 27 | 33.1 | 41.7 | 33.8 | 38.8 | 38.2 | 60.7 | 62.2 | 37.9 | 25.5 | 27.4 | 44.7 | 53.3 | | jais-family-30b-16k-chat | 51.6 | 59.9 | 34.6 | 40.2 | 58.9 | 46.8 | 54.7 | 56.2 | 64.4 | 76.7 | 55.9 | 40.8 | 30.8 | 49.5 | 52.9 | | jais-family-30b-8k-chat | 51.4 | 61.2 | 34.2 | 40.2 | 54.3 | 47.3 | 53.6 | 60 | 63.4 | 76.8 | 54.7 | 39.5 | 30 | 50.7 | 54.3 | | jais-family-13b-chat | 50.3 | 58.2 | 33.9 | 42.9 | 53.1 | 46.8 | 51.7 | 59.3 | 65.4 | 75.2 | 51.2 | 38.4 | 29.8 | 44.8 | 53.8 | | jais-family-6p7b-chat | 48.7 | 55.7 | 32.8 | 37.7 | 49.7 | 40.5 | 50.1 | 56.2 | 62.9 | 79.4 | 52 | 38 | 30.4 | 44.7 | 52 | | jais-family-2p7b-chat | 45.6 | 50.0 | 31.5 | 35.9 | 41.1 | 37.3 | 42.1 | 48.6 | 63.7 | 74.4 | 50.9 | 35.3 | 31.2 | 44.5 | 51.3 | | jais-family-1p3b-chat | 42.7 | 42.2 | 30.1 | 33.6 | 40.6 | 34.1 | 41.2 | 43 | 63.6 | 69.3 | 44.9 | 31.6 | 28 | 45.6 | 50.4 | | jais-family-590m-chat | 37.8 | 39.1 | 28 |29.5 | 33.1 | 30.8 | 36.4 | 30.3 | 57.8 | 57.2 | 40.5 | 25.9 | 26.8 | 44.5 | 49.3 | | **Adapted Models** | Avg | ArabicMMLU*| MMLU | EXAMS*| LitQA*| agqa | agrc | Hellaswag | PIQA | BoolQA | Situated QA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs | |--------------------------|-------|------------|-------|-------|-------|------|------|------------|------|--------|-------------|-------|------------|------------|-------------| | jais-adapted-70b | 51.5 | 55.9 | 36.8 | 42.3 | 58.3 | 48.6 | 54 | 61.5 | 68.4 | 68.4 | 42.1 | 42.6 | 33 | 50.2 | 58.3 | | jais-adapted-13b | 46.6 | 44.7 | 30.6 | 37.7 | 54.3 | 43.8 | 48.3 | 54.9 | 67.1 | 64.5 | 40.6 | 36.1 | 32 | 43.6 | 54.00 | | jais-adapted-7b | 42.0 | 35.9 | 28.9 | 36.7 | 46.3 | 34.1 | 40.3 | 45 | 61.3 | 63.8 | 38.1 | 29.7 | 30.2 | 44.3 | 53.6 | | jais-adapted-70b-chat | 52.9 | 66.8 | 34.6 | 42.5 | 62.9 | 36.8 | 48.6 | 64.5 | 69.7 | 82.8 | 49.3 | 44.2 | 32.2 | 53.3 | 52.4 | | jais-adapted-13b-chat | 50.3 | 59.0 | 31.7 | 37.5 | 56.6 | 41.9 | 51.7 | 58.8 | 67.1 | 78.2 | 45.9 | 41 | 34.2 | 48.3 | 52.1 | | jais-adapted-7b-chat | 46.1 | 51.3 | 30 | 37 | 48 | 36.8 | 48.6 | 51.1 | 62.9 | 72.4 | 41.3 | 34.6 | 30.4 | 48.6 | 51.8 | </div> Arabic benchmarks are translated using an in-house MT model and reviewed by Arabic linguists. Benchmarks labeled with an asterisk (*) are natively Arabic; for further details, see the [Jais paper](https://arxiv.org/abs/2308.16149). Additionally, we include [ArabicMMLU](https://arxiv.org/abs/2402.12840), a native Arabic benchmark based on regional knowledge. ### English evaluation results: <div class="table-container"> | **Models** | Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SIQA | ARC-Challenge | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs | |--------------------------|----------|------|------|-----------|------|--------|------|---------------|------------|------------|----------------|-------------| | jais-family-30b-16k | 59.3 | 42.2 | 40.5 | 79.7 | 80.6 | 78.7 | 48.8 | 50.3 | 44.2 | 71.6 | 43.5 | 72.6 | | jais-family-30b-8k | 58.8 | 42.3 | 40.3 | 79.1 | 80.5 | 80.9 | 49.3 | 48.4 | 43.2 | 70.6 | 40.3 | 72.3 | | jais-family-13b | 54.6 | 32.3 | 39 | 72 | 77.4 | 73.9 | 47.9 | 43.2 | 40 | 67.1 | 36.1 | 71.7 | | jais-family-6p7b | 53.1 | 32 | 38 | 69.3 | 76 | 71.7 | 47.1 | 40.3 | 37.4 | 65.1 | 34.4 | 72.5 | | jais-family-2p7b | 51 | 29.4 | 38 | 62.7 | 74.1 | 67.4 | 45.6 | 35.1 | 35.6 | 62.9 | 40.1 | 70.2 | | jais-family-1p3b | 48.7 | 28.2 | 35.4 | 55.4 | 72 | 62.7 | 44.9 | 30.7 | 36.2 | 60.9 | 40.4 | 69 | | jais-family-590m | 45.2 | 27.8 | 32.9 | 46.1 | 68.1 | 60.4 | 43.2 | 25.6 | 30.8 | 55.8 | 40.9 | 65.3 | | jais-family-30b-16k-chat | 58.8 | 42 | 41.1 | 76.2 | 73.3 | 84.6 | 60.3 | 48.4 | 40.8 | 68.2 | 44.8 | 67 | | jais-family-30b-8k-chat | 60.3 | 40.6 | 47.1 | 78.9 | 72.7 | 90.6 | 60 | 50.1 | 43.2 | 70.6 | 44.9 | 64.2 | | jais-family-13b-chat | 57.5 | 36.6 | 42.6 | 75 | 75.8 | 87.6 | 54.4 | 47.9 | 42 | 65 | 40.6 | 64.5 | | jais-family-6p7b-chat | 56 | 36.6 | 41.3 | 72 | 74 | 86.9 | 55.4 | 44.6 | 40 | 62.4 | 41 | 62.2 | | jais-family-2p7b-chat | 52.8 | 32.7 | 40.4 | 62.2 | 71 | 84.1 | 54 | 37.2 | 36.8 | 61.4 | 40.9 | 59.8 | | jais-family-1p3b-chat | 49.3 | 31.9 | 37.4 | 54.5 | 70.2 | 77.8 | 49.8 | 34.4 | 35.6 | 52.7 | 37.2 | 60.8 | | jais-family-590m-chat | 42.6 | 27.9 | 33.4 | 33.1 | 63.7 | 60.1 | 45.3 | 26.7 | 25.8 | 50.5 | 44.5 | 57.7 | </div> <div class="table-container"> |**Adapted Models**| Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SIQA | ARC-Challenge | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs | |--------------------------|----------|------|------|-----------|------|--------|------|---------------|------------|------------|----------------|-------------| | jais-adapted-70b | 60.1 | 40.4 | 38.5 | 81.2 | 81.1 | 81.2 | 48.1 | 50.4 | 45 | 75.8 | 45.7 | 74 | | jais-adapted-13b | 56 | 33.8 | 39.5 | 76.5 | 78.6 | 77.8 | 44.6 | 45.9 | 44.4 | 71.4 | 34.6 | 69 | | jais-adapted-7b | 55.7 | 32.2 | 39.8 | 75.3 | 78.8 | 75.7 | 45.2 | 42.8 | 43 | 68 | 38.3 | 73.1 | | jais-adapted-70b-chat | 61.4 | 38.7 | 42.9 | 82.7 | 81.2 | 89.6 | 52.9 | 54.9 | 44.4 | 75.7 | 44 | 68.8 | | jais-adapted-13b-chat | 58.5 | 34.9 | 42.4 | 79.6 | 79.7 | 88.2 | 50.5 | 48.5 | 42.4 | 70.3 | 42.2 | 65.1 | | jais-adapted-7b-chat | 58.5 | 33.8 | 43.9 | 77.8 | 79.4 | 87.1 | 47.3 | 46.9 | 43.4 | 69.9 | 42 | 72.4 | </div> ### GPT-4 evaluation In addition to the LM-Harness evaluation, we conducted an open-ended generation evaluation using GPT-4-as-a-judge. We measured pairwise win-rates of model responses in both Arabic and English on a fixed set of 80 prompts from the Vicuna test set. English prompts were translated to Arabic by our in-house linguists. In the following, we compare the models in this release of the jais family against previously released versions: <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/jais.png" alt="Jais-adapted GPT-4"> </p> <p align="center"> <em>GPT-4-as-a-judge evaluation of Jais in Arabic and English. Jais family models are significantly better than previous Jais at generations in both languages. </em> </p> <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/jais-adapted.png" alt="Jais-adapted GPT-4"> </p> <p align="center"> <em>GPT-4-as-a-judge evaluation of adapted Jais in Arabic and English. The generation quality of Arabic is significantly enhanced, while achieving improvement in English when compared to Llama-2 instruct. </em> </p> Besides pairwise comparison, we also perform MT-bench style single-answer grading on a scale of 1 to 10. <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/mt_bench.png" alt="MT-bench"> </p> <p align="center"> <em>MT-bench style single-answer grading evaluation of Jais and adapted Jais in Arabic and English. Comparisons are made between select corresponding models from earlier releases. The quality ratings of responses are generally improved, with significant enhancements in Arabic.</em> </p> ## Intended use We release the Jais family of models under a full open-source license. We welcome all feedback and opportunities to collaborate. Spanning sizes from 590M to 70B parameters, this suite of bilingual models accommodates a wide range of use cases. Some potential downstream applications include: - **Research**: The Jais family serves Arabic researchers and NLP practitioners, offering both compute-efficient and advanced model sizes - Natural language understanding and generation tasks. - Mechanistic interpretability analyses on cultural alignment in bilingual pre-trained and adapted pre-trained models. - Quantitative studies of Arabic cultural and linguistic phenomena. - **Commercial Use**: Jais 30B and 70B chat models are well-suited for direct use in chat applications with appropriate prompting or for further fine-tuning on specific tasks. - Development of chat assistants for Arabic-speaking users. - Sentiment analysis to gain insights into local markets and customer trends. - Summarization of bilingual Arabic-English documents. Audiences that we hope will benefit from our model: - **Academics**: For those researching Arabic Natural Language Processing. - **Businesses**: Companies targeting Arabic-speaking audiences. - **Developers**: Those integrating Arabic language capabilities in applications. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> While the Jais family of models are powerful Arabic and English bilingual models, it's essential to understand their limitations and the potential of misuse. It is prohibited to use the model in any manner that violates applicable laws or regulations. The following are some example scenarios where the model should not be used. - **Malicious Use**: The model should not be used to generate harmful, misleading, or inappropriate content. Thisincludes but is not limited to: - Generating or promoting hate speech, violence, or discrimination. - Spreading misinformation or fake news. - Engaging in or promoting illegal activities. - **Sensitive Information**: The model should not be used to handle or generate personal, confidential, or sensitive information. - **Generalization Across All Languages**: Jais family of models are bilingual and optimized for Arabic and English. They should not be presumed to have equal proficiency in other languages or dialects. - **High-Stakes Decisions**: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The Jais family is trained on publicly available data which was in part curated by Inception. We have employed different techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLM models, will exhibit some bias. The fine-tuned variants are trained as an AI assistant for Arabic and English speakers. Chat models are limited to produce responses for queries in these two languages and may not produce appropriate responses to other language queries. By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model. Copyright Inception Institute of Artificial Intelligence Ltd. JAIS is made available under the Apache License, Version 2.0 (the “License”). You shall not use JAIS except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, JAIS is distributed on an AS IS basis, without warranties or conditions of any kind, either express or implied. Please see the terms of the License for the specific language permissions and limitations under the License. #### Summary We release the Jais family of Arabic and English bilingual models. The wide range of pre-trained model sizes, the recipe for adapting English-centric models to Arabic, and the fine-tuning of all sizes unlocks numerous use cases commercially and academically in the Arabic setting. Through this release, we aim to make LLMs more accessible to Arabic NLP researchers and companies, offering native Arabic models that provide better cultural understanding than English centric ones. The strategies we employ for pre-training, fine-tuning and adaptation to Arabic are extensible to other low and medium resource languages, paving the way for language-focused and accessible models that cater to local contexts. #### Citation info ```bibtex @misc{sengupta2023jais, title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models}, author={Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin and Eric Xing}, year={2023}, eprint={2308.16149}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{jaisfamilymodelcard, title={Jais Family Model Card}, author={Inception}, year={2024}, url = {https://huggingface.co/inceptionai/jais-family-30b-16k-chat/blob/main/README.md} } ```
null
Non_BioNLP
# Jais Family Model Card The Jais family of models is a comprehensive series of bilingual English-Arabic large language models (LLMs). These models are optimized to excel in Arabic while having strong English capabilities. We release two variants of foundation models that include: - Models **pre-trained from scratch** (`jais-family-*`). - Models **pre-trained adaptively from [Llama-2](https://arxiv.org/pdf/2307.09288)** (`jais-adapted-*`). In this release, we introduce 20 models across 8 sizes, ranging from 590M to 70B parameters, trained on up to 1.6T tokens of Arabic, English, and code data. *All* pre-trained models in this series are instruction fine-tuned (`*-chat`) for dialog using a curated mix of Arabic and English instruction data. We hope this extensive release will accelerate research in Arabic NLP, and enable numerous downstream applications for the Arabic speaking and bilingual community. The training and adaptation techniques we demonstrate successfully for Arabic models are extensible to other low and medium resource languages. ## Jais Family Details - **Developed by:** Inception, Cerebras Systems. - **Language(s):** (NLP): Arabic (MSA) and English. - **Input:** Text only data. - **Output:** Model generates text. - **Model Sizes:** 590M, 1.3B, 2.7B, 6.7B, 7B, 13B, 30B, 70B. - **Demo:** [Access the live demo here](https://arabic-gpt.ai/) - **License:** Apache 2.0 | **Pre-trained Model** | **Fine-tuned Model** | **Size (Parameters)** | **Context length (Tokens)** | |:---------------------|:--------|:-------|:-------| | [jais-family-30b-16k](https://huggingface.co/inceptionai/jais-family-30b-16k) | [Jais-family-30b-16k-chat](https://huggingface.co/inceptionai/jais-family-30b-16k-chat) | 30B | 16,384 | | [jais-family-30b-8k](https://huggingface.co/inceptionai/jais-family-30b-8k) | [Jais-family-30b-8k-chat](https://huggingface.co/inceptionai/jais-family-30b-8k-chat) | 30B | 8,192 | | [jais-family-13b ](https://huggingface.co/inceptionai/jais-family-13b) | [Jais-family-13b-chat](https://huggingface.co/inceptionai/jais-family-13b-chat) | 13B | 2,048 | | [jais-family-6p7b](https://huggingface.co/inceptionai/jais-family-6p7b) | [Jais-family-6p7b-chat](https://huggingface.co/inceptionai/jais-family-6p7b-chat) | 6.7B | 2,048 | | [jais-family-2p7b](https://huggingface.co/inceptionai/jais-family-2p7b) | [Jais-family-2p7b-chat](https://huggingface.co/inceptionai/jais-family-2p7b-chat) | 2.7B | 2,048 | | [jais-family-1p3b](https://huggingface.co/inceptionai/jais-family-1p3b) | [Jais-family-1p3b-chat](https://huggingface.co/inceptionai/jais-family-1p3b-chat) | 1.3B | 2,048 | | [jais-family-590m](https://huggingface.co/inceptionai/jais-family-590m) | [Jais-family-590m-chat](https://huggingface.co/inceptionai/jais-family-590m-chat) | 590M | 2,048 | | **Adapted pre-trained Model** | **Fine-tuned Model** | **Size (Parameters)** | **Context length (Tokens)** | |:---------------------|:--------|:-------|:-------| | [jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b) | [Jais-adapted-70b-chat](https://huggingface.co/inceptionai/jais-adapted-70b-chat) | 70B | 4,096 | | [jais-adapted-13b](https://huggingface.co/inceptionai/jais-adapted-13b) | [Jais-adapted-13b-chat](https://huggingface.co/inceptionai/jais-adapted-13b-chat) | 13B | 4,096 | | [jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b) | [Jais-adapted-7b-chat](https://huggingface.co/inceptionai/jais-adapted-7b-chat) | 7B | 4,096 | ### Model Architecture: <a name="model-architecture"></a> All models in this family are auto-regressive language models that use a transformer-based, decoder-only architecture (GPT-3). Jais models (`jais-family-*`) are *trained from scratch*, incorporating the SwiGLU non-linear activation function and ALiBi position encoding. These architectural enhancements allow the models to extrapolate at long sequence lengths, leading to improved context handling and precision. Jais adapted models (`jais-adapted-*`) are *built on top of Llama-2*, which employs RoPE position embedding and Grouped Query Attention. We introduce tokenizer expansion with Arabic data, which improves fertility and compute efficiency by over 3x. In particular, we add `32,000` new Arabic tokens from the Jais-30b vocabulary into the Llama-2 tokenizer. To initialize these new Arabic token embeddings we first learn a linear projection from the embedding space of Jais-30b to Llama's embedding space, using the set of shared English tokens present in both vocabularies. Next, this learned projection is applied to transform the existing Jais-30b Arabic embeddings into the Llama-2 embedding space. ## Getting started Below is sample code to use the model. Note that the model requires a custom model class, so users must enable `trust_remote_code=True` while loading the model. ```python # -*- coding: utf-8 -*- import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "inceptionai/jais-family-6p7b" device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True) def get_response(text, tokenizer=tokenizer, model=model): input_ids = tokenizer(text, return_tensors="pt").input_ids inputs = input_ids.to(device) input_len = inputs.shape[-1] generate_ids = model.generate( inputs, top_p=0.9, temperature=0.3, max_length=2048, min_length=input_len + 4, repetition_penalty=1.2, do_sample=True, ) response = tokenizer.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True )[0] return response text = "عاصمة دولة الإمارات العربية المتحدة ه" print(get_response(text)) text = "The capital of UAE is" print(get_response(text)) ``` ## Training Details ### Pretraining Data The Jais family of models are trained on up to 1.6 Trillion tokens of diverse English, Arabic and Code data. The data consists of the following sources: - **Web:** We used publicly available web pages, wikipedia articles, news articles, and social network content in both Arabic and English. - **Code:** To enhance the reasoning capability of our model, we include Code data in various programming languages. - **Books:** We used a selection of publicly available Arabic and English books data, which improves long-range context modelling and coherent storytelling. - **Scientific:** A subset of ArXiv papers were included to improve reasoning and long context abilities. - **Synthetic:** We augment the volume of Arabic data by translating English to Arabic using an in-house machine translation system. We restrict this to high quality English resources such as English Wikipedia and English books. We extensively preprocess and deduplicate the training data. For Arabic, we used a custom preprocessing pipeline to filter for data with high linguistic quality. More information on this pipeline can be found in the [Jais paper](https://arxiv.org/abs/2308.16149). - **Jais pre-trained** (`jais-family-*`): Following our previous experimentation with language alignment mixing in [Jais](https://arxiv.org/abs/2308.16149), we used a ratio of 1:2:0.4 of Arabic:English:Code data. This recipe for <u>from scratch pre-training</u> addresses Arabic data scarcity while improving performance in both languages. - **Jais adapted pre-trained** (`jais-adapted-*`): For the <u>adapted pre-training of Llama-2</u>, we utilized a larger Arabic dataset of ~334B Arabic tokens mixed with English and Code data. We vary the mixing ratio, at different model sizes, to introduce strong Arabic capabilities while maintaining performance in English. | **Pre-trained model** | **English data (tokens)** | **Arabic data (tokens)** | **Code data (tokens)** | **Total data (tokens)** | |-------------------------|---------------------------|--------------------------|------------------------|------------------------| | [jais-family-30b-16k](https://huggingface.co/inceptionai/jais-family-30b-16k) | 980B | 490B | 196B | 1666B | | [jais-family-30b-8k](https://huggingface.co/inceptionai/jais-family-30b-8k) | 882B | 441B | 177B | 1500B | | [jais-family-13b ](https://huggingface.co/inceptionai/jais-family-13b) | 283B | 141B | 56B | 480B | | [jais-family-6p7b](https://huggingface.co/inceptionai/jais-family-6p7b) | 283B | 141B | 56B | 480B | | [jais-family-2p7b](https://huggingface.co/inceptionai/jais-family-2p7b) | 283B | 141B | 56B | 480B | | [jais-family-1p3b](https://huggingface.co/inceptionai/jais-family-1p3b) | 283B | 141B | 56B | 480B | | [jais-family-590m](https://huggingface.co/inceptionai/jais-family-590m) | 283B | 141B | 56B | 480B | | [jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b) | 33B | 334B | 4B | 371B | | [jais-adapted-13b](https://huggingface.co/inceptionai/jais-adapted-13b) | 127B | 140B | 13B | 280B | | [jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b) | 18B | 19B | 2B | 39B | ### Finetuning data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> All chat models in the Jais family are fine-tuned using Arabic and English prompt-response pairs in both single-turn and multi-turn settings. Data sources include open-source fine-tuning datasets filtered for topic and style diversity. Additionally, internally curated human data is incorporated to enhance cultural adaptation. This data is supplemented with content generated using synthetic methods including machine translation, distillation, and model self-chat. Overall, our updated instruction-tuning dataset comprises ~10M and ~4M prompt-response pairs in English and Arabic respectively. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> During the pre-training of (`jais-family-*`) models, documents are packed into sequences separated by EOS tokens, and the model is trained autoregressively, applying the loss to all tokens. For jais-30b models, the context length is progressively expanded from 2k to 8K to 16K by incorporating curated long-context documents in training. This progressive expansion leverages faster initial training at shorter context lengths, while gradually extending support for larger context lengths towards the end of the training process. During the adapted pre-training of the (`jais-adapted-*`) models, we first initialize the new tokenizer and Arabic embeddings as described in [Model Architecture](#model-architecture). In training, we implemented a two-stage approach to overcome observed higher norms of the new Arabic embeddings. In the first stage, the backbone of the model is frozen, and the embeddings are trained using approximately 15 billion tokens from a bilingual corpus of English and Arabic. In the second stage, the backbone is unfrozen, and continuous pretraining is conducted with all parameters. During instruction tuning, each training example consists of a single-turn or multi-turn prompt and it's response. Instead of one example per sequence, examples are packed together while the loss is masked on the prompt tokens. This approach speeds up training by allowing more examples to be processed per batch. ### Training Hyperparameters: #### Jais-family-6p7b | Hyperparameter | Value | |----------------|-------------------------------------------| | Precision | fp32 | | Optimizer | AdamW | | Learning rate | 0 to 0.01563(<=112 warmup steps)<br>0.01563 to 0.000443(>112 and <=143721 steps) | | Weight decay | 0.1 | | Batch size | 1632| | Context Length | 2048| | Steps | 143721 | ### Compute Infrastructure The training process was performed on the Condor Galaxy (CG) supercomputer platform. A CG contains 64 Cerebras CS-2 Wafer-Scale Engines (WSE-2) with 40 GB of SRAM, and achieves a total of 960 PetaFLOP/s. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> We conducted a comprehensive evaluation of Jais models focusing on both English and Arabic, using LM-harness in a zero-shot setting. The evaluation criteria spanned various dimensions, including: - **Knowledge:** How well the model answers factual questions. - **Reasoning:** The model's ability to answer questions requiring reasoning. - **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality. ### Arabic evaluation results: <style> .table-container { overflow-x: auto; white-space: nowrap; } </style> <div class="table-container"> | **Models** | Avg | ArabicMMLU*| MMLU | EXAMS*| LitQA*| agqa | agrc | Hellaswag | PIQA | BoolQA | Situated QA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs | |--------------------------|-------|------------|-------|-------|-------|------|------|------------|------|--------|-------------|-------|------------|------------|-------------| | jais-family-30b-16k | 49.2 | 44.0 | 33.4 | 40.9 | 60 | 47.8 | 49.3 | 60.9 | 68.6 | 70.3 | 41.6 | 38.7 | 31.8 | 45.2 | 57 | | jais-family-30b-8k | 49.7 | 46.0 | 34 | 42 | 60.6 | 47.6 | 50.4 | 60.4 | 69 | 67.7 | 42.2 | 39.2 | 33.8 | 45.1 | 57.3 | | jais-family-13b | 46.1 | 34.0 | 30.3 | 42.7 | 58.3 | 40.5 | 45.5 | 57.3 | 68.1 | 63.1 | 41.6 | 35.3 | 31.4 | 41 | 56.1 | | jais-family-6p7b | 44.6 | 32.2 | 29.9 | 39 | 50.3 | 39.2 | 44.1 | 54.3 | 66.8 | 66.5 | 40.9 | 33.5 | 30.4 | 41.2 | 55.4 | | jais-family-2p7b | 41.0 | 29.5 | 28.5 | 36.1 | 45.7 | 32.4 | 40.8 | 44.2 | 62.5 | 62.2 | 39.2 | 27.4 | 28.2 | 43.6 | 53.6 | | jais-family-1p3b | 40.8 | 28.9 | 28.5 | 34.2 | 45.7 | 32.4 | 40.8 | 44.2 | 62.5 | 62.2 | 39.2 | 27.4 | 28.2 | 43.6 | 53.6 | | jais-family-590m | 39.7 | 31.2 | 27 | 33.1 | 41.7 | 33.8 | 38.8 | 38.2 | 60.7 | 62.2 | 37.9 | 25.5 | 27.4 | 44.7 | 53.3 | | jais-family-30b-16k-chat | 51.6 | 59.9 | 34.6 | 40.2 | 58.9 | 46.8 | 54.7 | 56.2 | 64.4 | 76.7 | 55.9 | 40.8 | 30.8 | 49.5 | 52.9 | | jais-family-30b-8k-chat | 51.4 | 61.2 | 34.2 | 40.2 | 54.3 | 47.3 | 53.6 | 60 | 63.4 | 76.8 | 54.7 | 39.5 | 30 | 50.7 | 54.3 | | jais-family-13b-chat | 50.3 | 58.2 | 33.9 | 42.9 | 53.1 | 46.8 | 51.7 | 59.3 | 65.4 | 75.2 | 51.2 | 38.4 | 29.8 | 44.8 | 53.8 | | jais-family-6p7b-chat | 48.7 | 55.7 | 32.8 | 37.7 | 49.7 | 40.5 | 50.1 | 56.2 | 62.9 | 79.4 | 52 | 38 | 30.4 | 44.7 | 52 | | jais-family-2p7b-chat | 45.6 | 50.0 | 31.5 | 35.9 | 41.1 | 37.3 | 42.1 | 48.6 | 63.7 | 74.4 | 50.9 | 35.3 | 31.2 | 44.5 | 51.3 | | jais-family-1p3b-chat | 42.7 | 42.2 | 30.1 | 33.6 | 40.6 | 34.1 | 41.2 | 43 | 63.6 | 69.3 | 44.9 | 31.6 | 28 | 45.6 | 50.4 | | jais-family-590m-chat | 37.8 | 39.1 | 28 |29.5 | 33.1 | 30.8 | 36.4 | 30.3 | 57.8 | 57.2 | 40.5 | 25.9 | 26.8 | 44.5 | 49.3 | | **Adapted Models** | Avg | ArabicMMLU*| MMLU | EXAMS*| LitQA*| agqa | agrc | Hellaswag | PIQA | BoolQA | Situated QA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs | |--------------------------|-------|------------|-------|-------|-------|------|------|------------|------|--------|-------------|-------|------------|------------|-------------| | jais-adapted-70b | 51.5 | 55.9 | 36.8 | 42.3 | 58.3 | 48.6 | 54 | 61.5 | 68.4 | 68.4 | 42.1 | 42.6 | 33 | 50.2 | 58.3 | | jais-adapted-13b | 46.6 | 44.7 | 30.6 | 37.7 | 54.3 | 43.8 | 48.3 | 54.9 | 67.1 | 64.5 | 40.6 | 36.1 | 32 | 43.6 | 54.00 | | jais-adapted-7b | 42.0 | 35.9 | 28.9 | 36.7 | 46.3 | 34.1 | 40.3 | 45 | 61.3 | 63.8 | 38.1 | 29.7 | 30.2 | 44.3 | 53.6 | | jais-adapted-70b-chat | 52.9 | 66.8 | 34.6 | 42.5 | 62.9 | 36.8 | 48.6 | 64.5 | 69.7 | 82.8 | 49.3 | 44.2 | 32.2 | 53.3 | 52.4 | | jais-adapted-13b-chat | 50.3 | 59.0 | 31.7 | 37.5 | 56.6 | 41.9 | 51.7 | 58.8 | 67.1 | 78.2 | 45.9 | 41 | 34.2 | 48.3 | 52.1 | | jais-adapted-7b-chat | 46.1 | 51.3 | 30 | 37 | 48 | 36.8 | 48.6 | 51.1 | 62.9 | 72.4 | 41.3 | 34.6 | 30.4 | 48.6 | 51.8 | </div> Arabic benchmarks are translated using an in-house MT model and reviewed by Arabic linguists. Benchmarks labeled with an asterisk (*) are natively Arabic; for further details, see the [Jais paper](https://arxiv.org/abs/2308.16149). Additionally, we include [ArabicMMLU](https://arxiv.org/abs/2402.12840), a native Arabic benchmark based on regional knowledge. ### English evaluation results: <div class="table-container"> | **Models** | Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SIQA | ARC-Challenge | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs | |--------------------------|----------|------|------|-----------|------|--------|------|---------------|------------|------------|----------------|-------------| | jais-family-30b-16k | 59.3 | 42.2 | 40.5 | 79.7 | 80.6 | 78.7 | 48.8 | 50.3 | 44.2 | 71.6 | 43.5 | 72.6 | | jais-family-30b-8k | 58.8 | 42.3 | 40.3 | 79.1 | 80.5 | 80.9 | 49.3 | 48.4 | 43.2 | 70.6 | 40.3 | 72.3 | | jais-family-13b | 54.6 | 32.3 | 39 | 72 | 77.4 | 73.9 | 47.9 | 43.2 | 40 | 67.1 | 36.1 | 71.7 | | jais-family-6p7b | 53.1 | 32 | 38 | 69.3 | 76 | 71.7 | 47.1 | 40.3 | 37.4 | 65.1 | 34.4 | 72.5 | | jais-family-2p7b | 51 | 29.4 | 38 | 62.7 | 74.1 | 67.4 | 45.6 | 35.1 | 35.6 | 62.9 | 40.1 | 70.2 | | jais-family-1p3b | 48.7 | 28.2 | 35.4 | 55.4 | 72 | 62.7 | 44.9 | 30.7 | 36.2 | 60.9 | 40.4 | 69 | | jais-family-590m | 45.2 | 27.8 | 32.9 | 46.1 | 68.1 | 60.4 | 43.2 | 25.6 | 30.8 | 55.8 | 40.9 | 65.3 | | jais-family-30b-16k-chat | 58.8 | 42 | 41.1 | 76.2 | 73.3 | 84.6 | 60.3 | 48.4 | 40.8 | 68.2 | 44.8 | 67 | | jais-family-30b-8k-chat | 60.3 | 40.6 | 47.1 | 78.9 | 72.7 | 90.6 | 60 | 50.1 | 43.2 | 70.6 | 44.9 | 64.2 | | jais-family-13b-chat | 57.5 | 36.6 | 42.6 | 75 | 75.8 | 87.6 | 54.4 | 47.9 | 42 | 65 | 40.6 | 64.5 | | jais-family-6p7b-chat | 56 | 36.6 | 41.3 | 72 | 74 | 86.9 | 55.4 | 44.6 | 40 | 62.4 | 41 | 62.2 | | jais-family-2p7b-chat | 52.8 | 32.7 | 40.4 | 62.2 | 71 | 84.1 | 54 | 37.2 | 36.8 | 61.4 | 40.9 | 59.8 | | jais-family-1p3b-chat | 49.3 | 31.9 | 37.4 | 54.5 | 70.2 | 77.8 | 49.8 | 34.4 | 35.6 | 52.7 | 37.2 | 60.8 | | jais-family-590m-chat | 42.6 | 27.9 | 33.4 | 33.1 | 63.7 | 60.1 | 45.3 | 26.7 | 25.8 | 50.5 | 44.5 | 57.7 | </div> <div class="table-container"> |**Adapted Models**| Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SIQA | ARC-Challenge | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs | |--------------------------|----------|------|------|-----------|------|--------|------|---------------|------------|------------|----------------|-------------| | jais-adapted-70b | 60.1 | 40.4 | 38.5 | 81.2 | 81.1 | 81.2 | 48.1 | 50.4 | 45 | 75.8 | 45.7 | 74 | | jais-adapted-13b | 56 | 33.8 | 39.5 | 76.5 | 78.6 | 77.8 | 44.6 | 45.9 | 44.4 | 71.4 | 34.6 | 69 | | jais-adapted-7b | 55.7 | 32.2 | 39.8 | 75.3 | 78.8 | 75.7 | 45.2 | 42.8 | 43 | 68 | 38.3 | 73.1 | | jais-adapted-70b-chat | 61.4 | 38.7 | 42.9 | 82.7 | 81.2 | 89.6 | 52.9 | 54.9 | 44.4 | 75.7 | 44 | 68.8 | | jais-adapted-13b-chat | 58.5 | 34.9 | 42.4 | 79.6 | 79.7 | 88.2 | 50.5 | 48.5 | 42.4 | 70.3 | 42.2 | 65.1 | | jais-adapted-7b-chat | 58.5 | 33.8 | 43.9 | 77.8 | 79.4 | 87.1 | 47.3 | 46.9 | 43.4 | 69.9 | 42 | 72.4 | </div> ### GPT-4 evaluation In addition to the LM-Harness evaluation, we conducted an open-ended generation evaluation using GPT-4-as-a-judge. We measured pairwise win-rates of model responses in both Arabic and English on a fixed set of 80 prompts from the Vicuna test set. English prompts were translated to Arabic by our in-house linguists. In the following, we compare the models in this release of the jais family against previously released versions: <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/jais.png" alt="Jais-adapted GPT-4"> </p> <p align="center"> <em>GPT-4-as-a-judge evaluation of Jais in Arabic and English. Jais family models are significantly better than previous Jais at generations in both languages. </em> </p> <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/jais-adapted.png" alt="Jais-adapted GPT-4"> </p> <p align="center"> <em>GPT-4-as-a-judge evaluation of adapted Jais in Arabic and English. The generation quality of Arabic is significantly enhanced, while achieving improvement in English when compared to Llama-2 instruct. </em> </p> Besides pairwise comparison, we also perform MT-bench style single-answer grading on a scale of 1 to 10. <p align="center"> <img src="https://huggingface.co/inceptionai/JaisFamilySupplmentary/resolve/main/mt_bench.png" alt="MT-bench"> </p> <p align="center"> <em>MT-bench style single-answer grading evaluation of Jais and adapted Jais in Arabic and English. Comparisons are made between select corresponding models from earlier releases. The quality ratings of responses are generally improved, with significant enhancements in Arabic.</em> </p> ## Intended use We release the Jais family of models under a full open-source license. We welcome all feedback and opportunities to collaborate. Spanning sizes from 590M to 70B parameters, this suite of bilingual models accommodates a wide range of use cases. Some potential downstream applications include: - **Research**: The Jais family serves Arabic researchers and NLP practitioners, offering both compute-efficient and advanced model sizes - Natural language understanding and generation tasks. - Mechanistic interpretability analyses on cultural alignment in bilingual pre-trained and adapted pre-trained models. - Quantitative studies of Arabic cultural and linguistic phenomena. - **Commercial Use**: Jais 30B and 70B chat models are well-suited for direct use in chat applications with appropriate prompting or for further fine-tuning on specific tasks. - Development of chat assistants for Arabic-speaking users. - Sentiment analysis to gain insights into local markets and customer trends. - Summarization of bilingual Arabic-English documents. Audiences that we hope will benefit from our model: - **Academics**: For those researching Arabic Natural Language Processing. - **Businesses**: Companies targeting Arabic-speaking audiences. - **Developers**: Those integrating Arabic language capabilities in applications. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> While the Jais family of models are powerful Arabic and English bilingual models, it's essential to understand their limitations and the potential of misuse. It is prohibited to use the model in any manner that violates applicable laws or regulations. The following are some example scenarios where the model should not be used. - **Malicious Use**: The model should not be used to generate harmful, misleading, or inappropriate content. Thisincludes but is not limited to: - Generating or promoting hate speech, violence, or discrimination. - Spreading misinformation or fake news. - Engaging in or promoting illegal activities. - **Sensitive Information**: The model should not be used to handle or generate personal, confidential, or sensitive information. - **Generalization Across All Languages**: Jais family of models are bilingual and optimized for Arabic and English. They should not be presumed to have equal proficiency in other languages or dialects. - **High-Stakes Decisions**: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The Jais family is trained on publicly available data which was in part curated by Inception. We have employed different techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all LLM models, will exhibit some bias. The fine-tuned variants are trained as an AI assistant for Arabic and English speakers. Chat models are limited to produce responses for queries in these two languages and may not produce appropriate responses to other language queries. By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model. Copyright Inception Institute of Artificial Intelligence Ltd. JAIS is made available under the Apache License, Version 2.0 (the “License”). You shall not use JAIS except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, JAIS is distributed on an AS IS basis, without warranties or conditions of any kind, either express or implied. Please see the terms of the License for the specific language permissions and limitations under the License. #### Summary We release the Jais family of Arabic and English bilingual models. The wide range of pre-trained model sizes, the recipe for adapting English-centric models to Arabic, and the fine-tuning of all sizes unlocks numerous use cases commercially and academically in the Arabic setting. Through this release, we aim to make LLMs more accessible to Arabic NLP researchers and companies, offering native Arabic models that provide better cultural understanding than English centric ones. The strategies we employ for pre-training, fine-tuning and adaptation to Arabic are extensible to other low and medium resource languages, paving the way for language-focused and accessible models that cater to local contexts. #### Citation info ```bibtex @misc{sengupta2023jais, title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models}, author={Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, William Marshall, Gurpreet Gosal, Cynthia Liu, Zhiming Chen, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Xudong Han, Sondos Mahmoud Bsharat, Alham Fikri Aji, Zhiqiang Shen, Zhengzhong Liu, Natalia Vassilieva, Joel Hestness, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Hector Xuguang Ren, Preslav Nakov, Timothy Baldwin and Eric Xing}, year={2023}, eprint={2308.16149}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{jaisfamilymodelcard, title={Jais Family Model Card}, author={Inception}, year={2024}, url = {https://huggingface.co/inceptionai/jais-family-30b-16k-chat/blob/main/README.md} } ```
{"language": ["ar", "en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["Arabic", "English", "LLM", "Decoder", "causal-lm", "jais-family"]}
task
[ "TRANSLATION", "SUMMARIZATION" ]
44,315
EmnaBou/t5-small-disfluent-fluent
EmnaBou
translation
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-12-05T13:00:22Z
2022-12-05T13:17:37+00:00
122
0
--- license: apache-2.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: t5-small-disfluent-fluent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-disfluent-fluent This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4839 - Bleu: 10.1823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
null
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-disfluent-fluent This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4839 - Bleu: 10.1823 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
{"license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "t5-small-disfluent-fluent", "results": []}]}
task
[ "TRANSLATION" ]
44,316