| id (string, 6-113 chars) | author (string, 2-36 chars) | task_category (string, 42 classes) | tags (list, 1-4.05k items) | created_time (timestamp[ns, UTC], 2022-03-02 23:29:04 to 2025-04-10 08:38:38) | last_modified (string date, 2020-05-14 13:13:12 to 2025-04-19 04:15:39) | downloads (int64, 0-118M) | likes (int64, 0-4.86k) | README (string, 30-1.01M chars) | matched_bigbio_names (list, 1-8 items, nullable) | is_bionlp (string, 3 classes) | model_cards (string, 0-1M chars) | metadata (string, 2-698k chars) | source (string, 2 classes) | matched_task (list, 1-10 items, nullable) | __index_level_0__ (int64, 0-46.9k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Alireza1044/mobilebert_cola
|
Alireza1044
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-14T08:21:45Z |
2022-06-14T09:02:15+00:00
| 124 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.5277813760438573
name: Matthews Correlation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6337
- Matthews Correlation: 0.5278
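For a quick check, the checkpoint can be loaded with the `transformers` pipeline API (a minimal sketch, assuming the default label mapping exported with the config; CoLA is a binary acceptability task):
```python
from transformers import pipeline

# Minimal usage sketch; label names follow the checkpoint's config
# (e.g. LABEL_0 / LABEL_1 unless the config maps them to readable names).
classifier = pipeline("text-classification", model="Alireza1044/mobilebert_cola")
print(classifier("The book was written by the author."))
```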
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cola
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6337
- Matthews Correlation: 0.5278
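For a quick check, the checkpoint can be loaded with the `transformers` pipeline API (a minimal sketch, assuming the default label mapping exported with the config; CoLA is a binary acceptability task):
```python
from transformers import pipeline

# Minimal usage sketch; label names follow the checkpoint's config
# (e.g. LABEL_0 / LABEL_1 unless the config maps them to readable names).
classifier = pipeline("text-classification", model="Alireza1044/mobilebert_cola")
print(classifier("The book was written by the author."))
```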
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5277813760438573, "name": "Matthews Correlation"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,132 |
steveleancommerce/iris-7b-Q4_K_M-GGUF
|
steveleancommerce
|
translation
|
[
"transformers",
"gguf",
"finetuned",
"llama-cpp",
"gguf-my-repo",
"translation",
"en",
"ko",
"base_model:davidkim205/iris-7b",
"base_model:quantized:davidkim205/iris-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-04T13:34:09Z |
2024-09-04T13:34:28+00:00
| 5 | 0 |
---
base_model: davidkim205/iris-7b
language:
- en
- ko
library_name: transformers
license: apache-2.0
pipeline_tag: translation
tags:
- finetuned
- llama-cpp
- gguf-my-repo
inference: true
widget:
- messages:
- role: user
content: 다음 문장을 한글로 번역하세요. Iris is a model for Korean-English sentence translation
based on deep learning.
---
# steveleancommerce/iris-7b-Q4_K_M-GGUF
This model was converted to GGUF format from [`davidkim205/iris-7b`](https://huggingface.co/davidkim205/iris-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/davidkim205/iris-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -c 2048
```
| null |
Non_BioNLP
|
# steveleancommerce/iris-7b-Q4_K_M-GGUF
This model was converted to GGUF format from [`davidkim205/iris-7b`](https://huggingface.co/davidkim205/iris-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/davidkim205/iris-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo steveleancommerce/iris-7b-Q4_K_M-GGUF --hf-file iris-7b-q4_k_m.gguf -c 2048
```
|
{"base_model": "davidkim205/iris-7b", "language": ["en", "ko"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "translation", "tags": ["finetuned", "llama-cpp", "gguf-my-repo"], "inference": true, "widget": [{"messages": [{"role": "user", "content": "다음 문장을 한글로 번역하세요. Iris is a model for Korean-English sentence translation based on deep learning."}]}]}
|
task
|
[
"TRANSLATION"
] | 42,133 |
thilina/mt5-sinhalese-english
|
thilina
|
translation
|
[
"transformers",
"pytorch",
"tf",
"mt5",
"text2text-generation",
"translation",
"si",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-01-03T21:14:26+00:00
| 238 | 8 |
---
language:
- si
- en
license: apache-2.0
metrics:
- sacrebleu
tags:
- translation
---
# mt5-sinhalese-english
## Model description
An mT5-base model fine-tuned on the Sinhalese-English dataset from the Tatoeba Challenge. It can be used to translate from Sinhalese to English and vice versa.
## Training details
- English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md)
- [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base)
## Eval results
SacreBLEU score:
- English to Sinhalese: 10.3
- Sinhalese to English: 24.4
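A minimal inference sketch with `transformers` (an assumption on our part: the model is fed plain source-language text, since the exact input format used during fine-tuning is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thilina/mt5-sinhalese-english")
model = AutoModelForSeq2SeqLM.from_pretrained("thilina/mt5-sinhalese-english")

# Assumes plain source text without a task prefix.
inputs = tokenizer("How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```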
| null |
Non_BioNLP
|
# mt5-sinhalese-english
## Model description
An mT5-base model fine-tuned on the Sinhalese-English dataset from the Tatoeba Challenge. It can be used to translate from Sinhalese to English and vice versa.
## Training details
- English - Sinhala dataset from the Tatoeba Challenge [Datasets](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/Data.md)
- [mT5-base pre-trained weights](https://huggingface.co/google/mt5-base)
## Eval results
SacreBLEU score:
- English to Sinhalese: 10.3
- Sinhalese to English: 24.4
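A minimal inference sketch with `transformers` (an assumption on our part: the model is fed plain source-language text, since the exact input format used during fine-tuning is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("thilina/mt5-sinhalese-english")
model = AutoModelForSeq2SeqLM.from_pretrained("thilina/mt5-sinhalese-english")

# Assumes plain source text without a task prefix.
inputs = tokenizer("How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```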
|
{"language": ["si", "en"], "license": "apache-2.0", "metrics": ["sacrebleu"], "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,134 |
cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
|
cardiffnlp
|
token-classification
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-23T20:41:42Z |
2023-06-23T20:57:35+00:00
| 19 | 0 |
---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: Jacob Collier is a Grammy awarded artist from England.
example_title: NER Example 1
model-index:
- name: cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: tner/tweetner7
type: tner/tweetner7
args: tner/tweetner7
metrics:
- type: f1
value: 0.6528115974857014
name: F1
- type: precision
value: 0.6396626345577627
name: Precision
- type: recall
value: 0.6665124884366328
name: Recall
- type: f1_macro
value: 0.6049985470954377
name: F1 (macro)
- type: precision_macro
value: 0.5897437616700211
name: Precision (macro)
- type: recall_macro
value: 0.6233545992999288
name: Recall (macro)
- type: f1_entity_span
value: 0.7878581945860234
name: F1 (entity span)
- type: precision_entity_span
value: 0.7719454000665853
name: Precision (entity span)
- type: recall_entity_span
value: 0.804440846536371
name: Recall (entity span)
---
# cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6528115974857014
- Precision (micro): 0.6396626345577627
- Recall (micro): 0.6665124884366328
- F1 (macro): 0.6049985470954377
- Precision (macro): 0.5897437616700211
- Recall (macro): 0.6233545992999288
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.5229050279329609
- event: 0.4694835680751174
- group: 0.6115595737810786
- location: 0.651814131126671
- person: 0.8390510948905111
- product: 0.6531234128999492
- work_of_art: 0.4870530209617756
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- F1 (macro):
Full evaluation can be found at [metric file of NER](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and load the model as below.
```python
from tner import TransformersNER

# Load the fine-tuned model (including its CRF layer) from the Hugging Face Hub.
model = TransformersNER("cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020")
# predict() takes a list of sentences and returns the recognized entities.
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the `transformers` library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-large-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
| null |
Non_BioNLP
|
# cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6528115974857014
- Precision (micro): 0.6396626345577627
- Recall (micro): 0.6665124884366328
- F1 (macro): 0.6049985470954377
- Precision (macro): 0.5897437616700211
- Recall (macro): 0.6233545992999288
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.5229050279329609
- event: 0.4694835680751174
- group: 0.6115595737810786
- location: 0.651814131126671
- person: 0.8390510948905111
- product: 0.6531234128999492
- work_of_art: 0.4870530209617756
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- F1 (macro):
Full evaluation can be found at [metric file of NER](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and load the model as below.
```python
from tner import TransformersNER

# Load the fine-tuned model (including its CRF layer) from the Hugging Face Hub.
model = TransformersNER("cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020")
# predict() takes a list of sentences and returns the recognized entities.
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the `transformers` library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-large-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
{"datasets": ["tner/tweetner7"], "metrics": ["f1", "precision", "recall"], "pipeline_tag": "token-classification", "widget": [{"text": "Jacob Collier is a Grammy awarded artist from England.", "example_title": "NER Example 1"}], "model-index": [{"name": "cardiffnlp/twitter-roberta-large-2022-154m-tweetner7-2020", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "tner/tweetner7", "type": "tner/tweetner7", "args": "tner/tweetner7"}, "metrics": [{"type": "f1", "value": 0.6528115974857014, "name": "F1"}, {"type": "precision", "value": 0.6396626345577627, "name": "Precision"}, {"type": "recall", "value": 0.6665124884366328, "name": "Recall"}, {"type": "f1_macro", "value": 0.6049985470954377, "name": "F1 (macro)"}, {"type": "precision_macro", "value": 0.5897437616700211, "name": "Precision (macro)"}, {"type": "recall_macro", "value": 0.6233545992999288, "name": "Recall (macro)"}, {"type": "f1_entity_span", "value": 0.7878581945860234, "name": "F1 (entity span)"}, {"type": "precision_entity_span", "value": 0.7719454000665853, "name": "Precision (entity span)"}, {"type": "recall_entity_span", "value": 0.804440846536371, "name": "Recall (entity span)"}]}]}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,135 |
muralik0115/bert-sentiment-analysis
|
muralik0115
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-27T21:23:21Z |
2024-05-28T22:18:22+00:00
| 5 | 0 |
---
base_model: google-bert/bert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.946
name: Accuracy
- type: f1
value: 0.945838163643056
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-sentiment-analysis
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1285
- Accuracy: 0.946
- F1: 0.9458
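A minimal inference sketch with the `transformers` pipeline (assumes a recent `transformers` version; label names follow the checkpoint's config):
```python
from transformers import pipeline

# top_k=None returns scores for every emotion label instead of only the best one.
classifier = pipeline("text-classification",
                      model="muralik0115/bert-sentiment-analysis", top_k=None)
print(classifier("I can't believe how wonderful this day turned out!"))
```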
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4104 | 1.0 | 667 | 0.1901 | 0.93 | 0.9306 |
| 0.1271 | 2.0 | 1334 | 0.1207 | 0.94 | 0.9391 |
| 0.0832 | 3.0 | 2001 | 0.1285 | 0.946 | 0.9458 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-sentiment-analysis
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1285
- Accuracy: 0.946
- F1: 0.9458
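A minimal inference sketch with the `transformers` pipeline (assumes a recent `transformers` version; label names follow the checkpoint's config):
```python
from transformers import pipeline

# top_k=None returns scores for every emotion label instead of only the best one.
classifier = pipeline("text-classification",
                      model="muralik0115/bert-sentiment-analysis", top_k=None)
print(classifier("I can't believe how wonderful this day turned out!"))
```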
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4104 | 1.0 | 667 | 0.1901 | 0.93 | 0.9306 |
| 0.1271 | 2.0 | 1334 | 0.1207 | 0.94 | 0.9391 |
| 0.0832 | 3.0 | 2001 | 0.1285 | 0.946 | 0.9458 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"base_model": "google-bert/bert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.946, "name": "Accuracy"}, {"type": "f1", "value": 0.945838163643056, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,136 |
slone/LaBSE-en-ru-myv-v1
|
slone
|
feature-extraction
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"pretraining",
"erzya",
"mordovian",
"fill-mask",
"embeddings",
"masked-lm",
"feature-extraction",
"sentence-similarity",
"ru",
"myv",
"dataset:slone/myv_ru_2022",
"arxiv:2209.09368",
"license:cc-by-sa-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2022-09-15T05:37:02Z |
2023-05-11T22:51:41+00:00
| 20 | 0 |
---
datasets:
- slone/myv_ru_2022
language:
- ru
- myv
license: cc-by-sa-4.0
tags:
- erzya
- mordovian
- fill-mask
- pretraining
- embeddings
- masked-lm
- feature-extraction
- sentence-similarity
---
This is an Erzya (`myv`, cyrillic script) sentence encoder from the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
It is based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) ([license here](https://tfhub.dev/google/LaBSE/2)), but with updated vocabulary and checkpoint:
- Removed all tokens except the most popular ones for English or Russian;
- Added extra tokens for Erzya language;
- Fine-tuned on the [slone/myv_ru_2022](https://huggingface.co/slone/myv_ru_2022) corpus using a mixture of tasks:
- Cross-lingual distillation of sentence embeddings from the original LaBSE model, using the parallel `ru-myv` corpus;
- Masked language modelling on `myv` monolingual data;
- Sentence pair classification to distinguish correct `ru-myv` translations from random pairs.
The model can be used as a sentence encoder or a masked language modelling predictor for Erzya, or fine-tuned for any downstream NLU task.
Sentence embeddings can be produced with the code below:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("slone/LaBSE-en-ru-myv-v1")
model = AutoModel.from_pretrained("slone/LaBSE-en-ru-myv-v1")
sentences = ["Hello World", "Привет Мир", "Шумбратадо Мастор"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
# Sentence embeddings are the L2-normalized pooler output.
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings.shape)  # torch.Size([3, 768])
```
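Because the embeddings above are L2-normalized, pairwise cosine similarities reduce to a matrix product (a small follow-up to the snippet above):
```python
# Continues the snippet above: pairwise cosine similarity of the three sentences.
similarity = embeddings @ embeddings.T
print(similarity)
```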
| null |
Non_BioNLP
|
This is an Erzya (`myv`, cyrillic script) sentence encoder from the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
It is based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) ([license here](https://tfhub.dev/google/LaBSE/2)), but with updated vocabulary and checkpoint:
- Removed all tokens except the most popular ones for English or Russian;
- Added extra tokens for Erzya language;
- Fine-tuned on the [slone/myv_ru_2022](https://huggingface.co/slone/myv_ru_2022) corpus using a mixture of tasks:
- Cross-lingual distillation of sentence embeddings from the original LaBSE model, using the parallel `ru-myv` corpus;
- Masked language modelling on `myv` monolingual data;
- Sentence pair classification to distinguish correct `ru-myv` translations from random pairs.
The model can be used as a sentence encoder or a masked language modelling predictor for Erzya, or fine-tuned for any downstream NLU task.
Sentence embeddings can be produced with the code below:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("slone/LaBSE-en-ru-myv-v1")
model = AutoModel.from_pretrained("slone/LaBSE-en-ru-myv-v1")
sentences = ["Hello World", "Привет Мир", "Шумбратадо Мастор"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
# Sentence embeddings are the L2-normalized pooler output.
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings.shape)  # torch.Size([3, 768])
```
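Because the embeddings above are L2-normalized, pairwise cosine similarities reduce to a matrix product (a small follow-up to the snippet above):
```python
# Continues the snippet above: pairwise cosine similarity of the three sentences.
similarity = embeddings @ embeddings.T
print(similarity)
```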
|
{"datasets": ["slone/myv_ru_2022"], "language": ["ru", "myv"], "license": "cc-by-sa-4.0", "tags": ["erzya", "mordovian", "fill-mask", "pretraining", "embeddings", "masked-lm", "feature-extraction", "sentence-similarity"]}
|
task
|
[
"TRANSLATION"
] | 42,137 |
Star3073/distilbert-base-uncased-finetuned-emotion
|
Star3073
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-09T05:30:15Z |
2023-03-10T04:56:19+00:00
| 0 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.926
name: Accuracy
- type: f1
value: 0.9260434989792637
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- Accuracy: 0.926
- F1: 0.9260
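A minimal inference sketch using the raw model classes rather than a pipeline (the label mapping is read from the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Star3073/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I am so happy today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
# id2label comes from the checkpoint's config.
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```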
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8052 | 1.0 | 250 | 0.2976 | 0.9095 | 0.9068 |
| 0.2392 | 2.0 | 500 | 0.2105 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- Accuracy: 0.926
- F1: 0.9260
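A minimal inference sketch using the raw model classes rather than a pipeline (the label mapping is read from the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Star3073/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I am so happy today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
# id2label comes from the checkpoint's config.
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs)})
```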
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8052 | 1.0 | 250 | 0.2976 | 0.9095 | 0.9068 |
| 0.2392 | 2.0 | 500 | 0.2105 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9260434989792637, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,138 |
Helsinki-NLP/opus-mt-fr-bg
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:36:10+00:00
| 71 | 0 |
---
language:
- fr
- bg
license: apache-2.0
tags:
- translation
---
### fra-bul
* source group: French
* target group: Bulgarian
* OPUS readme: [fra-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md)
* model: transformer
* source language(s): fra
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.bul | 46.3 | 0.657 |
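A minimal usage sketch with the standard Marian classes from `transformers`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-fr-bg"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# French in, Bulgarian out.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```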
### System Info:
- hf_name: fra-bul
- source_languages: fra
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'bg']
- src_constituents: {'fra'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: bul
- short_pair: fr-bg
- chrF2_score: 0.657
- bleu: 46.3
- brevity_penalty: 0.953
- ref_len: 3286.0
- src_name: French
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: bg
- prefer_old: False
- long_pair: fra-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### fra-bul
* source group: French
* target group: Bulgarian
* OPUS readme: [fra-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md)
* model: transformer
* source language(s): fra
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.bul | 46.3 | 0.657 |
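A minimal usage sketch with the standard Marian classes from `transformers`:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-fr-bg"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# French in, Bulgarian out.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```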
### System Info:
- hf_name: fra-bul
- source_languages: fra
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'bg']
- src_constituents: {'fra'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: bul
- short_pair: fr-bg
- chrF2_score: 0.657
- bleu: 46.3
- brevity_penalty: 0.953
- ref_len: 3286.0
- src_name: French
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: bg
- prefer_old: False
- long_pair: fra-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["fr", "bg"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,139 |
HPLT/sft-fpft-bg-bloom-560m
|
HPLT
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"bg",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-04T16:12:03Z |
2025-04-08T16:58:39+00:00
| 28 | 0 |
---
language:
- bg
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with full-parameter fine-tuning, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-560m](https://huggingface.co/bloom-560m)
* Instruction tuning language: Bulgarian
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using the `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
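A minimal loading sketch (the exact instruction template is defined in the repository above; a plain Bulgarian instruction is assumed here for illustration):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HPLT/sft-fpft-bg-bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed plain-instruction prompt; see the repository for the trained template.
prompt = "Какво е машинен превод?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```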
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
| null |
Non_BioNLP
|
### Model Description
This HF repository contains base LLMs instruction-tuned (SFT) with full-parameter fine-tuning, which are then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org/abs/2309.08958)
#### Instruction tuning details
* Base model: [bloom-560m](https://huggingface.co/bloom-560m)
* Instruction tuning language: Bulgarian
* Training method: full-parameter fine-tuning.
* Best checkpoint: best cross-entropy on a validation set, trained for 3 epochs.
* Dataset: machine-translated from [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). You can download our data [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/training-data).
#### Usage
The model checkpoint should be loaded using the `transformers` library.
Please refer to our Github repository [HERE](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main/fpft) for inference and training instructions.
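A minimal loading sketch (the exact instruction template is defined in the repository above; a plain Bulgarian instruction is assumed here for illustration):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HPLT/sft-fpft-bg-bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumed plain-instruction prompt; see the repository for the trained template.
prompt = "Какво е машинен превод?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```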
#### Citation
```
@inproceedings{chen-etal-2024-monolingual,
title="Monolingual or multilingual instruction tuning: Which makes a better {Alpaca}",
author="Pinzhen Chen and Shaoxiong Ji and Nikolay Bogoychev and Andrey Kutuzov and Barry Haddow and Kenneth Heafield",
year="2024",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2024",
}
```
|
{"language": ["bg"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,140 |
rombodawg/rombos_Replete-Coder-Llama3-8B
|
rombodawg
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"dataset:Replete-AI/code_bagel_hermes-2.5",
"dataset:Replete-AI/code_bagel",
"dataset:Replete-AI/OpenHermes-2.5-Uncensored",
"dataset:teknium/OpenHermes-2.5",
"dataset:layoric/tiny-codes-alpaca",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:ajibawa-2023/Code-290k-ShareGPT",
"dataset:TIGER-Lab/MathInstruct",
"dataset:chargoddard/commitpack-ft-instruct-rated",
"dataset:iamturun/code_instructions_120k_alpaca",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:nickrosh/Evol-Instruct-Code-80k-v1",
"dataset:coseal/CodeUltraFeedback_binarized",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:jondurbin/airoboros-2.2",
"dataset:camel-ai",
"dataset:lmsys/lmsys-chat-1m",
"dataset:CollectiveCognition/chats-data-2023-09-22",
"dataset:CoT-Alpaca-GPT4",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:teknium/GPT4-LLM-Cleaned",
"dataset:GPTeacher",
"dataset:OpenGPT",
"dataset:meta-math/MetaMathQA",
"dataset:Open-Orca/SlimOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:Unnatural-Instructions-GPT4",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-06T07:50:32Z |
2024-10-06T07:50:33+00:00
| 3,655 | 3 |
---
datasets:
- Replete-AI/code_bagel_hermes-2.5
- Replete-AI/code_bagel
- Replete-AI/OpenHermes-2.5-Uncensored
- teknium/OpenHermes-2.5
- layoric/tiny-codes-alpaca
- glaiveai/glaive-code-assistant-v3
- ajibawa-2023/Code-290k-ShareGPT
- TIGER-Lab/MathInstruct
- chargoddard/commitpack-ft-instruct-rated
- iamturun/code_instructions_120k_alpaca
- ise-uiuc/Magicoder-Evol-Instruct-110K
- cognitivecomputations/dolphin-coder
- nickrosh/Evol-Instruct-Code-80k-v1
- coseal/CodeUltraFeedback_binarized
- glaiveai/glaive-function-calling-v2
- CyberNative/Code_Vulnerability_Security_DPO
- jondurbin/airoboros-2.2
- camel-ai
- lmsys/lmsys-chat-1m
- CollectiveCognition/chats-data-2023-09-22
- CoT-Alpaca-GPT4
- WizardLM/WizardLM_evol_instruct_70k
- WizardLM/WizardLM_evol_instruct_V2_196k
- teknium/GPT4-LLM-Cleaned
- GPTeacher
- OpenGPT
- meta-math/MetaMathQA
- Open-Orca/SlimOrca
- garage-bAInd/Open-Platypus
- anon8231489123/ShareGPT_Vicuna_unfiltered
- Unnatural-Instructions-GPT4
license: other
license_name: llama-3
license_link: https://llama.meta.com/llama3/license/
tags:
- text-generation-inference
- transformers
- unsloth
- llama
model-index:
- name: Replete-Coder-llama3-8b
results:
- task:
type: text-generation
name: HumanEval
dataset:
name: HumanEval
type: openai_humaneval
metrics:
- type: pass@1
value: 0.6468383584267833
name: pass@1
verified: true
- task:
type: text-generation
name: AI2 Reasoning Challenge
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: accuracy
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: accuracy
name: normalized accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: multiple_choice_accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
name: accuracy
source:
url: https://www.placeholderurl.com
name: Open LLM Leaderboard
---
# Replete-Coder-llama3-8b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it is trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!

Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b.
You can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-llama3-8b is a general-purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: The Replete-Coder series of models is fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.

__________________________________________________________________________________________________
You can find the 25% non-coding instruction data below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: Custom Alpaca
```
### System:
{}
### Instruction:
{}
### Response:
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
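A minimal sketch of assembling this template and generating with `transformers` (the `build_prompt` helper is hypothetical, written here only to mirror the template above):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def build_prompt(system: str, instruction: str) -> str:
    # Mirrors the custom Alpaca template above, leaving the response open.
    return f"### System:\n{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

model_id = "rombodawg/rombos_Replete-Coder-Llama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = build_prompt(
    "Below is an instruction that describes a task, Write a response that appropriately completes the request.",
    "Write a Python function that reverses a string.",
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```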
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their Hugging Face accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to Unsloth for being the main training method for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1VAaxMQJN9-78WLsPU0GWg5tEkasXoTP9?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-AI Discord! We are a great and loving community!
- https://discord.gg/ZZbnsmVnjD
| null |
Non_BioNLP
|
# Replete-Coder-llama3-8b
Finetuned by: Rombodawg
### More than just a coding model!
Although Replete-Coder has amazing coding capabilities, it is trained on a vast amount of non-coding data, fully cleaned and uncensored. Don't just use it for coding, use it for all your needs! We are truly trying to make the GPT killer!

Thank you to TensorDock for sponsoring Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b.
You can check out their website for cloud compute rental below.
- https://tensordock.com
__________________________________________________________________________________________________
Replete-Coder-llama3-8b is a general-purpose model that is specially trained in coding in over 100 coding languages. The data used to train the model contains 25% non-code instruction data and 75% coding instruction data, totaling up to 3.9 million lines, roughly 1 billion tokens, or 7.27 GB of instruct data. The data used to train this model was 100% uncensored, then fully deduplicated, before training happened.
The Replete-Coder models (including Replete-Coder-llama3-8b and Replete-Coder-Qwen2-1.5b) feature the following:
- Advanced coding capabilities in over 100 coding languages
- Advanced code translation (between languages)
- Security and vulnerability prevention related coding capabilities
- General purpose use
- Uncensored use
- Function calling
- Advanced math use
- Use on low end (8b) and mobile (1.5b) platforms
Notice: The Replete-Coder series of models is fine-tuned on a context window of 8192 tokens. Performance past this context window is not guaranteed.

__________________________________________________________________________________________________
You can find the 25% non-coding instruction data below:
- https://huggingface.co/datasets/Replete-AI/OpenHermes-2.5-Uncensored
And the 75% coding specific instruction data below:
- https://huggingface.co/datasets/Replete-AI/code_bagel
These two datasets were combined to create the final dataset for training, which is linked below:
- https://huggingface.co/datasets/Replete-AI/code_bagel_hermes-2.5
__________________________________________________________________________________________________
## Prompt Template: Custom Alpaca
```
### System:
{}
### Instruction:
{}
### Response:
{}
```
Note: The system prompt varies in training data, but the most commonly used one is:
```
Below is an instruction that describes a task, Write a response that appropriately completes the request.
```
End token:
```
<|endoftext|>
```
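A minimal sketch of assembling this template and generating with `transformers` (the `build_prompt` helper is hypothetical, written here only to mirror the template above):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def build_prompt(system: str, instruction: str) -> str:
    # Mirrors the custom Alpaca template above, leaving the response open.
    return f"### System:\n{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

model_id = "rombodawg/rombos_Replete-Coder-Llama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = build_prompt(
    "Below is an instruction that describes a task, Write a response that appropriately completes the request.",
    "Write a Python function that reverses a string.",
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```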
__________________________________________________________________________________________________
Thank you to the community for your contributions to the Replete-AI/code_bagel_hermes-2.5 dataset. Without the participation of so many members making their datasets free and open source for anyone to use, this amazing AI model wouldn't be possible.
Extra special thanks to Teknium for the Open-Hermes-2.5 dataset and jondurbin for the bagel dataset and the naming idea for the code_bagel series of datasets. You can find both of their Hugging Face accounts linked below:
- https://huggingface.co/teknium
- https://huggingface.co/jondurbin
Another special thanks to Unsloth for being the main training method for Replete-Coder. Below you can find their GitHub, as well as the special Replete-AI secret sauce (Unsloth + QLoRA + GaLore) Colab notebook that was used to train this model.
- https://github.com/unslothai/unsloth
- https://colab.research.google.com/drive/1VAaxMQJN9-78WLsPU0GWg5tEkasXoTP9?usp=sharing
__________________________________________________________________________________________________
## Join the Replete-AI Discord! We are a great and loving community!
- https://discord.gg/ZZbnsmVnjD
|
{"datasets": ["Replete-AI/code_bagel_hermes-2.5", "Replete-AI/code_bagel", "Replete-AI/OpenHermes-2.5-Uncensored", "teknium/OpenHermes-2.5", "layoric/tiny-codes-alpaca", "glaiveai/glaive-code-assistant-v3", "ajibawa-2023/Code-290k-ShareGPT", "TIGER-Lab/MathInstruct", "chargoddard/commitpack-ft-instruct-rated", "iamturun/code_instructions_120k_alpaca", "ise-uiuc/Magicoder-Evol-Instruct-110K", "cognitivecomputations/dolphin-coder", "nickrosh/Evol-Instruct-Code-80k-v1", "coseal/CodeUltraFeedback_binarized", "glaiveai/glaive-function-calling-v2", "CyberNative/Code_Vulnerability_Security_DPO", "jondurbin/airoboros-2.2", "camel-ai", "lmsys/lmsys-chat-1m", "CollectiveCognition/chats-data-2023-09-22", "CoT-Alpaca-GPT4", "WizardLM/WizardLM_evol_instruct_70k", "WizardLM/WizardLM_evol_instruct_V2_196k", "teknium/GPT4-LLM-Cleaned", "GPTeacher", "OpenGPT", "meta-math/MetaMathQA", "Open-Orca/SlimOrca", "garage-bAInd/Open-Platypus", "anon8231489123/ShareGPT_Vicuna_unfiltered", "Unnatural-Instructions-GPT4"], "license": "other", "license_name": "llama-3", "license_link": "https://llama.meta.com/llama3/license/", "tags": ["text-generation-inference", "transformers", "unsloth", "llama"], "model-index": [{"name": "Replete-Coder-llama3-8b", "results": [{"task": {"type": "text-generation", "name": "HumanEval"}, "dataset": {"name": "HumanEval", "type": "openai_humaneval"}, "metrics": [{"type": "pass@1", "value": 0.6468383584267833, "name": "pass@1", "verified": true}]}, {"task": {"type": "text-generation", "name": "AI2 Reasoning Challenge"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "accuracy", "name": "normalized accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "accuracy", "name": "normalized accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "multiple_choice_accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "accuracy", "name": "accuracy"}], "source": {"url": "https://www.placeholderurl.com", "name": "Open LLM Leaderboard"}}]}]}
|
task
|
[
"TRANSLATION"
] | 42,141 |
Lediona/BioBERT-finetuned-ner
|
Lediona
|
token-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:Dinithi/BioBERT",
"base_model:finetune:Dinithi/BioBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-20T19:17:16Z |
2024-11-21T15:19:16+00:00
| 5 | 0 |
---
base_model: Dinithi/BioBERT
library_name: transformers
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: BioBERT-finetuned-ner
results: []
---
This BioBERT-based model is fine-tuned to perform Named Entity Recognition (NER) for drug names and adverse drug effects (AEs).
It classifies each input token into one of five classes; a minimal usage sketch follows the list:
- B-DRUG: beginning of a drug entity
- I-DRUG: within a drug entity
- B-EFFECT: beginning of an AE entity
- I-EFFECT: within an AE entity
- O: outside any of the above entities
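A minimal inference sketch using the Transformers `pipeline` (the aggregation strategy, example sentence, and illustrated output are assumptions, not part of the original card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lediona/BioBERT-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- token pieces into whole entities
)
print(ner("The patient developed severe nausea after taking ibuprofen."))
# e.g. [{'entity_group': 'EFFECT', 'word': 'severe nausea', ...},
#       {'entity_group': 'DRUG', 'word': 'ibuprofen', ...}]
```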
# BioBERT-finetuned-ner
This model is a fine-tuned version of [Dinithi/BioBERT](https://huggingface.co/Dinithi/BioBERT) on ade_corpus_v2.
It achieves the following results on the evaluation set:
- Loss: 0.1602
- Precision: 0.8136
- Recall: 0.8961
- F1: 0.8528
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1673 | 1.0 | 113 | 0.2197 | 0.7545 | 0.8573 | 0.8027 | 0.9334 |
| 0.174 | 2.0 | 226 | 0.1691 | 0.7820 | 0.8870 | 0.8312 | 0.9472 |
| 0.1832 | 3.0 | 339 | 0.1596 | 0.8043 | 0.8915 | 0.8457 | 0.9506 |
| 0.0327 | 4.0 | 452 | 0.1591 | 0.8068 | 0.8980 | 0.8500 | 0.9526 |
| 0.036 | 5.0 | 565 | 0.1602 | 0.8136 | 0.8961 | 0.8528 | 0.9524 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Tokenizers 0.20.3
### Dataset used
Dataset: https://huggingface.co/datasets/ade_corpus_v2
| null |
BioNLP
|
This BioBERT-based model is fine-tuned to perform Named Entity Recognition (NER) for drug names and adverse drug effects (AEs).
It classifies each input token into one of five classes:
- B-DRUG: beginning of a drug entity
- I-DRUG: within a drug entity
- B-EFFECT: beginning of an AE entity
- I-EFFECT: within an AE entity
- O: outside any of the above entities
# BioBERT-finetuned-ner
This model is a fine-tuned version of [Dinithi/BioBERT](https://huggingface.co/Dinithi/BioBERT) on ade_corpus_v2.
It achieves the following results on the evaluation set:
- Loss: 0.1602
- Precision: 0.8136
- Recall: 0.8961
- F1: 0.8528
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1673 | 1.0 | 113 | 0.2197 | 0.7545 | 0.8573 | 0.8027 | 0.9334 |
| 0.174 | 2.0 | 226 | 0.1691 | 0.7820 | 0.8870 | 0.8312 | 0.9472 |
| 0.1832 | 3.0 | 339 | 0.1596 | 0.8043 | 0.8915 | 0.8457 | 0.9506 |
| 0.0327 | 4.0 | 452 | 0.1591 | 0.8068 | 0.8980 | 0.8500 | 0.9526 |
| 0.036 | 5.0 | 565 | 0.1602 | 0.8136 | 0.8961 | 0.8528 | 0.9524 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Tokenizers 0.20.3
### Dataset used
Dataset: https://huggingface.co/datasets/ade_corpus_v2
|
{"base_model": "Dinithi/BioBERT", "library_name": "transformers", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BioBERT-finetuned-ner", "results": []}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,142 |
mserloth/v18
|
mserloth
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:v18/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-02T16:11:01Z |
2024-05-02T16:12:38+00:00
| 7 | 0 |
---
datasets:
- v18/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.6887156963348389
- f1_macro: 0.6568419235085902
- f1_micro: 0.638095238095238
- f1_weighted: 0.6344955456066568
- precision_macro: 0.7028995456179922
- precision_micro: 0.638095238095238
- precision_weighted: 0.7468468005684834
- recall_macro: 0.6885887791171937
- recall_micro: 0.638095238095238
- recall_weighted: 0.638095238095238
- accuracy: 0.638095238095238
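A minimal inference sketch (the input reuses the widget text above; the returned label names depend on the AutoTrain project configuration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mserloth/v18")
print(classifier("I love AutoTrain"))  # [{'label': ..., 'score': ...}]
```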
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.6887156963348389
- f1_macro: 0.6568419235085902
- f1_micro: 0.638095238095238
- f1_weighted: 0.6344955456066568
- precision_macro: 0.7028995456179922
- precision_micro: 0.638095238095238
- precision_weighted: 0.7468468005684834
- recall_macro: 0.6885887791171937
- recall_micro: 0.638095238095238
- recall_weighted: 0.638095238095238
- accuracy: 0.638095238095238
|
{"datasets": ["v18/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,143 |
alicekwak/setfit-attestation
|
alicekwak
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-11-09T03:26:58Z |
2023-11-09T03:27:20+00:00
| 4 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# alicekwak/setfit-attestation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("alicekwak/setfit-attestation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# alicekwak/setfit-attestation
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("alicekwak/setfit-attestation")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,144 |
lingvanex/samoan-to-english-translation
|
lingvanex
|
translation
|
[
"translation",
"ctranslate2",
"sm",
"en",
"license:mit",
"region:us"
] | 2025-02-27T14:58:37Z |
2025-02-28T13:12:49+00:00
| 31 | 0 |
---
language:
- sm
- en
license: mit
tags:
- translation
- ctranslate2
---
# Samoan to English Translation
This repository provides pre-trained multilingual translation models designed for fast and accurate translations between various languages, such as Kurdish, Samoan, Xhosa, Lao, Corsican, Cebuano, Galician, Yiddish, Swahili, and Yoruba. These models can be used to translate texts from these languages into English and vice versa, making them suitable for machine translation tasks, language localization projects, and building custom translation tools.
# Key Features:
- Samoan to English Translation
- Support for multiple languages (see full list below)
- Pre-trained and optimized for accuracy
- Easy integration into existing translation workflows
# Other Languages:
- Kurdish
- Samoan
- Xhosa
- Lao
- Corsican
- Cebuano
- Galician
- Yiddish
- Swahili
- Yoruba
# Use Cases:
- Machine translation of texts from underrepresented languages
- Localization of websites, apps, or documents into multiple languages
- Developing multilingual NLP tools for research and production environments
# Requirements:
To run the models, you need to install `ctranslate2` and `sentencepiece`:

```bash
pip install ctranslate2 sentencepiece
```
# Simple Usage Example
The following code demonstrates how to load and use a model for translation from Samoan to English (sm → en).
```python
import sentencepiece as spm
from ctranslate2 import Translator
path_to_model = 'path/to/model'  # replace with the local path to the downloaded model
source = 'sm'
target = 'en'
translator = Translator(path_to_model, compute_type='int8')
source_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{source}.spm.model')
target_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{target}.spm.model')
# Note: inputs should be in the source language (Samoan); the sample strings below are English placeholders.
text = [
'I need to make a phone call.',
'Can I help you prepare food?',
'We want to go for a walk.'
]
input_tokens = source_tokenizer.EncodeAsPieces(text)
translator_output = translator.translate_batch(
input_tokens,
batch_type='tokens',
beam_size=2,
max_input_length=0,
max_decoding_length=256
)
output_tokens = [item.hypotheses[0] for item in translator_output]
translation = target_tokenizer.DecodePieces(output_tokens)
print('\n'.join(translation))
```
# Keywords:
Kurdish to English Translation, Samoan to English Translation, Xhosa Translation, Lao to English, Corsican Translation, Cebuano Translation, Galician to English Translation, Yiddish to English Translation, Swahili Translation, Yoruba to English Translation, Multilingual Machine Translation, NLP, Neural Networks, eLearning
# Contact:
If you have any questions, just email [email protected]
| null |
Non_BioNLP
|
# Samoan to English Translation
This repository provides pre-trained multilingual translation models designed for fast and accurate translations between various languages, such as Kurdish, Samoan, Xhosa, Lao, Corsican, Cebuano, Galician, Yiddish, Swahili, and Yoruba. These models can be used to translate texts from these languages into English and vice versa, making them suitable for machine translation tasks, language localization projects, and building custom translation tools.
# Key Features:
- Samoan to English Translation
- Support for multiple languages (see full list below)
- Pre-trained and optimized for accuracy
- Easy integration into existing translation workflows
# Other Languages:
- Kurdish
- Samoan
- Xhosa
- Lao
- Corsican
- Cebuano
- Galician
- Yiddish
- Swahili
- Yoruba
# Use Cases:
- Machine translation of texts from underrepresented languages
- Localization of websites, apps, or documents into multiple languages
- Developing multilingual NLP tools for research and production environments
# Requirements:
To run the models, you need to install `ctranslate2` and `sentencepiece`:

```bash
pip install ctranslate2 sentencepiece
```
# Simple Usage Example
The following code demonstrates how to load and use a model for translation from Samoan to English (sm → en).
```python
import sentencepiece as spm
from ctranslate2 import Translator
path_to_model = 'path/to/model'  # replace with the local path to the downloaded model
source = 'sm'
target = 'en'
translator = Translator(path_to_model, compute_type='int8')
source_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{source}.spm.model')
target_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{target}.spm.model')
# Note: inputs should be in the source language (Samoan); the sample strings below are English placeholders.
text = [
'I need to make a phone call.',
'Can I help you prepare food?',
'We want to go for a walk.'
]
input_tokens = source_tokenizer.EncodeAsPieces(text)
translator_output = translator.translate_batch(
input_tokens,
batch_type='tokens',
beam_size=2,
max_input_length=0,
max_decoding_length=256
)
output_tokens = [item.hypotheses[0] for item in translator_output]
translation = target_tokenizer.DecodePieces(output_tokens)
print('\n'.join(translation))
```
# Keywords:
Kurdish to English Translation, Samoan to English Translation, Xhosa Translation, Lao to English, Corsican Translation, Cebuano Translation, Galician to English Translation, Yiddish to English Translation, Swahili Translation, Yoruba to English Translation, Multilingual Machine Translation, NLP, Neural Networks, eLearning
# Contact:
If you have any questions, just email [email protected]
|
{"language": ["sm", "en"], "license": "mit", "tags": ["translation", "ctranslate2"]}
|
task
|
[
"TRANSLATION"
] | 42,145 |
zaib32/autotrain-finetune_17-0-3516595138
|
zaib32
|
summarization
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:zaib32/autotrain-data-finetune_17-0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-15T23:34:01Z |
2023-02-15T23:46:54+00:00
| 18 | 0 |
---
datasets:
- zaib32/autotrain-data-finetune_17-0
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.13450186573008246
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 3516595138
- CO2 Emissions (in grams): 0.1345
## Validation Metrics
- Loss: 1.229
- Rouge1: 52.561
- Rouge2: 25.355
- RougeL: 37.474
- RougeLsum: 48.677
- Gen Len: 186.719
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zaib32/autotrain-finetune_17-0-3516595138
```
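Equivalently, a minimal Python sketch (it simply mirrors the cURL call above; the API token is a placeholder):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/zaib32/autotrain-finetune_17-0-3516595138"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```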
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 3516595138
- CO2 Emissions (in grams): 0.1345
## Validation Metrics
- Loss: 1.229
- Rouge1: 52.561
- Rouge2: 25.355
- RougeL: 37.474
- RougeLsum: 48.677
- Gen Len: 186.719
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zaib32/autotrain-finetune_17-0-3516595138
```
|
{"datasets": ["zaib32/autotrain-data-finetune_17-0"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.13450186573008246}}
|
task
|
[
"SUMMARIZATION"
] | 42,146 |
Hvare/Athena-indobert-finetuned-indonli-SentenceTransformer
|
Hvare
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10330",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-12T12:40:50Z |
2024-06-12T12:41:14+00:00
| 22 | 0 |
---
base_model: indobenchmark/indobert-base-p2
datasets: []
language: []
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10330
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Pura Ulun Danu terletak sekitar 56 kilometer dari Kota Denpasar.
sentences:
- Dalam tujuh bulan kehamilan, organ tubuh bayi sudah sempurna.
- Dokter Adeline menjelaskan aturan-aturan agar diabetisi aman berpuasa.
- Pura Ulun Danu terletak sekitar satu jam perjalanan dari Kota Denpasar.
- source_sentence: Di luar ujung barat laut, taiga dominan, mencakup bagian besar
dari seluruh Siberia.
sentences:
- Banyak keraguan mengenai tanggal kelahiran Gaudapa.
- Sebagian besar Siberia terletak di ujung barat laut,.
- Maia menyaksikan balapan tanpa alasan.
- source_sentence: Widodo Cahyono Putro adalah seorang pelatih dan pemain sepak bola
legendaris Indonesia.
sentences:
- Ia berjanji untuk jatuh di lubang yang sama.
- Pemain sepak bola legendaris pasti menjadi pelatih sepak bola.
- Nazaruddin menegaskan bahwa mantan Wakil Ketua Komisi II DPR itu menerima uang
dari proyek e-KTP sebesar $500 ribu.
- source_sentence: Salah satunya seorang lelaki yang sedang memakan permen karet yang
dengan paksa dikeluarkan dari mulutnya.
sentences:
- Charles Leclerc gagal menjadi juara dunia F2.
- Pendukung pembrontakan Cina sudah tidak ada.
- Lelaki itu bukan salah satunya.
- source_sentence: Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong,
Kecamatan Panumbangan, Kabupaten Ciamis.
sentences:
- Peristiwa Pemberontakan Besar di Minahasa memiliki dampak besar pada tentara Sekutu.
- Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.
- Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian
dilarungkan ke Pantai Laut Selatan.
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: -0.05296221890135024
name: Pearson Cosine
- type: spearman_cosine
value: -0.06107163627723088
name: Spearman Cosine
- type: pearson_manhattan
value: -0.06399377304712585
name: Pearson Manhattan
- type: spearman_manhattan
value: -0.06835801919486152
name: Spearman Manhattan
- type: pearson_euclidean
value: -0.0642574675392147
name: Pearson Euclidean
- type: spearman_euclidean
value: -0.06906447787846218
name: Spearman Euclidean
- type: pearson_dot
value: -0.024528943319169508
name: Pearson Dot
- type: spearman_dot
value: -0.024236369255517205
name: Spearman Dot
- type: pearson_max
value: -0.024528943319169508
name: Pearson Max
- type: spearman_max
value: -0.024236369255517205
name: Spearman Max
---
# SentenceTransformer based on indobenchmark/indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) <!-- at revision 94b4e0a82081fa57f227fcc2024d1ea89b57ac1f -->
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Hvare/Athena-indobert-finetuned-indonli")
# Run inference
sentences = [
'Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong, Kecamatan Panumbangan, Kabupaten Ciamis.',
'Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian dilarungkan ke Pantai Laut Selatan.',
'Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:------------|
| pearson_cosine | -0.053 |
| spearman_cosine | -0.0611 |
| pearson_manhattan | -0.064 |
| spearman_manhattan | -0.0684 |
| pearson_euclidean | -0.0643 |
| spearman_euclidean | -0.0691 |
| pearson_dot | -0.0245 |
| spearman_dot | -0.0242 |
| pearson_max | -0.0245 |
| **spearman_max** | **-0.0242** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,330 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 11 tokens</li><li>mean: 29.47 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.25 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>0: ~35.90%</li><li>1: ~32.00%</li><li>2: ~32.10%</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>"" "Akan ada protes dan hal-hal lain, semua nya sudah direncanakan," "ungkap oposisi kepada El Mundo."</code> | <code>Protes dan hal-hal lain sudah direncanakan.</code> | <code>0</code> |
| <code>Tak jarang, bangun kesiangan pun jadi alasan untuk tak berolahraga.</code> | <code>Salah satu alasan tidak berolahraga adalah bangun kesiangan.</code> | <code>0</code> |
| <code>Namun, saingannya Prabowo Subianto juga mendeklarasikan kemenangan, membuat orang Indonesia bingung.</code> | <code>Prabowo menerima bahwa Dia kalah.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-dev_spearman_max |
|:------:|:----:|:-------------:|:--------------------:|
| 0.0991 | 64 | - | -0.0411 |
| 0.1981 | 128 | - | -0.0426 |
| 0.2972 | 192 | - | -0.0419 |
| 0.3963 | 256 | - | -0.0425 |
| 0.4954 | 320 | - | -0.0384 |
| 0.5944 | 384 | - | -0.0260 |
| 0.6935 | 448 | - | -0.0216 |
| 0.7740 | 500 | 0.0531 | - |
| 0.7926 | 512 | - | -0.0243 |
| 0.8916 | 576 | - | -0.0241 |
| 0.9907 | 640 | - | -0.0242 |
| 1.0 | 646 | - | -0.0242 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on indobenchmark/indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) <!-- at revision 94b4e0a82081fa57f227fcc2024d1ea89b57ac1f -->
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Hvare/Athena-indobert-finetuned-indonli")
# Run inference
sentences = [
'Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong, Kecamatan Panumbangan, Kabupaten Ciamis.',
'Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian dilarungkan ke Pantai Laut Selatan.',
'Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:------------|
| pearson_cosine | -0.053 |
| spearman_cosine | -0.0611 |
| pearson_manhattan | -0.064 |
| spearman_manhattan | -0.0684 |
| pearson_euclidean | -0.0643 |
| spearman_euclidean | -0.0691 |
| pearson_dot | -0.0245 |
| spearman_dot | -0.0242 |
| pearson_max | -0.0245 |
| **spearman_max** | **-0.0242** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,330 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 11 tokens</li><li>mean: 29.47 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.25 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>0: ~35.90%</li><li>1: ~32.00%</li><li>2: ~32.10%</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|:---------------|
| <code>"" "Akan ada protes dan hal-hal lain, semua nya sudah direncanakan," "ungkap oposisi kepada El Mundo."</code> | <code>Protes dan hal-hal lain sudah direncanakan.</code> | <code>0</code> |
| <code>Tak jarang, bangun kesiangan pun jadi alasan untuk tak berolahraga.</code> | <code>Salah satu alasan tidak berolahraga adalah bangun kesiangan.</code> | <code>0</code> |
| <code>Namun, saingannya Prabowo Subianto juga mendeklarasikan kemenangan, membuat orang Indonesia bingung.</code> | <code>Prabowo menerima bahwa Dia kalah.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-dev_spearman_max |
|:------:|:----:|:-------------:|:--------------------:|
| 0.0991 | 64 | - | -0.0411 |
| 0.1981 | 128 | - | -0.0426 |
| 0.2972 | 192 | - | -0.0419 |
| 0.3963 | 256 | - | -0.0425 |
| 0.4954 | 320 | - | -0.0384 |
| 0.5944 | 384 | - | -0.0260 |
| 0.6935 | 448 | - | -0.0216 |
| 0.7740 | 500 | 0.0531 | - |
| 0.7926 | 512 | - | -0.0243 |
| 0.8916 | 576 | - | -0.0241 |
| 0.9907 | 640 | - | -0.0242 |
| 1.0 | 646 | - | -0.0242 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "indobenchmark/indobert-base-p2", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:10330", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "Pura Ulun Danu terletak sekitar 56 kilometer dari Kota Denpasar.", "sentences": ["Dalam tujuh bulan kehamilan, organ tubuh bayi sudah sempurna.", "Dokter Adeline menjelaskan aturan-aturan agar diabetisi aman berpuasa.", "Pura Ulun Danu terletak sekitar satu jam perjalanan dari Kota Denpasar."]}, {"source_sentence": "Di luar ujung barat laut, taiga dominan, mencakup bagian besar dari seluruh Siberia.", "sentences": ["Banyak keraguan mengenai tanggal kelahiran Gaudapa.", "Sebagian besar Siberia terletak di ujung barat laut,.", "Maia menyaksikan balapan tanpa alasan."]}, {"source_sentence": "Widodo Cahyono Putro adalah seorang pelatih dan pemain sepak bola legendaris Indonesia.", "sentences": ["Ia berjanji untuk jatuh di lubang yang sama.", "Pemain sepak bola legendaris pasti menjadi pelatih sepak bola.", "Nazaruddin menegaskan bahwa mantan Wakil Ketua Komisi II DPR itu menerima uang dari proyek e-KTP sebesar $500 ribu."]}, {"source_sentence": "Salah satunya seorang lelaki yang sedang memakan permen karet yang dengan paksa dikeluarkan dari mulutnya.", "sentences": ["Charles Leclerc gagal menjadi juara dunia F2.", "Pendukung pembrontakan Cina sudah tidak ada.", "Lelaki itu bukan salah satunya."]}, {"source_sentence": "Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong, Kecamatan Panumbangan, Kabupaten Ciamis.", "sentences": ["Peristiwa Pemberontakan Besar di Minahasa memiliki dampak besar pada tentara Sekutu.", "Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.", "Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian dilarungkan ke Pantai Laut Selatan."]}], "model-index": [{"name": "SentenceTransformer based on indobenchmark/indobert-base-p2", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": -0.05296221890135024, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": -0.06107163627723088, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": -0.06399377304712585, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": -0.06835801919486152, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": -0.0642574675392147, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": -0.06906447787846218, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": -0.024528943319169508, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": -0.024236369255517205, "name": "Spearman Dot"}, {"type": "pearson_max", "value": -0.024528943319169508, "name": "Pearson Max"}, {"type": "spearman_max", "value": -0.024236369255517205, "name": "Spearman Max"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | 42,147 |
PrimeQA/open-nq-colbert-xlmr-large
|
PrimeQA
| null |
[
"transformers",
"pytorch",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-11-17T02:47:00Z |
2023-08-31T15:57:31+00:00
| 13 | 5 |
---
license: apache-2.0
---
# Model Description
This is a retriever model based on ColBERT v2 with the [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) language model.<br>
This model was trained with the OpenNQ data.<br>
The architecture of the model and its hyperparameters are described in the paper ‘Relevance-guided Supervision for OpenQA with ColBERT’.
## Intended uses & limitations
This model uses the xlm-roberta-large LM. Biases associated with the pre-trained language model we used may be present in this ColBERT v2 model.
## Usage
This model can be used with [PrimeQA](https://github.com/primeqa/primeqa)’s [ColBERT](https://github.com/primeqa/primeqa/blob/main/primeqa/ir/README.md) engine.
## BibTeX entry and citation info
```bibtex
@article{Khattab2021RelevanceguidedSF,
title={Relevance-guided Supervision for OpenQA with ColBERT},
author={O. Khattab and Christopher Potts and Matei A. Zaharia},
journal={Transactions of the Association for Computational Linguistics},
year={2021},
}
```
```bibtex
@article{Lee2019LatentRF,
title={Latent Retrieval for Weakly Supervised Open Domain Question Answering},
author={Kenton Lee and Ming-Wei Chang and Kristina Toutanova},
journal={ACL},
year={2019}
}
```
| null |
Non_BioNLP
|
# Model Description
This is a retriever model based on ColBERT v2 with the [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) language model.<br>
This model was trained with the OpenNQ data.<br>
The architecture of the model and its hyperparameters are described in the paper ‘Relevance-guided Supervision for OpenQA with ColBERT’.
## Intended uses & limitations
This model uses the xlm-roberta-large LM. Biases associated with the pre-trained language model we used may be present in this ColBERT v2 model.
## Usage
This model can be used with [PrimeQA](https://github.com/primeqa/primeqa)’s [ColBERT](https://github.com/primeqa/primeqa/blob/main/primeqa/ir/README.md) engine.
## BibTeX entry and citation info
```bibtex
@article{Khattab2021RelevanceguidedSF,
title={Relevance-guided Supervision for OpenQA with ColBERT},
author={O. Khattab and Christopher Potts and Matei A. Zaharia},
journal={Transactions of the Association for Computational Linguistics},
year={2021},
}
```
```bibtex
@article{Lee2019LatentRF,
title={Latent Retrieval for Weakly Supervised Open Domain Question Answering},
author={Kenton Lee and Ming-Wei Chang and Kristina Toutanova},
journal={ACL},
year={2019}
}
```
|
{"license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,148 |
yujiepan/bert-base-uncased-imdb
|
yujiepan
|
text-classification
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:textattack/bert-base-uncased-imdb",
"base_model:finetune:textattack/bert-base-uncased-imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-15T07:46:09Z |
2025-01-21T15:57:11+00:00
| 23 | 0 |
---
base_model: textattack/bert-base-uncased-imdb
datasets:
- imdb
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: baseline
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.92088
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline
This model is a fine-tuned version of [textattack/bert-base-uncased-imdb](https://huggingface.co/textattack/bert-base-uncased-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5238
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training script
```bash
python run_glue.py \
--model_name_or_path textattack/bert-base-uncased-imdb \
--dataset_name imdb \
--do_train \
--do_eval \
--max_seq_length 384 \
--pad_to_max_length False \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--fp16 \
--learning_rate 5e-5 \
--optim adamw_torch \
--num_train_epochs 3 \
--overwrite_output_dir \
--output_dir /tmp/bert-base-uncased-imdb
```
Note: `run_glue.py` is modified so that IMDb's "test" split is used as the evaluation dataset.
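A standalone sketch of what that change amounts to (variable names are illustrative; the stock script layout may differ across Transformers versions):
```python
from datasets import load_dataset

# IMDb ships train/test/unsupervised splits, so evaluation must point at
# "test" rather than the "validation" split run_glue.py expects by default.
raw_datasets = load_dataset("imdb")
eval_dataset = raw_datasets["test"]
print(eval_dataset.num_rows)  # 25000
```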
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline
This model is a fine-tuned version of [textattack/bert-base-uncased-imdb](https://huggingface.co/textattack/bert-base-uncased-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5238
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training script
```bash
python run_glue.py \
--model_name_or_path textattack/bert-base-uncased-imdb \
--dataset_name imdb \
--do_train \
--do_eval \
--max_seq_length 384 \
--pad_to_max_length False \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--fp16 \
--learning_rate 5e-5 \
--optim adamw_torch \
--num_train_epochs 3 \
--overwrite_output_dir \
--output_dir /tmp/bert-base-uncased-imdb
```
Note: `run_glue.py` is modified so that IMDb's "test" split is used as the evaluation dataset.
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"base_model": "textattack/bert-base-uncased-imdb", "datasets": ["imdb"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "baseline", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.92088, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,149 |
TransferGraph/Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/lecun_feather_berts-8",
"base_model:adapter:Jeevesh8/lecun_feather_berts-8",
"model-index",
"region:us"
] | 2024-02-29T12:51:17Z |
2024-02-29T12:51:19+00:00
| 0 | 0 |
---
base_model: Jeevesh8/lecun_feather_berts-8
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.6336898395721925
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-8](https://huggingface.co/Jeevesh8/lecun_feather_berts-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6337
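A minimal inference sketch, assuming the adapter loads through PEFT's auto classes (the example tweet and the choice of the base model's tokenizer are assumptions):
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo = "TransferGraph/Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion"
tokenizer = AutoTokenizer.from_pretrained("Jeevesh8/lecun_feather_berts-8")  # base-model tokenizer
model = AutoPeftModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I can't stop smiling today!", return_tensors="pt")
pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # index into the tweet_eval "emotion" labels (anger, joy, optimism, sadness)
```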
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.5856 | 1.1833 | 0 |
| 0.6390 | 1.0057 | 1 |
| 0.6364 | 0.9549 | 2 |
| 0.6337 | 0.9384 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [Jeevesh8/lecun_feather_berts-8](https://huggingface.co/Jeevesh8/lecun_feather_berts-8) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2380 | None | 0 |
| 0.5856 | 1.1833 | 0 |
| 0.6390 | 1.0057 | 1 |
| 0.6364 | 0.9549 | 2 |
| 0.6337 | 0.9384 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "Jeevesh8/lecun_feather_berts-8", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "Jeevesh8_lecun_feather_berts-8-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "emotion", "split": "validation", "args": "emotion"}, "metrics": [{"type": "accuracy", "value": 0.6336898395721925, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,150 |
Marialab/finetuned-whisper-medium-1000-step
|
Marialab
|
automatic-speech-recognition
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"dataset:darija-c",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-15T16:01:48Z |
2024-12-15T16:04:11+00:00
| 5 | 0 |
---
base_model: openai/whisper-medium
datasets:
- darija-c
language:
- ar
library_name: transformers
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: Finetuned Whisper medium for darija speech translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned Whisper medium for darija speech translation
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Darija-C dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Bleu: 0.7440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3191 | 12.5 | 50 | 0.3022 | 0.4084 |
| 0.0359 | 25.0 | 100 | 0.0001 | 0.7440 |
| 0.0001 | 37.5 | 150 | 0.0001 | 0.7440 |
| 0.0001 | 50.0 | 200 | 0.0000 | 0.7440 |
| 0.0 | 62.5 | 250 | 0.0000 | 0.7440 |
| 0.0 | 75.0 | 300 | 0.0000 | 0.7440 |
| 0.0 | 87.5 | 350 | 0.0000 | 0.7440 |
| 0.0 | 100.0 | 400 | 0.0000 | 0.7440 |
| 0.0 | 112.5 | 450 | 0.0000 | 0.7440 |
| 0.0 | 125.0 | 500 | 0.0000 | 0.7440 |
| 0.0 | 137.5 | 550 | 0.0000 | 0.7440 |
| 0.0 | 150.0 | 600 | 0.0000 | 0.7440 |
| 0.0 | 162.5 | 650 | 0.0000 | 0.7440 |
| 0.0 | 175.0 | 700 | 0.0000 | 0.7440 |
| 0.0 | 187.5 | 750 | 0.0000 | 0.7440 |
| 0.0 | 200.0 | 800 | 0.0000 | 0.7440 |
| 0.0 | 212.5 | 850 | 0.0000 | 0.7440 |
| 0.0 | 225.0 | 900 | 0.0000 | 0.7440 |
| 0.0 | 237.5 | 950 | 0.0000 | 0.7440 |
| 0.0 | 250.0 | 1000 | 0.0000 | 0.7440 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 2.19.2
- Tokenizers 0.21.0
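## Inference example (illustrative)

A minimal sketch, not part of the auto-generated card: it assumes the checkpoint can be used through the standard `transformers` speech pipeline with Whisper's `translate` task, matching the speech-translation setup described above.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint in a speech pipeline.
translator = pipeline(
    "automatic-speech-recognition",
    model="Marialab/finetuned-whisper-medium-1000-step",
)

# "darija_sample.wav" is a placeholder path to a Darija audio clip.
result = translator("darija_sample.wav", generate_kwargs={"task": "translate"})
print(result["text"])
```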
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned Whisper medium for darija speech translation
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Darija-C dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Bleu: 0.7440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3191 | 12.5 | 50 | 0.3022 | 0.4084 |
| 0.0359 | 25.0 | 100 | 0.0001 | 0.7440 |
| 0.0001 | 37.5 | 150 | 0.0001 | 0.7440 |
| 0.0001 | 50.0 | 200 | 0.0000 | 0.7440 |
| 0.0 | 62.5 | 250 | 0.0000 | 0.7440 |
| 0.0 | 75.0 | 300 | 0.0000 | 0.7440 |
| 0.0 | 87.5 | 350 | 0.0000 | 0.7440 |
| 0.0 | 100.0 | 400 | 0.0000 | 0.7440 |
| 0.0 | 112.5 | 450 | 0.0000 | 0.7440 |
| 0.0 | 125.0 | 500 | 0.0000 | 0.7440 |
| 0.0 | 137.5 | 550 | 0.0000 | 0.7440 |
| 0.0 | 150.0 | 600 | 0.0000 | 0.7440 |
| 0.0 | 162.5 | 650 | 0.0000 | 0.7440 |
| 0.0 | 175.0 | 700 | 0.0000 | 0.7440 |
| 0.0 | 187.5 | 750 | 0.0000 | 0.7440 |
| 0.0 | 200.0 | 800 | 0.0000 | 0.7440 |
| 0.0 | 212.5 | 850 | 0.0000 | 0.7440 |
| 0.0 | 225.0 | 900 | 0.0000 | 0.7440 |
| 0.0 | 237.5 | 950 | 0.0000 | 0.7440 |
| 0.0 | 250.0 | 1000 | 0.0000 | 0.7440 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 2.19.2
- Tokenizers 0.21.0
|
{"base_model": "openai/whisper-medium", "datasets": ["darija-c"], "language": ["ar"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Finetuned Whisper medium for darija speech translation", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,152 |
RichardErkhov/Qwen_-_Qwen2-1.5B-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-06-22T18:01:39Z |
2024-06-22T18:23:58+00:00
| 51 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-1.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2-1.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2-1.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2-1.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB |
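As an illustrative way to run one of these files locally (not part of the original quantization notes): download a single quant with `huggingface_hub` and load it with `llama-cpp-python`. The filename below is taken from the table above; the prompt and settings are arbitrary.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one quantized file from this repo (Q4_K_M is a common size/quality trade-off).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Qwen_-_Qwen2-1.5B-gguf",
    filename="Qwen2-1.5B.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion with the base model.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The Qwen2 series of language models was released in", max_tokens=32)
print(out["choices"][0]["text"])
```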
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
---
# Qwen2-1.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 1.5B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-1.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q2_K.gguf) | Q2_K | 0.63GB |
| [Qwen2-1.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Qwen2-1.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [Qwen2-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [Qwen2-1.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Qwen2-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K.gguf) | Q3_K | 0.77GB |
| [Qwen2-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [Qwen2-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [Qwen2-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [Qwen2-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB |
| [Qwen2-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [Qwen2-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [Qwen2-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K.gguf) | Q4_K | 0.92GB |
| [Qwen2-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [Qwen2-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB |
| [Qwen2-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB |
| [Qwen2-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [Qwen2-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K.gguf) | Q5_K | 1.05GB |
| [Qwen2-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [Qwen2-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB |
| [Qwen2-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q6_K.gguf) | Q6_K | 1.19GB |
| [Qwen2-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
---
# Qwen2-1.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 1.5B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
{}
|
task
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] | 42,153 |
atlasia/Al-Atlas-0.5B
|
atlasia
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"ar",
"ary",
"dataset:atlasia/Atlaset",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-05T22:28:32Z |
2025-03-06T16:02:09+00:00
| 237 | 5 |
---
base_model:
- Qwen/Qwen2.5-0.5B
datasets:
- atlasia/Atlaset
language:
- ar
- ary
library_name: transformers
pipeline_tag: text-generation
---
# Al-Atlas: Large Language Model for Moroccan Arabic
## Overview
Al-Atlas is a 0.5B parameter language model specifically trained on Moroccan Darija, making it the first dedicated foundation model for Morocco's primary spoken dialect. The model was finetuned from [Qwen-2.5](https://huggingface.co/Qwen/Qwen2.5-0.5B) and trained on a carefully curated dataset of 155M tokens, focusing exclusively on authentic Moroccan Darija content.
## Key Features
- **Dedicated Darija Model**: First language model trained specifically for Moroccan Arabic dialect
- **High-Quality Data**: Curated dataset of 155M tokens from authentic Moroccan sources
- **Cultural Understanding**: Captures nuanced cultural context and local expressions
## Dataset
Our training corpus consists of 155M tokens of pure Moroccan Darija content, sourced from:
- Social media conversations
- Transcribed spoken content
- Online forums and discussions
- Local news and media
- User-generated content
Each source was carefully vetted to ensure authentic representation of the dialect and remove any Modern Standard Arabic (MSA) or other Arabic dialect contamination.
## Model Details
- **Architecture**: Transformer-based language model
- **Parameters**: 0.5B
- **Context Window**: 2048 tokens
- **Training Data**: 155M tokens of pure Darija content
## Use Cases
- Chatbots for Moroccan users
- Content generation in Darija
- Text classification for Moroccan content
- Sentiment analysis for local markets
- Customer service automation
- Educational tools for Darija speakers
## Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("atlasia/Al-Atlas-0.5B")
tokenizer = AutoTokenizer.from_pretrained("atlasia/Al-Atlas-0.5B")
# Example usage
text = "الذكاء الاصطناعي هو فرع من علوم الكمبيوتر اللي كيركز"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
    repetition_penalty=1.5,
    num_beams=8,
    top_p=0.9,
    top_k=150,
    do_sample=True,
    early_stopping=True,
)
response = tokenizer.decode(outputs[0])
# response (example output, kept below as a string literal so the block stays valid Python):
example_response = """
الذكاء الاصطناعي هو فرع من علوم الكمبيوتر اللي كيركز گاع على تطوير الآلات اللي قادرة تدير مهام اللي عادة خاصها ذكاء بشري، بحال التعرف على الأنماط، حل المشاكل، اتخاذ القرارات، وفهم اللغة الطبيعية. الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا.
واحد من أهم التطبيقات ديال الذكاء الاصطناعي هو فالصحة. الذكاء الاصطناعي يقدر يعاون الطبة باش يشخصو الأمراض بدقة أكبر، يأوتوماتيزيو المهام الروتينية، ويحسنو نتائج المرضى. مثلا، الخوارزميات ديال الذكاء الاصطناعي تقدر تحلل الصور الطبية باش تكتاشف العلامات الحيوية اللي ممكن ما تكونش واضحة للفحص البشري. زيادة على هادشي، الذكاء الاصطناعي يقدر يعاون الأطباء باش يصاوبو خطط علاج مخصصة حسب الاحتياجات الخاصة ديال كل مريض.
بالإضافة للصحة، الذكاء الاصطناعي عندو إمكانية باش يغير الطريقة اللي كنتفاعلو بيها مع التكنولوجيا. مثلا، الذكاء الاصطناعي يقدر يعاون الشركات باش يحسنو العمليات ديالهم، يأوتوماتيزيو المهام الروتينية، ويحسنو تجربة الزبون. زيادة على هادشي، الذكاء الاصطناعي يقدر يعاون الفلاحة باش يزيدو الإنتاجية، ينقصو التكاليف، ويحسنو جودة المحاصيل.
فالختام، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. من خلال تطوير أنظمة ذكاء اصطناعي متقدمة، نقدرو نحسنو الكفاءة، نحسنو جودة الحياة، ونخلقو عالم أحسن للأجيال الجاية. مع استمرار تطور الذكاء الاصطناعي، من المرجح أنه غادي يلعب دور أكثر أهمية فتشكيل مستقبل البشرية. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا
"""
```
## Future Work
- Scaling to larger model sizes
- Creating evaluation benchmarks for Darija
- Expanding the training dataset
- Developing task-specific fine-tuned versions
- SFT training
```bibtex
@misc{atlasia2025al-atlas-0.5B,
title={Al-Atlas: A Causal Language Model for Moroccan Darija},
author={Abdelaziz Bounhar},
year={2025},
howpublished={\url{https://huggingface.co/atlasia/Al-Atlas-0.5B/}},
organization={AtlasIA}
}
```
| null |
Non_BioNLP
|
# Al-Atlas: Large Language Model for Moroccan Arabic
## Overview
Al-Atlas is a 0.5B parameter language model specifically trained on Moroccan Darija, making it the first dedicated foundation model for Morocco's primary spoken dialect. The model was finetuned from [Qwen-2.5](https://huggingface.co/Qwen/Qwen2.5-0.5B) and trained on a carefully curated dataset of 155M tokens, focusing exclusively on authentic Moroccan Darija content.
## Key Features
- **Dedicated Darija Model**: First language model trained specifically for Moroccan Arabic dialect
- **High-Quality Data**: Curated dataset of 155M tokens from authentic Moroccan sources
- **Cultural Understanding**: Captures nuanced cultural context and local expressions
## Dataset
Our training corpus consists of 155M tokens of pure Moroccan Darija content, sourced from:
- Social media conversations
- Transcribed spoken content
- Online forums and discussions
- Local news and media
- User-generated content
Each source was carefully vetted to ensure authentic representation of the dialect and remove any Modern Standard Arabic (MSA) or other Arabic dialect contamination.
## Model Details
- **Architecture**: Transformer-based language model
- **Parameters**: 0.5B
- **Context Window**: 2048 tokens
- **Training Data**: 155M tokens of pure Darija content
## Use Cases
- Chatbots for Moroccan users
- Content generation in Darija
- Text classification for Moroccan content
- Sentiment analysis for local markets
- Customer service automation
- Educational tools for Darija speakers
## Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("atlasia/Al-Atlas-0.5B")
tokenizer = AutoTokenizer.from_pretrained("atlasia/Al-Atlas-0.5B")
# Example usage
text = "الذكاء الاصطناعي هو فرع من علوم الكمبيوتر اللي كيركز"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
    repetition_penalty=1.5,
    num_beams=8,
    top_p=0.9,
    top_k=150,
    do_sample=True,
    early_stopping=True,
)
response = tokenizer.decode(outputs[0])
# response (example output, kept below as a string literal so the block stays valid Python):
example_response = """
الذكاء الاصطناعي هو فرع من علوم الكمبيوتر اللي كيركز گاع على تطوير الآلات اللي قادرة تدير مهام اللي عادة خاصها ذكاء بشري، بحال التعرف على الأنماط، حل المشاكل، اتخاذ القرارات، وفهم اللغة الطبيعية. الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا.
واحد من أهم التطبيقات ديال الذكاء الاصطناعي هو فالصحة. الذكاء الاصطناعي يقدر يعاون الطبة باش يشخصو الأمراض بدقة أكبر، يأوتوماتيزيو المهام الروتينية، ويحسنو نتائج المرضى. مثلا، الخوارزميات ديال الذكاء الاصطناعي تقدر تحلل الصور الطبية باش تكتاشف العلامات الحيوية اللي ممكن ما تكونش واضحة للفحص البشري. زيادة على هادشي، الذكاء الاصطناعي يقدر يعاون الأطباء باش يصاوبو خطط علاج مخصصة حسب الاحتياجات الخاصة ديال كل مريض.
بالإضافة للصحة، الذكاء الاصطناعي عندو إمكانية باش يغير الطريقة اللي كنتفاعلو بيها مع التكنولوجيا. مثلا، الذكاء الاصطناعي يقدر يعاون الشركات باش يحسنو العمليات ديالهم، يأوتوماتيزيو المهام الروتينية، ويحسنو تجربة الزبون. زيادة على هادشي، الذكاء الاصطناعي يقدر يعاون الفلاحة باش يزيدو الإنتاجية، ينقصو التكاليف، ويحسنو جودة المحاصيل.
فالختام، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. من خلال تطوير أنظمة ذكاء اصطناعي متقدمة، نقدرو نحسنو الكفاءة، نحسنو جودة الحياة، ونخلقو عالم أحسن للأجيال الجاية. مع استمرار تطور الذكاء الاصطناعي، من المرجح أنه غادي يلعب دور أكثر أهمية فتشكيل مستقبل البشرية. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا. ولكن، خاصنا نكونو واعيين بالمخاطر والتحديات المرتبطة بالذكاء الاصطناعي باش نستافدو منو بأحسن طريقة. فالنهاية، الذكاء الاصطناعي عندو إمكانية باش يغير بزاف كيفاش كنعيشو، نخدمو، ونتفاعلو مع بعضياتنا
"""
```
## Future Work
- Scaling to larger model sizes
- Creating evaluation benchmarks for Darija
- Expanding the training dataset
- Developing task-specific fine-tuned versions
- SFT training
```bibtex
@misc{atlasia2025al-atlas-0.5B,
title={Al-Atlas: A Causal Language Model for Moroccan Darija},
author={Abdelaziz Bounhar},
year={2025},
howpublished={\url{https://huggingface.co/atlasia/Al-Atlas-0.5B/}},
organization={AtlasIA}
}
```
|
{"base_model": ["Qwen/Qwen2.5-0.5B"], "datasets": ["atlasia/Atlaset"], "language": ["ar", "ary"], "library_name": "transformers", "pipeline_tag": "text-generation"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,154 |
Azaz666/flan-t5-strategyqa
|
Azaz666
| null |
[
"t5",
"arxiv:2004.06364",
"region:us"
] | 2025-01-28T17:04:08Z |
2025-01-29T21:23:26+00:00
| 17 | 0 |
---
{}
---
# FLAN-T5 for StrategyQA

This repository contains a fine-tuned version of the FLAN-T5 model for the StrategyQA dataset. The model is trained to perform multi-step reasoning and answer complex multiple-choice questions, leveraging knowledge stored in external resources.

## Model Overview

FLAN-T5 (Fine-tuned LAnguage Net T5) is a variant of T5 (Text-to-Text Transfer Transformer) that has been fine-tuned on a wide variety of tasks to improve its ability to generalize across diverse NLP tasks.

### StrategyQA Dataset

StrategyQA is a dataset designed for multi-step reasoning tasks, where each question requires a sequence of logical steps to arrive at the correct answer. It focuses on commonsense reasoning and question answering.

This model has been fine-tuned specifically to answer questions from the StrategyQA dataset by retrieving relevant knowledge and reasoning through it.

## Model Description

This model was fine-tuned using the FLAN-T5 architecture on the StrategyQA dataset. It is designed to answer multi-step reasoning questions by retrieving relevant documents and reasoning over them.

- Base Model: FLAN-T5
- Fine-tuned Dataset: StrategyQA
- Task: Multi-step reasoning for question answering
- Retriever Type: Dense retriever (using models like ColBERT or DPR for document retrieval)

## Intended Use

This model is designed for multi-step reasoning tasks and can be leveraged for a variety of question-answering tasks where the answer requires more than one step of reasoning. It is particularly useful for domains like commonsense reasoning, knowledge-intensive tasks, and complex decision-making questions.

## How to Use

To use the model for inference, follow these steps:

### Installation

To install the Hugging Face transformers library, run the following:

```bash
pip install transformers
```

### Example Code

You can use the model with the following Python code:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the model and tokenizer
model_name = "Azaz666/flan-t5-strategyqa"  # Replace with your model name if necessary
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)

# Example question
question = "What is the capital of France?"

# Tokenize the input question
input_ids = tokenizer.encode("question: " + question, return_tensors="pt")

# Generate and decode the answer
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

### Example Prompt and Output

```
Prompt:
The following are multiple-choice questions about common knowledge. Generate a step-by-step explanations for each question with given common knowledge. Question: Was Pi an acceptable number of children in 1980s China? A. yes B. no Knowledge: Left-behind children in China . The National Bureau of Statistics of China calculated in 2011 that the number of rural-urban migrants has exceeded 221 million, 16.5% of China’s population. The National Demographic Development Strategy Report published in 2007 expects the number of rural-urban migrants to increase to up to 300 million people over the next 20 years. This increase in rural-urban migration is consistent with an increase in migrant children and left-behind children. The most recent census published by The National Bureau of Statistics of China calculated that there were 65,782,879 migrant children and left-behind children ages 0–17 in the year 2015. Explanation:</s>

Model output:
Firstly, we need to consider the number of children in the 1980s in China. According to the National Bureau of Statistics, in the year 1980s, there were around 53 million children. Therefore, Pi was not an acceptable number of children. Secondly, we need to know the total number of children in the 1980s in China. According to the most recent census of China, there were around 69 million children. Finally, we need to determine whether Pi was enough to accommodate the influx of children from outside China. According to the National Bureau of Statistics, the number of children from 2017 to 2021 is estimated to be around 70 million. Therefore, Pi was not an acceptable number of children in the 1980s. In fact, it was not even considered an acceptable number. In conclusion, Pi was not an acceptable number of children in 1980s in China, as it was not considered an acceptable number. Hence, the correct answer is (B) no. Answer: B

['B', 'B', 'A', 'B', 'B']
Prediction: B Answer: B
Confidence: 0.6230366492146597
```

## Model Training Details

The model was fine-tuned using the StrategyQA dataset. Here's a brief overview of the training setup:

- Pre-trained Model: flan-t5-large
- Training Dataset: StrategyQA
- Training Steps: The model was fine-tuned on the StrategyQA dataset, which contains questions requiring multiple reasoning steps.
- Evaluation Metrics: Model performance was evaluated on accuracy (whether the predicted answer matched the ground truth).

## Limitations

- Context Length: The model is limited by the input size; longer questions or passages may be truncated.
- Generalization: While fine-tuned for multi-step reasoning, performance may vary with the complexity of the question.

## Citation

If you use this model or dataset, please cite the following paper:

- StrategyQA: https://arxiv.org/abs/2004.06364

## License

This model is licensed under the MIT License.
| null |
Non_BioNLP
|
# FLAN-T5 for StrategyQA

This repository contains a fine-tuned version of the FLAN-T5 model for the StrategyQA dataset. The model is trained to perform multi-step reasoning and answer complex multiple-choice questions, leveraging knowledge stored in external resources.

## Model Overview

FLAN-T5 (Fine-tuned LAnguage Net T5) is a variant of T5 (Text-to-Text Transfer Transformer) that has been fine-tuned on a wide variety of tasks to improve its ability to generalize across diverse NLP tasks.

### StrategyQA Dataset

StrategyQA is a dataset designed for multi-step reasoning tasks, where each question requires a sequence of logical steps to arrive at the correct answer. It focuses on commonsense reasoning and question answering.

This model has been fine-tuned specifically to answer questions from the StrategyQA dataset by retrieving relevant knowledge and reasoning through it.

## Model Description

This model was fine-tuned using the FLAN-T5 architecture on the StrategyQA dataset. It is designed to answer multi-step reasoning questions by retrieving relevant documents and reasoning over them.

- Base Model: FLAN-T5
- Fine-tuned Dataset: StrategyQA
- Task: Multi-step reasoning for question answering
- Retriever Type: Dense retriever (using models like ColBERT or DPR for document retrieval)

## Intended Use

This model is designed for multi-step reasoning tasks and can be leveraged for a variety of question-answering tasks where the answer requires more than one step of reasoning. It is particularly useful for domains like commonsense reasoning, knowledge-intensive tasks, and complex decision-making questions.

## How to Use

To use the model for inference, follow these steps:

### Installation

To install the Hugging Face transformers library, run the following:

```bash
pip install transformers
```

### Example Code

You can use the model with the following Python code:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the model and tokenizer
model_name = "Azaz666/flan-t5-strategyqa"  # Replace with your model name if necessary
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)

# Example question
question = "What is the capital of France?"

# Tokenize the input question
input_ids = tokenizer.encode("question: " + question, return_tensors="pt")

# Generate and decode the answer
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

### Example Prompt and Output

```
Prompt:
The following are multiple-choice questions about common knowledge. Generate a step-by-step explanations for each question with given common knowledge. Question: Was Pi an acceptable number of children in 1980s China? A. yes B. no Knowledge: Left-behind children in China . The National Bureau of Statistics of China calculated in 2011 that the number of rural-urban migrants has exceeded 221 million, 16.5% of China’s population. The National Demographic Development Strategy Report published in 2007 expects the number of rural-urban migrants to increase to up to 300 million people over the next 20 years. This increase in rural-urban migration is consistent with an increase in migrant children and left-behind children. The most recent census published by The National Bureau of Statistics of China calculated that there were 65,782,879 migrant children and left-behind children ages 0–17 in the year 2015. Explanation:</s>

Model output:
Firstly, we need to consider the number of children in the 1980s in China. According to the National Bureau of Statistics, in the year 1980s, there were around 53 million children. Therefore, Pi was not an acceptable number of children. Secondly, we need to know the total number of children in the 1980s in China. According to the most recent census of China, there were around 69 million children. Finally, we need to determine whether Pi was enough to accommodate the influx of children from outside China. According to the National Bureau of Statistics, the number of children from 2017 to 2021 is estimated to be around 70 million. Therefore, Pi was not an acceptable number of children in the 1980s. In fact, it was not even considered an acceptable number. In conclusion, Pi was not an acceptable number of children in 1980s in China, as it was not considered an acceptable number. Hence, the correct answer is (B) no. Answer: B

['B', 'B', 'A', 'B', 'B']
Prediction: B Answer: B
Confidence: 0.6230366492146597
```

## Model Training Details

The model was fine-tuned using the StrategyQA dataset. Here's a brief overview of the training setup:

- Pre-trained Model: flan-t5-large
- Training Dataset: StrategyQA
- Training Steps: The model was fine-tuned on the StrategyQA dataset, which contains questions requiring multiple reasoning steps.
- Evaluation Metrics: Model performance was evaluated on accuracy (whether the predicted answer matched the ground truth).

## Limitations

- Context Length: The model is limited by the input size; longer questions or passages may be truncated.
- Generalization: While fine-tuned for multi-step reasoning, performance may vary with the complexity of the question.

## Citation

If you use this model or dataset, please cite the following paper:

- StrategyQA: https://arxiv.org/abs/2004.06364

## License

This model is licensed under the MIT License.
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,155 |
YakovElm/Qt10SetFitModel_Train_balance_ratio_2
|
YakovElm
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-06-11T16:13:36Z |
2023-06-11T16:14:21+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Qt10SetFitModel_Train_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt10SetFitModel_Train_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# YakovElm/Qt10SetFitModel_Train_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt10SetFitModel_Train_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,156 |
albertdestajo/distilbert-base-uncased-finetuned-mrpc
|
albertdestajo
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-26T18:31:51Z |
2022-09-27T05:40:59+00:00
| 10 | 0 |
---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- type: accuracy
value: 0.7916666666666666
name: Accuracy
- type: f1
value: 0.8608837970540099
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Accuracy: 0.7917
- F1: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4474 | 1.0 | 230 | 0.4502 | 0.7917 | 0.8609 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
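## Inference example (illustrative)

A minimal sketch, not part of the auto-generated card: MRPC is a sentence-pair task, so the classification pipeline is fed a `text`/`text_pair` dictionary. The repo id is assumed from the card title.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="albertdestajo/distilbert-base-uncased-finetuned-mrpc",  # assumed repo id
)

# Predict whether the two sentences are paraphrases of each other.
pair = {
    "text": "The company said the cuts would save about $1 million a year.",
    "text_pair": "According to the company, the reductions will save roughly $1 million annually.",
}
print(clf(pair))  # e.g. [{'label': 'LABEL_1', 'score': ...}]; LABEL_1 is "equivalent" in GLUE MRPC
```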
| null |
TBD
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Accuracy: 0.7917
- F1: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4474 | 1.0 | 230 | 0.4502 | 0.7917 | 0.8609 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "train", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.7916666666666666, "name": "Accuracy"}, {"type": "f1", "value": 0.8608837970540099, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,157 |
leia-llm/Leia-Swallow-7b
|
leia-llm
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ja",
"arxiv:2402.11485",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-17T07:12:28Z |
2024-04-17T10:29:56+00:00
| 16 | 2 |
---
language:
- ja
license: apache-2.0
---
# Leia-Swallow-7B
LEIA is a training technique for autoregressive LLMs that effectively improves their performance in languages other than English by enhancing cross-lingual knowledge transfer from English to a target language.
This model is constructed by applying LEIA to Swallow, a Japanese-English bilingual LLM based on LLaMA 2.
The model achieves enhanced performance on six Japanese question-answering benchmarks, as reported below.
Please refer to our paper or blog post (in Japanese) for further technical details.
- [LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation](https://arxiv.org/abs/2402.11485) (arxiv.org)
- [LEIA: 言語間転移学習でLLMを賢くする新しい方法](#) (zenn.dev; in English: "LEIA: a new method for making LLMs smarter through cross-lingual transfer learning")
## Model List
- [Leia-Swallow-7b](https://huggingface.co/leia-llm/Leia-Swallow-7b/)
- [Leia-Swallow-13b](https://huggingface.co/leia-llm/Leia-Swallow-13b/)
## Empirical Results
The model is assessed using the following six question answering benchmarks:
- X-CODAH
- X-CSQA
- JCommonsenseQA
- NIILC
- JEMHopQA
- JAQKET v2
| Model | X-CODAH | X-CSQA | JCommonsenseQA | NIILC | JEMHopQA | JAQKET v2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Swallow | 42.0 | 41.0 | 80.3 | 59.5 | 50.8 | 86.2 |
| LEIA | **42.7** | **42.4** | **80.6** | **60.3** | **54.7** | **86.5** |
For further details of this experiment, please refer to [our paper](https://arxiv.org/abs/2402.11485).
## Contributors
- Ikuya Yamada (Studio Ousia, RIKEN)
- Ryokan Ri (LY Corporation, SB Intuitions)
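## Usage example (illustrative)

A minimal generation sketch, not part of the original card, assuming standard `transformers` causal-LM usage for this LLaMA-2-based checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leia-llm/Leia-Swallow-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "日本で一番高い山は"  # "The tallest mountain in Japan is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```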
| null |
Non_BioNLP
|
# Leia-Swallow-7B
LEIA is a training technique for autoregressive LLMs that effectively improves their performance in languages other than English by enhancing cross-lingual knowledge transfer from English to a target language.
This model is constructed by applying LEIA to Swallow, a Japanese-English bilingual LLM based on LLaMA 2.
The model achieves enhanced performance on six Japanese question-answering benchmarks, as reported below.
Please refer to our paper or blog post (in Japanese) for further technical details.
- [LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation](https://arxiv.org/abs/2402.11485) (arxiv.org)
- [LEIA: 言語間転移学習でLLMを賢くする新しい方法](#) (zenn.dev; in English: "LEIA: a new method for making LLMs smarter through cross-lingual transfer learning")
## Model List
- [Leia-Swallow-7b](https://huggingface.co/leia-llm/Leia-Swallow-7b/)
- [Leia-Swallow-13b](https://huggingface.co/leia-llm/Leia-Swallow-13b/)
## Empirical Results
The model is assessed using the following six question answering benchmarks:
- X-CODAH
- X-CSQA
- JCommonsenseQA
- NIILC
- JEMHopQA
- JAQKET v2
| Model | X-CODAH | X-CSQA | JCommonsenseQA | NIILC | JEMHopQA | JAQKET v2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Swallow | 42.0 | 41.0 | 80.3 | 59.5 | 50.8 | 86.2 |
| LEIA | **42.7** | **42.4** | **80.6** | **60.3** | **54.7** | **86.5** |
For further details of this experiment, please refer to [our paper](https://arxiv.org/abs/2402.11485).
## Contributors
- Ikuya Yamada (Studio Ousia, RIKEN)
- Ryokan Ri (LY Corporation, SB Intuitions)
|
{"language": ["ja"], "license": "apache-2.0"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,158 |
gaudi/opus-mt-en-es-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T14:58:35Z |
2024-10-19T00:10:04+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-es)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-es).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-es --output_dir ./ctranslate2/opus-mt-en-es-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
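As the Internal Benchmarks section notes, modifying this conversion trades translation quality against speed. A minimal sketch (assuming CTranslate2's Python `TransformersConverter` API; the output path and the `int8` choice are illustrative, not what this repository used):
```python
import ctranslate2

# Hypothetical re-conversion of the original checkpoint with int8 weights;
# typically smaller and faster on CPU, possibly at a small BLEU cost.
converter = ctranslate2.converters.TransformersConverter("Helsinki-NLP/opus-mt-en-es")
converter.convert("./opus-mt-en-es-ctranslate2-int8", quantization="int8", force=True)
```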
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
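As a small sketch of applying these recommendations at runtime (assuming `ctranslate2.get_cuda_device_count`, which the ctranslate2 Python package exposes):
```python
import ctranslate2

# Pick device and compute type following the list above.
device = "cuda" if ctranslate2.get_cuda_device_count() > 0 else "cpu"
compute_type = "int8_float16" if device == "cuda" else "int8"
translator = ctranslate2.Translator("./opus-mt-en-es-ctranslate2", device=device, compute_type=compute_type)
```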
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-es-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-es-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-es-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-es)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Benchmarks were run against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`CPU benchmarks were executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**The source for this benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-es).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-es --output_dir ./ctranslate2/opus-mt-en-es-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-es-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-es-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-es-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 42,159 |
speedcell4/ec40-64k
|
speedcell4
|
translation
|
[
"transformers",
"safetensors",
"m2m_100",
"feature-extraction",
"translation",
"dataset:ShaomuTan/EC40",
"endpoints_compatible",
"region:us"
] | 2025-02-04T10:17:01Z |
2025-02-09T01:39:25+00:00
| 116 | 0 |
---
datasets:
- ShaomuTan/EC40
library_name: transformers
metrics:
- sacrebleu
pipeline_tag: translation
---
## Uses
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained('speedcell4/ec40-64k')
tokenizer = AutoTokenizer.from_pretrained('speedcell4/ec40-64k')
translation = pipeline(
'translation', model=model, tokenizer=tokenizer,
src_lang='<en>', tgt_lang='<ru>',
max_length=256,
)
result, = translation(['The quick brown fox jumps over the lazy dog.'])
print(result['translation_text'])
# Быстрая коричневая лиса прыгает над ленивым собаком.
```
| null |
Non_BioNLP
|
## Uses
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model = AutoModelForSeq2SeqLM.from_pretrained('speedcell4/ec40-64k')
tokenizer = AutoTokenizer.from_pretrained('speedcell4/ec40-64k')
translation = pipeline(
'translation', model=model, tokenizer=tokenizer,
src_lang='<en>', tgt_lang='<ru>',
max_length=256,
)
result, = translation(['The quick brown fox jumps over the lazy dog.'])
print(result['translation_text'])
# Быстрая коричневая лиса прыгает над ленивым собаком.
```
|
{"datasets": ["ShaomuTan/EC40"], "library_name": "transformers", "metrics": ["sacrebleu"], "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 42,160 |
adamo1139/Yi-1.5-9B-base-mirror
|
adamo1139
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-12T19:46:17Z |
2024-05-12T19:50:23+00:00
| 4 | 0 |
---
license: apache-2.0
---
LICENSE: APACHE 2.0
Thanks 01.ai!!
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
| null |
Non_BioNLP
|
LICENSE: APACHE 2.0
Thanks 01.ai!!
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
|
{"license": "apache-2.0"}
|
task
|
[
"TRANSLATION"
] | 42,161 |
skywood/NHNDQ-nllb-finetuned-ko2en-ct2-float16
|
skywood
|
translation
|
[
"transformers",
"translation",
"en",
"ko",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 2024-04-07T14:06:37Z |
2024-04-08T12:06:40+00:00
| 19 | 0 |
---
language:
- en
- ko
license: cc-by-4.0
tags:
- translation
---
I only converted the original model to CTranslate2 format.
cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-ko2en --quantization float16 --output_dir NHNDQ-nllb-finetuned-ko2en-ct2
All copyrights belong to the original authors and the CT model may be deleted upon request. Below is the original model information.
Original URL : https://huggingface.co/NHNDQ/nllb-finetuned-en2ko
## Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M ct2 model
* Developed by: DanielHeo
## Original Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Juhwan Lee, Jisu Kim, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
* Source Language: English
* Target Language: Korean
* License: CC-BY-4.0
## Dataset
* [AI-hub dataset](https://www.aihub.or.kr/)
## BLEU Score
* Before fine-tuning: 26.58
* After fine-tuning: 55.36
## Uses
This model can be used for translation and text-to-text generation
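Since the card does not include usage code, here is a minimal inference sketch (my assumption, following the standard CTranslate2 pattern for NLLB models; the local path and example sentence are illustrative):
```python
import ctranslate2
import transformers

# Load the converted model (path matches the --output_dir above, assuming a local copy)
# together with the original tokenizer.
translator = ctranslate2.Translator("NHNDQ-nllb-finetuned-ko2en-ct2", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained("NHNDQ/nllb-finetuned-ko2en", src_lang="kor_Hang")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("안녕하세요."))
# NLLB models expect the target language token as a decoding prefix.
results = translator.translate_batch([source], target_prefix=[["eng_Latn"]])
target = results[0].hypotheses[0][1:]  # drop the language token
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```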
## Data Augmentation with backtranslation task
You can perform Korean data augmentation tasks with the Python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main)
| null |
Non_BioNLP
|
I only converted the original model to CTranslate2 format.
cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-ko2en --quantization float16 --output_dir NHNDQ-nllb-finetuned-ko2en-ct2
All copyrights belong to the original authors and the CT model may be deleted upon request. Below is the original model information.
Original URL : https://huggingface.co/NHNDQ/nllb-finetuned-en2ko
## Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M ct2 model
* Developed by: DanielHeo
## Original Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Juhwan Lee, Jisu Kim, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
* Source Language: English
* Target Language: Korean
* License: CC-BY-4.0
## Dataset
* [AI-hub dataset](https://www.aihub.or.kr/)
## BLEU Score
* Before fine-tuning: 26.58
* After fine-tuning: 55.36
## Uses
This model can be used for translation and text-to-text generation
## Data Augmentation with backtranslation task
You can perform Korean data augmentation tasks with the Python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main)
|
{"language": ["en", "ko"], "license": "cc-by-4.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 42,162 |
LVSTCK/domestic-yak-8B-instruct
|
LVSTCK
| null |
[
"safetensors",
"llama",
"mkd",
"mk",
"macedonia",
"dataset:LVSTCK/sft-mk",
"dataset:LVSTCK/Open-Platypus-MK",
"dataset:LVSTCK/ultrachat-sft-mk",
"base_model:LVSTCK/domestic-yak-8B",
"base_model:finetune:LVSTCK/domestic-yak-8B",
"license:llama3.1",
"region:us"
] | 2025-01-14T17:55:36Z |
2025-01-19T22:38:13+00:00
| 28 | 0 |
---
base_model:
- LVSTCK/domestic-yak-8B
datasets:
- LVSTCK/sft-mk
- LVSTCK/Open-Platypus-MK
- LVSTCK/ultrachat-sft-mk
language:
- mk
license: llama3.1
tags:
- mkd
- mk
- macedonia
---
# 🐂 domestic-yak, a Macedonian LM (instruct version)
## Model Summary
This is the **instruct-tuned version** of [domestic-yak-8B](https://huggingface.co/LVSTCK/domestic-yak-8B). It has been fine-tuned specifically to improve instruction-following capabilities in Macedonian. It was fine-tuned on the [sft-mk dataset](https://huggingface.co/datasets/LVSTCK/sft-mk) for three epochs. Building on the foundation of `domestic-yak-8B`, this version is optimized for generating coherent, task-specific responses to user queries, making it ideal for chatbots, virtual assistants, and other interactive applications.
### 📊 Results
The table below compares the performance of our model, domestic-yak-8B-instruct, with four other models. As we can see, our model is on par with Llama 70B, and even beats it on three of the benchmarks. It is also worth noting that this model is currently the best in the 8B parameter range.
The results were obtained using the [macedonian-llm-eval](https://github.com/LVSTCK/macedonian-llm-eval) benchmark.

## 🔑 Key Details
- **Language:** Macedonian (`mk`)
- **Base Model:** [domestic-yak-8B](https://huggingface.co/LVSTCK/domestic-yak-8B)
- **Dataset:** ~100k samples across multiple categories (Question answering (QA), chat-like conversations, reasoning, essays, and code) consolidated from translating publicly available datasets and custom synthetic data. Dataset can be found [here](https://huggingface.co/datasets/LVSTCK/sft-mk).
- **Fine-tuning Objective:** Supervised fine-tuning (SFT) on Macedonian-specific instruction-following data
## Usage
The pipeline automatically uses `apply_chat_template`, which formats the input appropriately. The model was trained using the default Llama 3.1 [format](https://www.llama.com/docs/model-cards-and-prompt-formats/meta-llama-3/).
```python
import transformers
import torch
model_id = "LVSTCK/domestic-yak-8B-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "Ти си виртуелен асистент кој помага на корисници на македонски јазик. Одговарај на прашања на јасен, разбирлив и професионален начин. Користи правилна граматика и обиди се одговорите да бидат што е можно покорисни и релевантни."},
{"role": "user", "content": "Кој е највисок врв во Македонија?"},
]
outputs = pipeline(
messages,
max_new_tokens=256, # You can increase this
temperature=0.1,
)
print(outputs[0]["generated_text"][-1])
```
## 📬 Contact
For inquiries, feedback, or contributions, please feel free to reach out to the core team:
- [Stefan Krsteski](https://www.linkedin.com/in/stefan-krsteski-136abb235/) [📧](mailto:[email protected])
- [Matea Tashkovska](https://www.linkedin.com/in/matea-tashkovska-774603198/) [📧](mailto:[email protected])
- [Borjan Sazdov](https://www.linkedin.com/in/borjan-sazdov-4b2187211/) [📧](mailto:[email protected])
## Citation
```
@misc{domestic-yak-8B,
title={domestic-yak-8B: A Macedonian Language Model},
author={Stefan Krsteski and Matea Tashkovska and Borjan Sazdov},
year={2024},
url={https://huggingface.co/LVSTCK/domestic-yak-8B},
note={Macedonian adaptation of Llama 8B.}
}
```
| null |
Non_BioNLP
|
# 🐂 domestic-yak, a Macedonian LM (instruct version)
## Model Summary
This is the **instruct-tuned version** of [domestic-yak-8B](https://huggingface.co/LVSTCK/domestic-yak-8B). It has been fine-tuned specifically to improve instruction-following capabilities in Macedonian. It was fine-tuned on the [sft-mk dataset](https://huggingface.co/datasets/LVSTCK/sft-mk) for three epochs. Building on the foundation of `domestic-yak-8B`, this version is optimized for generating coherent, task-specific responses to user queries, making it ideal for chatbots, virtual assistants, and other interactive applications.
### 📊 Results
The table below compares the performance of our model, domestic-yak-8B-instruct, with four other models. As we can see, our model is on par with Llama 70B, and even beats it on three of the benchmarks. It is also worth noting that this model is currently the best in the 8B parameter range.
The results were obtained using the [macedonian-llm-eval](https://github.com/LVSTCK/macedonian-llm-eval) benchmark.

## 🔑 Key Details
- **Language:** Macedonian (`mk`)
- **Base Model:** [domestic-yak-8B](https://huggingface.co/LVSTCK/domestic-yak-8B)
- **Dataset:** ~100k samples across multiple categories (Question answering (QA), chat-like conversations, reasoning, essays, and code) consolidated from translating publicly available datasets and custom synthetic data. Dataset can be found [here](https://huggingface.co/datasets/LVSTCK/sft-mk).
- **Fine-tuning Objective:** Supervised fine-tuning (SFT) on Macedonian-specific instruction-following data
## Usage
The pipeline automatically uses `apply_chat_template`, which formats the input appropriately. The model was trained using the default Llama 3.1 [format](https://www.llama.com/docs/model-cards-and-prompt-formats/meta-llama-3/).
```python
import transformers
import torch
model_id = "LVSTCK/domestic-yak-8B-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "Ти си виртуелен асистент кој помага на корисници на македонски јазик. Одговарај на прашања на јасен, разбирлив и професионален начин. Користи правилна граматика и обиди се одговорите да бидат што е можно покорисни и релевантни."},
{"role": "user", "content": "Кој е највисок врв во Македонија?"},
]
outputs = pipeline(
messages,
max_new_tokens=256, # You can increase this
temperature=0.1,
)
print(outputs[0]["generated_text"][-1])
```
## 📬 Contact
For inquiries, feedback, or contributions, please feel free to reach out to the core team:
- [Stefan Krsteski](https://www.linkedin.com/in/stefan-krsteski-136abb235/) [📧](mailto:[email protected])
- [Matea Tashkovska](https://www.linkedin.com/in/matea-tashkovska-774603198/) [📧](mailto:[email protected])
- [Borjan Sazdov](https://www.linkedin.com/in/borjan-sazdov-4b2187211/) [📧](mailto:[email protected])
## Citation
```
@misc{domestic-yak-8B,
title={domestic-yak-8B: A Macedonian Language Model},
author={Stefan Krsteski and Matea Tashkovska and Borjan Sazdov},
year={2024},
url={https://huggingface.co/LVSTCK/domestic-yak-8B},
note={Macedonian adaptation of Llama 8B.}
}
```
|
{"base_model": ["LVSTCK/domestic-yak-8B"], "datasets": ["LVSTCK/sft-mk", "LVSTCK/Open-Platypus-MK", "LVSTCK/ultrachat-sft-mk"], "language": ["mk"], "license": "llama3.1", "tags": ["mkd", "mk", "macedonia"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,163 |
kabelomalapane/En-Zu_update
|
kabelomalapane
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-28T10:55:08Z |
2022-07-28T13:24:27+00:00
| 18 | 0 |
---
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: En-Zu_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Zu_update
This model is a fine-tuned version of [kabelomalapane/test_model1.2_updated](https://huggingface.co/kabelomalapane/test_model1.2_updated) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7101
- Bleu: 11.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
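In the absence of documented usage, here is a minimal inference sketch (my assumption that the standard Hugging Face translation pipeline applies to this Marian checkpoint; the example sentence is illustrative):
```python
from transformers import pipeline

# Hypothetical usage: English -> Zulu translation with the fine-tuned model.
translator = pipeline("translation", model="kabelomalapane/En-Zu_update")
print(translator("Good morning, how are you?")[0]["translation_text"])
```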
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.9111 | 1.0 | 1173 | 1.7594 | 11.7012 |
| 1.7191 | 2.0 | 2346 | 1.7279 | 12.0250 |
| 1.5709 | 3.0 | 3519 | 1.7172 | 10.6222 |
| 1.4924 | 4.0 | 4692 | 1.7042 | 11.4224 |
| 1.4188 | 5.0 | 5865 | 1.7051 | 11.4330 |
| 1.3566 | 6.0 | 7038 | 1.6972 | 11.5300 |
| 1.3141 | 7.0 | 8211 | 1.7041 | 11.4339 |
| 1.2641 | 8.0 | 9384 | 1.7064 | 11.4030 |
| 1.2437 | 9.0 | 10557 | 1.7079 | 11.4014 |
| 1.2333 | 10.0 | 11730 | 1.7101 | 11.5164 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Zu_update
This model is a fine-tuned version of [kabelomalapane/test_model1.2_updated](https://huggingface.co/kabelomalapane/test_model1.2_updated) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7101
- Bleu: 11.8551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.9111 | 1.0 | 1173 | 1.7594 | 11.7012 |
| 1.7191 | 2.0 | 2346 | 1.7279 | 12.0250 |
| 1.5709 | 3.0 | 3519 | 1.7172 | 10.6222 |
| 1.4924 | 4.0 | 4692 | 1.7042 | 11.4224 |
| 1.4188 | 5.0 | 5865 | 1.7051 | 11.4330 |
| 1.3566 | 6.0 | 7038 | 1.6972 | 11.5300 |
| 1.3141 | 7.0 | 8211 | 1.7041 | 11.4339 |
| 1.2641 | 8.0 | 9384 | 1.7064 | 11.4030 |
| 1.2437 | 9.0 | 10557 | 1.7079 | 11.4014 |
| 1.2333 | 10.0 | 11730 | 1.7101 | 11.5164 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
{"license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "En-Zu_update", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,164 |
HelgeKn/Swag-multi-class-20
|
HelgeKn
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-14T20:48:53Z |
2023-12-14T20:50:09+00:00
| 49 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: She picks up a wine glass and takes a drink. She
- text: Someone smiles as she looks out her window. Their car
- text: Someone turns and her jaw drops at the site of the other woman. Moving in
slow motion, someone
- text: He sneers and winds up with his fist. Someone
- text: He smooths it back with his hand. Finally, appearing confident and relaxed
and with the old familiar glint in his eyes, someone
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.16538461538461538
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 9 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 8 | <ul><li>'Later she meets someone at the bar. He'</li><li>'He heads to them and sits. The bus'</li><li>'Someone leaps to his feet and punches the agent in the face. Seemingly unaffected, the agent'</li></ul> |
| 2 | <ul><li>'A man sits behind a desk. Two people'</li><li>'A man is seen standing at the bottom of a hole while a man records him. Two men'</li><li>'Someone questions his female colleague who shrugs. Through a window, we'</li></ul> |
| 0 | <ul><li>'A woman bends down and puts something on a scale. She then'</li><li>'He pulls down the blind. He'</li><li>'Someone flings his hands forward. The someone fires, but the water'</li></ul> |
| 6 | <ul><li>'People are sitting down on chairs. They'</li><li>'They look up at stained glass skylights. The Americans'</li><li>'The lady and the man dance around each other in a circle. The people'</li></ul> |
| 1 | <ul><li>'An older gentleman kisses her. As he leads her off, someone'</li><li>'The first girl comes back and does it effortlessly as the second girl still struggles. For the last round, the girl'</li><li>'As she leaves, the bartender smiles. Now the blonde'</li></ul> |
| 3 | <ul><li>'Someone lowers his demoralized gaze. Someone'</li><li>'Someone goes into his bedroom. Someone'</li><li>'As someone leaves, someone spots him on the monitor. Someone'</li></ul> |
| 7 | <ul><li>'Four inches of Plexiglas separate the two and they talk on monitored phones. Someone'</li><li>'The American and Russian commanders each watch them returning. As someone'</li><li>'A group of walkers walk along the sidewalk near the lake. A man'</li></ul> |
| 4 | <ul><li>'The secretary flexes the foot of her crossed - leg as she eyes someone. The woman'</li><li>'A man in a white striped shirt is smiling. A woman'</li><li>'He grabs her hair and pulls her head back. She'</li></ul> |
| 5 | <ul><li>'He heads out of the plaza. Someone'</li><li>"As he starts back, he sees someone's scared look just before he slams the door shut. Someone"</li><li>'He nods at her beaming. Someone'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.1654 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HelgeKn/Swag-multi-class-20")
# Run inference
preds = model("He sneers and winds up with his fist. Someone")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 12.1056 | 33 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 20 |
| 2 | 20 |
| 3 | 20 |
| 4 | 20 |
| 5 | 20 |
| 6 | 20 |
| 7 | 20 |
| 8 | 20 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
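These values correspond to SetFit's `TrainingArguments`; the sketch below (an assumption on my part, using the SetFit 1.0 `Trainer` API with a tiny placeholder dataset drawn from the label examples above) shows how a comparable run could be configured:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny placeholder dataset; the actual card used 20 examples per class.
train_dataset = Dataset.from_dict({
    "text": [
        "Later she meets someone at the bar. He",
        "He heads to them and sits. The bus",
        "Someone lowers his demoralized gaze. Someone",
        "Someone goes into his bedroom. Someone",
    ],
    "label": [8, 8, 3, 3],
})
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    use_differentiable_head=True,      # a SetFitHead classification head
    head_params={"out_features": 9},   # 9 classes, as listed above
)
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(2, 2),
    num_iterations=20,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```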
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3747 | - |
| 0.1111 | 50 | 0.2052 | - |
| 0.2222 | 100 | 0.1878 | - |
| 0.3333 | 150 | 0.1126 | - |
| 0.4444 | 200 | 0.1862 | - |
| 0.5556 | 250 | 0.1385 | - |
| 0.6667 | 300 | 0.0154 | - |
| 0.7778 | 350 | 0.0735 | - |
| 0.8889 | 400 | 0.0313 | - |
| 1.0 | 450 | 0.0189 | - |
| 1.1111 | 500 | 0.0138 | - |
| 1.2222 | 550 | 0.0046 | - |
| 1.3333 | 600 | 0.0043 | - |
| 1.4444 | 650 | 0.0021 | - |
| 1.5556 | 700 | 0.0033 | - |
| 1.6667 | 750 | 0.001 | - |
| 1.7778 | 800 | 0.0026 | - |
| 1.8889 | 850 | 0.0022 | - |
| 2.0 | 900 | 0.0014 | - |
### Framework Versions
- Python: 3.9.13
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.36.0
- PyTorch: 2.1.1+cpu
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 9 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 8 | <ul><li>'Later she meets someone at the bar. He'</li><li>'He heads to them and sits. The bus'</li><li>'Someone leaps to his feet and punches the agent in the face. Seemingly unaffected, the agent'</li></ul> |
| 2 | <ul><li>'A man sits behind a desk. Two people'</li><li>'A man is seen standing at the bottom of a hole while a man records him. Two men'</li><li>'Someone questions his female colleague who shrugs. Through a window, we'</li></ul> |
| 0 | <ul><li>'A woman bends down and puts something on a scale. She then'</li><li>'He pulls down the blind. He'</li><li>'Someone flings his hands forward. The someone fires, but the water'</li></ul> |
| 6 | <ul><li>'People are sitting down on chairs. They'</li><li>'They look up at stained glass skylights. The Americans'</li><li>'The lady and the man dance around each other in a circle. The people'</li></ul> |
| 1 | <ul><li>'An older gentleman kisses her. As he leads her off, someone'</li><li>'The first girl comes back and does it effortlessly as the second girl still struggles. For the last round, the girl'</li><li>'As she leaves, the bartender smiles. Now the blonde'</li></ul> |
| 3 | <ul><li>'Someone lowers his demoralized gaze. Someone'</li><li>'Someone goes into his bedroom. Someone'</li><li>'As someone leaves, someone spots him on the monitor. Someone'</li></ul> |
| 7 | <ul><li>'Four inches of Plexiglas separate the two and they talk on monitored phones. Someone'</li><li>'The American and Russian commanders each watch them returning. As someone'</li><li>'A group of walkers walk along the sidewalk near the lake. A man'</li></ul> |
| 4 | <ul><li>'The secretary flexes the foot of her crossed - leg as she eyes someone. The woman'</li><li>'A man in a white striped shirt is smiling. A woman'</li><li>'He grabs her hair and pulls her head back. She'</li></ul> |
| 5 | <ul><li>'He heads out of the plaza. Someone'</li><li>"As he starts back, he sees someone's scared look just before he slams the door shut. Someone"</li><li>'He nods at her beaming. Someone'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.1654 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HelgeKn/Swag-multi-class-20")
# Run inference
preds = model("He sneers and winds up with his fist. Someone")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 12.1056 | 33 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 20 |
| 2 | 20 |
| 3 | 20 |
| 4 | 20 |
| 5 | 20 |
| 6 | 20 |
| 7 | 20 |
| 8 | 20 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
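For reference, a minimal sketch of how the hyperparameters above map onto the SetFit v1.0 training API. The tiny `train_dataset` below is a placeholder assumption; the actual 180-example training set behind this model is not published in this card.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data with the expected "text"/"label" columns (assumption).
train_dataset = Dataset.from_dict({
    "text": ["He heads to them and sits. The bus", "A man sits behind a desk. Two people"],
    "label": [8, 2],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(
    batch_size=(16, 16),            # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    num_iterations=20,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```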
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3747 | - |
| 0.1111 | 50 | 0.2052 | - |
| 0.2222 | 100 | 0.1878 | - |
| 0.3333 | 150 | 0.1126 | - |
| 0.4444 | 200 | 0.1862 | - |
| 0.5556 | 250 | 0.1385 | - |
| 0.6667 | 300 | 0.0154 | - |
| 0.7778 | 350 | 0.0735 | - |
| 0.8889 | 400 | 0.0313 | - |
| 1.0 | 450 | 0.0189 | - |
| 1.1111 | 500 | 0.0138 | - |
| 1.2222 | 550 | 0.0046 | - |
| 1.3333 | 600 | 0.0043 | - |
| 1.4444 | 650 | 0.0021 | - |
| 1.5556 | 700 | 0.0033 | - |
| 1.6667 | 750 | 0.0010 | - |
| 1.7778 | 800 | 0.0026 | - |
| 1.8889 | 850 | 0.0022 | - |
| 2.0 | 900 | 0.0014 | - |
### Framework Versions
- Python: 3.9.13
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.36.0
- PyTorch: 2.1.1+cpu
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "She picks up a wine glass and takes a drink. She"}, {"text": "Someone smiles as she looks out her window. Their car"}, {"text": "Someone turns and her jaw drops at the site of the other woman. Moving in slow motion, someone"}, {"text": "He sneers and winds up with his fist. Someone"}, {"text": "He smooths it back with his hand. Finally, appearing confident and relaxed and with the old familiar glint in his eyes, someone"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.16538461538461538, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,165 |
hopkins/eng-kor-wsample.42
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-04T16:00:28Z |
2023-07-04T20:09:49+00:00
| 11 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-kor-wsample.42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-wsample.42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9879
- Bleu: 6.7656
## Model description
More information needed
## Intended uses & limitations
More information needed
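Pending fuller documentation, inference should follow the standard mBART-50 translation API; a minimal sketch (the `en_XX`/`ko_KR` language codes and generation settings are assumptions based on the English-to-Korean task):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("hopkins/eng-kor-wsample.42")
tokenizer = MBart50TokenizerFast.from_pretrained("hopkins/eng-kor-wsample.42")

tokenizer.src_lang = "en_XX"  # English source (assumption)
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # Korean target (assumption)
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```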
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-wsample.42
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9879
- Bleu: 6.7656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-kor-wsample.42", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,166 |
gokulsrinivasagan/bert_base_lda_20_v1_cola
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_20_v1",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_20_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-26T16:47:35Z |
2024-12-04T10:51:30+00:00
| 9 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_20_v1
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- matthews_correlation
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_20_v1_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.0
name: Matthews Correlation
- type: accuracy
value: 0.6912751793861389
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_v1_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20_v1) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6199
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 50
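Note that although `num_epochs` is 50, the log below stops at epoch 7, which suggests early stopping was applied (not recorded in the card). As a hedged reconstruction, the configuration corresponds roughly to the following `transformers.TrainingArguments` (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_base_lda_20_v1_cola",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```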
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6136 | 1.0 | 34 | 0.6265 | 0.0 | 0.6913 |
| 0.6112 | 2.0 | 68 | 0.6199 | 0.0 | 0.6913 |
| 0.5873 | 3.0 | 102 | 0.6349 | 0.0699 | 0.6414 |
| 0.5381 | 4.0 | 136 | 0.6502 | 0.1054 | 0.5964 |
| 0.4951 | 5.0 | 170 | 0.6613 | 0.0768 | 0.6443 |
| 0.4506 | 6.0 | 204 | 0.7529 | 0.1282 | 0.5810 |
| 0.4248 | 7.0 | 238 | 0.7049 | 0.0539 | 0.6395 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_v1_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20_v1) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6199
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6136 | 1.0 | 34 | 0.6265 | 0.0 | 0.6913 |
| 0.6112 | 2.0 | 68 | 0.6199 | 0.0 | 0.6913 |
| 0.5873 | 3.0 | 102 | 0.6349 | 0.0699 | 0.6414 |
| 0.5381 | 4.0 | 136 | 0.6502 | 0.1054 | 0.5964 |
| 0.4951 | 5.0 | 170 | 0.6613 | 0.0768 | 0.6443 |
| 0.4506 | 6.0 | 204 | 0.7529 | 0.1282 | 0.5810 |
| 0.4248 | 7.0 | 238 | 0.7049 | 0.0539 | 0.6395 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_20_v1", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["matthews_correlation", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_20_v1_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.0, "name": "Matthews Correlation"}, {"type": "accuracy", "value": 0.6912751793861389, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,167 |
Ananthu357/Ananthus-BAAI-for-contracts10.0
|
Ananthu357
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:626",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-large-en",
"base_model:finetune:BAAI/bge-large-en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-23T21:15:06Z |
2024-08-23T21:17:03+00:00
| 5 | 0 |
---
base_model: BAAI/bge-large-en
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:626
- loss:CosineSimilarityLoss
widget:
- source_sentence: What determines the completion of performance of the contract?
sentences:
- In a tender/contract, in case of any difference, contradiction, discrepancy, with
regard to conditions of tender/contract, specifications, drawings, bill of quantities
etc.
- The Contractor shall at all times during the progress and continuance of the works
and also for the period of maintenance specified in the Tender Form
- What determines the completion of performance of the contract?
- source_sentence: Early completion bonus
sentences:
- In case of ambiguity, order of precedence shall be referred.
- Contractor shall be entitled for a bonus of 1% for each 30 days early completion
of work.
- "The Railway shall have the right to let other contracts in connection with the\
\ works. The Contractor shall afford other Contractors reasonable opportunity\
\ for the storage of their materials and the execution of their works and shall\
\ properly connect and coordinate his work with theirs. If any part of the Contractor's\
\ work depends upon proper execution or result upon the work of another Contractor(s),\
\ the Contractor shall inspect and promptly report to the Engineer any defects\
\ in such works that render it unsuitable for such proper execution and results.\
\ The Contractor's failure so-to inspect and report shall constitute an acceptance\
\ of the other Contractor's work as fit and proper for the reception of his work,\
\ except as to defects which may develop in the other Contractor's work after\
\ the execution of his work."
- source_sentence: Out of scope works
sentences:
- 'as to execution or quality of any work or material, or as to the measurements
of the works the decision of the Engineer thereon shall be final subject to the
appeal (within 7 days of such decision being intimated to the Contractor) to the
Chief Engineer '
- Should works over and above those included in the contract require to be executed
at the site, the Contractor shall have no right to be entrusted with the execution
of such works which may be carried out by another Contractor or Contractors or
by other means at the option of the Railway.
- What is the order of precedence in the case of ambiguity between drawings and
technical specifications?
- source_sentence: Deadline
sentences:
- shall be read in conjunction with the Standard General Conditions of Contract
which are referred to herein and shall be subject to modifications additions or
suppression by Special Conditions of Contract and/or Special Specifications, if
any, annexed to the Tender Forms.
- the sand, stone, clay ballast, earth, trees, rock
- not later than 30 days after the date of receipt
- source_sentence: Can the stones/rocks/bounders obtained during excavation be used
for construction if found technically satisfactory?
sentences:
- use the same for the purpose of the works either free of cost or pay the cost
- Any material found during excavation should be reported to the engineer.
- No certificate other than Maintenance Certificate, if applicable, referred to
in Clause 50 of the Conditions shall be deemed to constitute approval
---
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Ananthu357/Ananthus-BAAI-for-contracts10.0")
# Run inference
sentences = [
'Can the stones/rocks/bounders obtained during excavation be used for construction if found technically satisfactory?',
'use the same for the purpose of the works either free of cost or pay the cost',
'No certificate other than Maintenance Certificate, if applicable, referred to in Clause 50 of the Conditions shall be deemed to constitute approval',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 15
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
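As a hedged sketch, these non-default settings correspond roughly to the following in Sentence Transformers 3.x (`output_dir` is a placeholder):
```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-en-contracts",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=15,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```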
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:------:|
| 2.5 | 100 | 0.0568 | 0.1144 |
| 5.0 | 200 | 0.0099 | 0.0947 |
| 7.5 | 300 | 0.0039 | 0.1039 |
| 10.0 | 400 | 0.0021 | 0.1027 |
| 12.5 | 500 | 0.0014 | 0.1017 |
| 15.0 | 600 | 0.0012 | 0.1019 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Ananthu357/Ananthus-BAAI-for-contracts10.0")
# Run inference
sentences = [
'Can the stones/rocks/bounders obtained during excavation be used for construction if found technically satisfactory?',
'use the same for the purpose of the works either free of cost or pay the cost',
'No certificate other than Maintenance Certificate, if applicable, referred to in Clause 50 of the Conditions shall be deemed to constitute approval',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 15
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:------:|
| 2.5 | 100 | 0.0568 | 0.1144 |
| 5.0 | 200 | 0.0099 | 0.0947 |
| 7.5 | 300 | 0.0039 | 0.1039 |
| 10.0 | 400 | 0.0021 | 0.1027 |
| 12.5 | 500 | 0.0014 | 0.1017 |
| 15.0 | 600 | 0.0012 | 0.1019 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-large-en", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:626", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "What determines the completion of performance of the contract?", "sentences": ["In a tender/contract, in case of any difference, contradiction, discrepancy, with regard to conditions of tender/contract, specifications, drawings, bill of quantities etc.", "The Contractor shall at all times during the progress and continuance of the works and also for the period of maintenance specified in the Tender Form", "What determines the completion of performance of the contract?"]}, {"source_sentence": "Early completion bonus", "sentences": ["In case of ambiguity, order of precedence shall be referred.", "Contractor shall be entitled for a bonus of 1% for each 30 days early completion of work.", "The Railway shall have the right to let other contracts in connection with the works. The Contractor shall afford other Contractors reasonable opportunity for the storage of their materials and the execution of their works and shall properly connect and coordinate his work with theirs. If any part of the Contractors work depends upon proper execution or result upon the work of another Contractor(s), the Contractor shall inspect and promptly report to the Engineer any defects in such works that render it unsuitable for such proper execution and results. The Contractor's failure so-to inspect and report shall constitute an acceptance of the other Contractor's work as fit and proper for the reception of his work, except as to defects which may develop in the other Contractor's work after the execution of his work."]}, {"source_sentence": "Out of scope works", "sentences": ["as to execution or quality of any work or material, or as to the measurements of the works the decision of the Engineer thereon shall be final subject to the appeal (within 7 days of such decision being intimated to the Contractor) to the Chief Engineer ", "Should works over and above those included in the contract require to be executed at the site, the Contractor shall have no right to be entrusted with the execution of such works which may be carried out by another Contractor or Contractors or by other means at the option of the Railway.", "What is the order of precedence in the case of ambiguity between drawings and technical specifications?"]}, {"source_sentence": "Deadline", "sentences": ["shall be read in conjunction with the Standard General Conditions of Contract which are referred to herein and shall be subject to modifications additions or suppression by Special Conditions of Contract and/or Special Specifications, if any, annexed to the Tender Forms.", "the sand, stone, clay ballast, earth, trees, rock", "not later than 30 days after the date of receipt"]}, {"source_sentence": "Can the stones/rocks/bounders obtained during excavation be used for construction if found technically satisfactory?", "sentences": ["use the same for the purpose of the works either free of cost or pay the cost", "Any material found during excavation should be reported to the engineer.", "No certificate other than Maintenance Certificate, if applicable, referred to in Clause 50 of the Conditions shall be deemed to constitute approval"]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,168 |
gaduhhartawan/indobart-base-v2
|
gaduhhartawan
|
summarization
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"id",
"dataset:id_liputan6",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-06T02:10:56Z |
2024-05-06T03:44:03+00:00
| 20 | 2 |
---
datasets:
- id_liputan6
language:
- id
license: mit
metrics:
- rouge
pipeline_tag: summarization
---
| null |
Non_BioNLP
|
{"datasets": ["id_liputan6"], "language": ["id"], "license": "mit", "metrics": ["rouge"], "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 42,169 |
|
gokulsrinivasagan/bert_base_lda_100_cola
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_100",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_100",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-22T12:45:23Z |
2024-11-22T12:48:43+00:00
| 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_100
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- matthews_correlation
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_100_cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- type: matthews_correlation
value: 0.0
name: Matthews Correlation
- type: accuracy
value: 0.6912751793861389
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
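In the absence of documented usage, a minimal inference sketch via the `transformers` pipeline (the label names returned depend on the fine-tuned head's config). Note that the reported Matthews correlation of 0.0 alongside 0.6913 accuracy matches the majority-class baseline on the CoLA validation split, so predictions should be interpreted with care.
```python
from transformers import pipeline

# Acceptability classification on English sentences (CoLA-style task).
classifier = pipeline(
    "text-classification",
    model="gokulsrinivasagan/bert_base_lda_100_cola",
)
print(classifier("The book was written by John."))
```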
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.9028 | 1.0 | 34 | 0.6186 | 0.0 | 0.6913 |
| 0.6233 | 2.0 | 68 | 0.6218 | 0.0 | 0.6913 |
| 0.6131 | 3.0 | 102 | 0.6215 | 0.0 | 0.6913 |
| 0.6105 | 4.0 | 136 | 0.6182 | 0.0 | 0.6913 |
| 0.6114 | 5.0 | 170 | 0.6221 | 0.0 | 0.6913 |
| 0.6098 | 6.0 | 204 | 0.6187 | 0.0 | 0.6913 |
| 0.6096 | 7.0 | 238 | 0.6207 | 0.0 | 0.6913 |
| 0.6108 | 8.0 | 272 | 0.6252 | 0.0 | 0.6913 |
| 0.6089 | 9.0 | 306 | 0.6267 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_cola
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6182
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.9028 | 1.0 | 34 | 0.6186 | 0.0 | 0.6913 |
| 0.6233 | 2.0 | 68 | 0.6218 | 0.0 | 0.6913 |
| 0.6131 | 3.0 | 102 | 0.6215 | 0.0 | 0.6913 |
| 0.6105 | 4.0 | 136 | 0.6182 | 0.0 | 0.6913 |
| 0.6114 | 5.0 | 170 | 0.6221 | 0.0 | 0.6913 |
| 0.6098 | 6.0 | 204 | 0.6187 | 0.0 | 0.6913 |
| 0.6096 | 7.0 | 238 | 0.6207 | 0.0 | 0.6913 |
| 0.6108 | 8.0 | 272 | 0.6252 | 0.0 | 0.6913 |
| 0.6089 | 9.0 | 306 | 0.6267 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_100", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["matthews_correlation", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_100_cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE COLA", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.0, "name": "Matthews Correlation"}, {"type": "accuracy", "value": 0.6912751793861389, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,170 |
catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged
|
catallama
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"catalan",
"conversational",
"ca",
"en",
"dataset:catallama/Catalan-DPO-V2",
"dataset:catallama/Catalan-Instruct-V2",
"base_model:catallama/CataLlama-v0.2-Instruct-DPO",
"base_model:finetune:catallama/CataLlama-v0.2-Instruct-DPO",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-14T11:07:47Z |
2024-07-14T15:28:03+00:00
| 14 | 0 |
---
base_model:
- catallama/CataLlama-v0.2-Instruct-SFT
- catallama/CataLlama-v0.2-Instruct-DPO
datasets:
- catallama/Catalan-DPO-V2
- catallama/Catalan-Instruct-V2
language:
- ca
- en
license: llama3
pipeline_tag: text-generation
tags:
- llama
- llama-3
- catalan
model-index:
- name: CataLlama-v0.2-Instruct-SFT-DPO-Merged
results: []
---

# CataLlama-v0.2-Instruct-SFT-DPO-Merged
**CataLlama-v0.2-Instruct-SFT-DPO-Merged** is a merge between [catallama/CataLlama-v0.2-Instruct-SFT](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT) and [catallama/CataLlama-v0.2-Instruct-DPO](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-DPO).
The resulting model scores better than its parents on both MMLU and GSM8K.
**This is an instruction fine-tuned model, optimised with DPO, proficient at the following tasks in Catalan**
- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
- *Chat*
**Model developers** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/) based on Llama-3 from Meta.
**Model Architecture** CataLlama is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimisation (DPO) to align with human preferences for helpfulness and safety.
**License** The model uses the llama-3 license available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
## Benchmarks
| Model | CataLlama-v0.2-Instruct-DPO | CataLlama-v0.2-Instruct-SFT | CataLlama-v0.2-Instruct-SFT-DPO-Merged |
| ------------------ | --------------------------- | ------------------------------- | ------------------------------------------ |
| MMLU 5 shot | 58.89 | 59.35 | **60.53** |
| GSM8K CoT 8 shot | 60.05 | 76.04 | **77.26** |
### Use with transformers
See the snippet below for usage with Transformers:
**The model follows the same prompt template as Llama-3 Instruct**
```python
import transformers
import torch
model_id = "catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Ei com estàs avui?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=1024,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Merging procedure
The merge was performed between the 32 layers of the two models, excluding the embedding, norm and the head layers.
The weights of the 32 layers were merged in equal proportion simply by calculating the average of the corresponding weights from the parent models.
The embedding, norm and head layers are copied from CataLlama-v0.2-Instruct-DPO without modification.
**This was done with a custom script, without mergekit.**
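A minimal sketch of such an equal-weight merge (assuming the standard Llama-3 `model.layers.*` state-dict naming in `transformers`; this illustrates the idea rather than reproducing the author's actual script):
```python
import torch
from transformers import AutoModelForCausalLM

sft = AutoModelForCausalLM.from_pretrained(
    "catallama/CataLlama-v0.2-Instruct-SFT", torch_dtype=torch.bfloat16
)
dpo = AutoModelForCausalLM.from_pretrained(
    "catallama/CataLlama-v0.2-Instruct-DPO", torch_dtype=torch.bfloat16
)

merged = dpo.state_dict()  # embedding, norm and head weights stay as in the DPO parent
sft_sd = sft.state_dict()
for name, tensor in merged.items():
    if name.startswith("model.layers."):  # average only the 32 decoder layers
        merged[name] = ((tensor.float() + sft_sd[name].float()) / 2).to(tensor.dtype)

dpo.load_state_dict(merged)
dpo.save_pretrained("CataLlama-v0.2-Instruct-SFT-DPO-Merged")
```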
## Intended Use
**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for adapting LLMs to new languages and to preserve rare languages as part of our world heritage.
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
| null |
Non_BioNLP
|

# CataLlama-v0.2-Instruct-SFT-DPO-Merged
**CataLlama-v0.2-Instruct-SFT-DPO-Merged** is a merge between [catallama/CataLlama-v0.2-Instruct-SFT](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT) and [catallama/CataLlama-v0.2-Instruct-DPO](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-DPO).
The resulting model scores better than its parents on both MMLU and GSM8K.
**This is an instruction fine-tuned model, optimised with DPO, proficient at the following tasks in Catalan**
- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
- *Chat*
**Model developers** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/) based on Llama-3 from Meta.
**Model Architecture** CataLlama is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimisation (DPO) to align with human preferences for helpfulness and safety.
**License** The model uses the llama-3 license available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
## Benchmarks
| Model | CataLlama-v0.2-Instruct-DPO | CataLlama-v0.2-Instruct-SFT | CataLlama-v0.2-Instruct-SFT-DPO-Merged |
| ------------------ | --------------------------- | ------------------------------- | ------------------------------------------ |
| MMLU 5 shot | 58.89 | 59.35 | **60.53** |
| GSM8K CoT 8 shot | 60.05 | 76.04 | **77.26** |
### Use with transformers
See the snippet below for usage with Transformers:
**The model follows the same prompt template as Llama-3 Instruct**
```python
import transformers
import torch
model_id = "catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Ei com estàs avui?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=1024,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Merging procedure
The merge was performed between the 32 layers of the two models, excluding the embedding, norm and the head layers.
The weights of the 32 layers were merged in equal proportion simply by calculating the average of the corresponding weights from the parent models.
The embedding, norm and head layers are copied from CataLlama-v0.2-Instruct-DPO without modification.
**This was done with a custom script, without mergekit.**
## Intended Use
**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for adapting LLMs to new languages and to preserve rare languages as part of our world heritage.
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
|
{"base_model": ["catallama/CataLlama-v0.2-Instruct-SFT", "catallama/CataLlama-v0.2-Instruct-DPO"], "datasets": ["catallama/Catalan-DPO-V2", "catallama/Catalan-Instruct-V2"], "language": ["ca", "en"], "license": "llama3", "pipeline_tag": "text-generation", "tags": ["llama", "llama-3", "catalan"], "model-index": [{"name": "CataLlama-v0.2-Instruct-SFT-DPO-Merged", "results": []}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"TRANSLATION",
"SUMMARIZATION"
] | 42,171 |
yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8
|
yklymchuk-rztk
|
sentence-similarity
|
[
"sentence-transformers",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:48198697",
"loss:RZTKMatryoshka2dLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-18T11:35:08Z |
2025-01-19T11:12:09+00:00
| 7 | 0 |
---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- dot_accuracy_10
- dot_precision_10
- dot_recall_10
- dot_ndcg_10
- dot_mrr_10
- dot_map_60
- dot_accuracy_1
- dot_accuracy_3
- dot_accuracy_5
- dot_precision_1
- dot_precision_3
- dot_precision_5
- dot_recall_1
- dot_recall_3
- dot_recall_5
- dot_map_100
- dot_ndcg_1
- dot_mrr_1
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:48198697
- loss:RZTKMatryoshka2dLoss
widget:
- source_sentence: 'query: настільні ігри'
sentences:
- 'passage: Професійна LED+UV лампа для манікюру та нарощування нігтів SUN x11 MAX
66 світловіодів 180 W біла з чорним'
- 'passage: Настольные игры Strateg Гарантия 14 дней Количество игроков От 2 до
4 Габариты_old C Стандарт (до 300x200x250 мм) Назначение Для двоих Назначение
Для компании Жанр Карточные Жанр Развлекательные Материал Картон Материал Пластик
Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель
товара Украина Язык Украинский Детский возраст 4 года Детский возраст 7 лет Детский
возраст 9 лет Детский возраст 5 лет Детский возраст 8 лет Детский возраст 10 лет
Детский возраст 6 лет Тематика Карточные Тип гарантийного талона Гарантия по чеку
Возможность доставки Почтоматы Доставка Premium Нет Наличие товара по городам
Киев и область Доставка Доставка в магазины ROZETKA'
- 'passage: Кишеньковий ніж Ganzo G619'
- source_sentence: 'query: ніж'
sentences:
- 'passage: Туристичний ніж Morakniv Robust (23050108)'
- 'passage: Лампа SUN BQ-5T PLUS 180W WHITE LED/UV для полимеризации'
- 'passage: Набор для шугаринга Madam Saharina паста твердая 400 г, тальк, шпатели'
- source_sentence: 'query: ніж'
sentences:
- 'passage: Складной Карманный Нож Финка Сталь 440 C с Гардой GR76'
- 'passage: Настольная игра Strateg Монополия по-Одесски на украинском языке (30318)
(4823113826840)'
- 'passage: Складний самовикидний ніж з запобіжником №1251'
- source_sentence: 'query: лампа для манікюру'
sentences:
- 'passage: Ножі тактичні Grand Way Гарантія 3 місяці Тип Викидні Поверхня клинка
Сатиноване покриття Піхви/чохол Немає Матеріал Метал Тип замка Frame Lock Матеріал
клинка 420 Колір Чорний Примітка *Ножі, представлені в нашому магазині, не належать
до холодної зброї за сукупністю характеристик згідно з висновком НДІ ЕКЦ МВС України
Кількість вантажних місць 1 Країна реєстрації бренда Україна Країна-виробник товару
Китай Тип гарантійного талона Гарантія по чеку Вага, г 155 Доставка Доставка в
магазини ROZETKA Вид Кишенькові'
- 'passage: Лампа SUN X54 White 54W UV/LED для полімеризації'
- 'passage: Кишеньковий ніж Ganzo G620b-1 Black-Black'
- source_sentence: 'query: ніж'
sentences:
- 'passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый
(24625/184)'
- 'passage: Лампа ESPERANZA UV LED Lamp EBN007 для полимеризации'
- 'passage: Ножи тактические BPS Knives Гарантия 14 дней Тип Нескладные Ножны/чехол
Ножны Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель
товара Украина Доставка Готов к отправке Доставка Доставка в магазины ROZETKA
Вид Охотничьи Вид Рыбацкие Вид Туристические Вид Карманные'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: 'validation matryoshka dim 768 '
type: validation--matryoshka_dim-768--
metrics:
- type: dot_accuracy_10
value: 0.507201511991658
name: Dot Accuracy 10
- type: dot_precision_10
value: 0.1442795229405631
name: Dot Precision 10
- type: dot_recall_10
value: 0.22776333806589133
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.2175905090861651
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.264842172422519
name: Dot Mrr 10
- type: dot_map_60
value: 0.184980466857725
name: Dot Map 60
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: bm full
type: bm-full
metrics:
- type: dot_accuracy_1
value: 0.4989384288747346
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.6670205237084218
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7469922151450814
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.843241330502477
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.4989384288747346
name: Dot Precision 1
- type: dot_precision_3
value: 0.5031847133757962
name: Dot Precision 3
- type: dot_precision_5
value: 0.5068648266100496
name: Dot Precision 5
- type: dot_precision_10
value: 0.5027246992215145
name: Dot Precision 10
- type: dot_recall_1
value: 0.012063598815833302
name: Dot Recall 1
- type: dot_recall_3
value: 0.03587460812559255
name: Dot Recall 3
- type: dot_recall_5
value: 0.0600948196481301
name: Dot Recall 5
- type: dot_recall_10
value: 0.11762008717816595
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5029239135947354
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.5983870016513317
name: Dot Mrr 10
- type: dot_map_100
value: 0.36856738827598634
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core uk title
type: core-uk-title
metrics:
- type: dot_accuracy_1
value: 0.6529774127310062
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8501026694045175
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.9229979466119097
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9753593429158111
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6529774127310062
name: Dot Precision 1
- type: dot_precision_3
value: 0.6303901437371663
name: Dot Precision 3
- type: dot_precision_5
value: 0.6223819301848049
name: Dot Precision 5
- type: dot_precision_10
value: 0.5197125256673512
name: Dot Precision 10
- type: dot_recall_1
value: 0.06676169794224511
name: Dot Recall 1
- type: dot_recall_3
value: 0.19317800670092614
name: Dot Recall 3
- type: dot_recall_5
value: 0.3150854347066148
name: Dot Recall 5
- type: dot_recall_10
value: 0.5133241225296425
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5841120630996868
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7654655976011217
name: Dot Mrr 10
- type: dot_map_100
value: 0.5691707661733787
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core ru title
type: core-ru-title
metrics:
- type: dot_accuracy_1
value: 0.6457905544147844
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8634496919917864
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.9240246406570842
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9774127310061602
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6457905544147844
name: Dot Precision 1
- type: dot_precision_3
value: 0.6498973305954825
name: Dot Precision 3
- type: dot_precision_5
value: 0.6244353182751541
name: Dot Precision 5
- type: dot_precision_10
value: 0.5285420944558521
name: Dot Precision 10
- type: dot_recall_1
value: 0.0658006438241068
name: Dot Recall 1
- type: dot_recall_3
value: 0.19884675900725543
name: Dot Recall 3
- type: dot_recall_5
value: 0.31688340916508567
name: Dot Recall 5
- type: dot_recall_10
value: 0.5224551236901762
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5921096158331104
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7635829177666964
name: Dot Mrr 10
- type: dot_map_100
value: 0.5766445569031706
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core uk options
type: core-uk-options
metrics:
- type: dot_accuracy_1
value: 0.5082135523613963
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7628336755646817
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8572895277207392
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9435318275154004
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5082135523613963
name: Dot Precision 1
- type: dot_precision_3
value: 0.5208761122518822
name: Dot Precision 3
- type: dot_precision_5
value: 0.5149897330595483
name: Dot Precision 5
- type: dot_precision_10
value: 0.46581108829568796
name: Dot Precision 10
- type: dot_recall_1
value: 0.05096272351073655
name: Dot Recall 1
- type: dot_recall_3
value: 0.15692054100829317
name: Dot Recall 3
- type: dot_recall_5
value: 0.25669944302251946
name: Dot Recall 5
- type: dot_recall_10
value: 0.45827616824264966
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5041110615142775
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.6539556240018252
name: Dot Mrr 10
- type: dot_map_100
value: 0.5090223360110971
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core ru options
type: core-ru-options
metrics:
- type: dot_accuracy_1
value: 0.5338809034907598
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.784394250513347
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8624229979466119
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9291581108829569
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5338809034907598
name: Dot Precision 1
- type: dot_precision_3
value: 0.5335386721423682
name: Dot Precision 3
- type: dot_precision_5
value: 0.5236139630390144
name: Dot Precision 5
- type: dot_precision_10
value: 0.46642710472279264
name: Dot Precision 10
- type: dot_recall_1
value: 0.05342347674603986
name: Dot Recall 1
- type: dot_recall_3
value: 0.16003637053582698
name: Dot Recall 3
- type: dot_recall_5
value: 0.2611603045630404
name: Dot Recall 5
- type: dot_recall_10
value: 0.45855519604538203
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5097896528840739
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.671221602946449
name: Dot Mrr 10
- type: dot_map_100
value: 0.5138834029383393
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: options uk title
type: options-uk-title
metrics:
- type: dot_accuracy_1
value: 0.7706013363028953
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.9242761692650334
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.9621380846325167
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9955456570155902
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.7706013363028953
name: Dot Precision 1
- type: dot_precision_3
value: 0.7334818114328137
name: Dot Precision 3
- type: dot_precision_5
value: 0.6984409799554565
name: Dot Precision 5
- type: dot_precision_10
value: 0.5443207126948775
name: Dot Precision 10
- type: dot_recall_1
value: 0.11723591305996653
name: Dot Recall 1
- type: dot_recall_3
value: 0.33553158052044463
name: Dot Recall 3
- type: dot_recall_5
value: 0.5277887850827717
name: Dot Recall 5
- type: dot_recall_10
value: 0.7799321564354972
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.7548631799996341
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.851841835472125
name: Dot Mrr 10
- type: dot_map_100
value: 0.729503740667324
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: options ru title
type: options-ru-title
metrics:
- type: dot_accuracy_1
value: 0.77728285077951
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.9265033407572383
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.9665924276169265
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9977728285077951
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.77728285077951
name: Dot Precision 1
- type: dot_precision_3
value: 0.7431328878990348
name: Dot Precision 3
- type: dot_precision_5
value: 0.7002227171492205
name: Dot Precision 5
- type: dot_precision_10
value: 0.5409799554565702
name: Dot Precision 10
- type: dot_recall_1
value: 0.11746039780560939
name: Dot Recall 1
- type: dot_recall_3
value: 0.34071240563445465
name: Dot Recall 3
- type: dot_recall_5
value: 0.5302607686794769
name: Dot Recall 5
- type: dot_recall_10
value: 0.7773921203765302
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.7549796461043052
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.8570208929897127
name: Dot Mrr 10
- type: dot_map_100
value: 0.731989698879385
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: options uk options
type: options-uk-options
metrics:
- type: dot_accuracy_1
value: 0.6347438752783965
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8106904231625836
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8864142538975501
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9420935412026726
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6347438752783965
name: Dot Precision 1
- type: dot_precision_3
value: 0.6139569413511506
name: Dot Precision 3
- type: dot_precision_5
value: 0.5897550111358575
name: Dot Precision 5
- type: dot_precision_10
value: 0.4873051224944321
name: Dot Precision 10
- type: dot_recall_1
value: 0.0929473706199764
name: Dot Recall 1
- type: dot_recall_3
value: 0.26723784134697276
name: Dot Recall 3
- type: dot_recall_5
value: 0.426944757791083
name: Dot Recall 5
- type: dot_recall_10
value: 0.6723484446758389
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.6425132104512846
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7379609007671365
name: Dot Mrr 10
- type: dot_map_100
value: 0.628037661942495
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: options ru options
type: options-ru-options
metrics:
- type: dot_accuracy_1
value: 0.6302895322939867
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8418708240534521
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8819599109131403
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9465478841870824
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6302895322939867
name: Dot Precision 1
- type: dot_precision_3
value: 0.6206384558277653
name: Dot Precision 3
- type: dot_precision_5
value: 0.5893095768374166
name: Dot Precision 5
- type: dot_precision_10
value: 0.4886414253897551
name: Dot Precision 10
- type: dot_recall_1
value: 0.0926663227888172
name: Dot Recall 1
- type: dot_recall_3
value: 0.2693872385965927
name: Dot Recall 3
- type: dot_recall_5
value: 0.4224152919587218
name: Dot Recall 5
- type: dot_recall_10
value: 0.6773074364388395
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.644434921208596
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.741894686605154
name: Dot Mrr 10
- type: dot_map_100
value: 0.6255501320885972
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms uk title
type: rusisms-uk-title
metrics:
- type: dot_accuracy_1
value: 0.6349206349206349
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8063492063492064
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8476190476190476
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8952380952380953
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6349206349206349
name: Dot Precision 1
- type: dot_precision_3
value: 0.6582010582010582
name: Dot Precision 3
- type: dot_precision_5
value: 0.6495238095238096
name: Dot Precision 5
- type: dot_precision_10
value: 0.621904761904762
name: Dot Precision 10
- type: dot_recall_1
value: 0.04608561750016866
name: Dot Recall 1
- type: dot_recall_3
value: 0.12030970402785328
name: Dot Recall 3
- type: dot_recall_5
value: 0.18481151303807217
name: Dot Recall 5
- type: dot_recall_10
value: 0.31615683931646443
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.6656660346071843
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7263277903754093
name: Dot Mrr 10
- type: dot_map_100
value: 0.5822318416027721
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms ru title
type: rusisms-ru-title
metrics:
- type: dot_accuracy_1
value: 0.6507936507936508
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7873015873015873
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.834920634920635
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8888888888888888
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.6507936507936508
name: Dot Precision 1
- type: dot_precision_3
value: 0.6476190476190476
name: Dot Precision 3
- type: dot_precision_5
value: 0.6419047619047619
name: Dot Precision 5
- type: dot_precision_10
value: 0.6174603174603175
name: Dot Precision 10
- type: dot_recall_1
value: 0.04386105773703947
name: Dot Recall 1
- type: dot_recall_3
value: 0.12144099224790127
name: Dot Recall 3
- type: dot_recall_5
value: 0.18110381262728556
name: Dot Recall 5
- type: dot_recall_10
value: 0.3096903539599102
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.659607690171676
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7312055933484504
name: Dot Mrr 10
- type: dot_map_100
value: 0.5842765581060543
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms uk options
type: rusisms-uk-options
metrics:
- type: dot_accuracy_1
value: 0.5142857142857142
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.6349206349206349
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7111111111111111
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.7873015873015873
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5142857142857142
name: Dot Precision 1
- type: dot_precision_3
value: 0.5164021164021164
name: Dot Precision 3
- type: dot_precision_5
value: 0.5187301587301587
name: Dot Precision 5
- type: dot_precision_10
value: 0.5092063492063492
name: Dot Precision 10
- type: dot_recall_1
value: 0.04086070233812709
name: Dot Recall 1
- type: dot_recall_3
value: 0.09125589967667369
name: Dot Recall 3
- type: dot_recall_5
value: 0.1391826787319033
name: Dot Recall 5
- type: dot_recall_10
value: 0.25493013951881177
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5432280320538901
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.5967384731670444
name: Dot Mrr 10
- type: dot_map_100
value: 0.5008966606414169
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms ru options
type: rusisms-ru-options
metrics:
- type: dot_accuracy_1
value: 0.5079365079365079
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.6507936507936508
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7015873015873015
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5079365079365079
name: Dot Precision 1
- type: dot_precision_3
value: 0.5206349206349207
name: Dot Precision 3
- type: dot_precision_5
value: 0.5136507936507937
name: Dot Precision 5
- type: dot_precision_10
value: 0.5123809523809524
name: Dot Precision 10
- type: dot_recall_1
value: 0.03755920266398328
name: Dot Recall 1
- type: dot_recall_3
value: 0.09275752986565318
name: Dot Recall 3
- type: dot_recall_5
value: 0.13889507452669533
name: Dot Recall 5
- type: dot_recall_10
value: 0.24837942719280243
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5416125065389893
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.5956512975560592
name: Dot Mrr 10
- type: dot_map_100
value: 0.5022978361138828
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms corrected uk title
type: rusisms_corrected-uk-title
metrics:
- type: dot_accuracy_1
value: 0.7468354430379747
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8512658227848101
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8987341772151899
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.930379746835443
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.7468354430379747
name: Dot Precision 1
- type: dot_precision_3
value: 0.7267932489451476
name: Dot Precision 3
- type: dot_precision_5
value: 0.7120253164556962
name: Dot Precision 5
- type: dot_precision_10
value: 0.6718354430379747
name: Dot Precision 10
- type: dot_recall_1
value: 0.05413847530306109
name: Dot Recall 1
- type: dot_recall_3
value: 0.13549638337016875
name: Dot Recall 3
- type: dot_recall_5
value: 0.20640018757390072
name: Dot Recall 5
- type: dot_recall_10
value: 0.3504834628281357
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.7301763805249227
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.8087590415913197
name: Dot Mrr 10
- type: dot_map_100
value: 0.6422358164589997
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms corrected ru title
type: rusisms_corrected-ru-title
metrics:
- type: dot_accuracy_1
value: 0.7278481012658228
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.8386075949367089
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8765822784810127
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9240506329113924
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.7278481012658228
name: Dot Precision 1
- type: dot_precision_3
value: 0.7215189873417721
name: Dot Precision 3
- type: dot_precision_5
value: 0.7037974683544304
name: Dot Precision 5
- type: dot_precision_10
value: 0.6715189873417721
name: Dot Precision 10
- type: dot_recall_1
value: 0.05456593757600845
name: Dot Recall 1
- type: dot_recall_3
value: 0.13973934661722753
name: Dot Recall 3
- type: dot_recall_5
value: 0.20571322767056138
name: Dot Recall 5
- type: dot_recall_10
value: 0.34937377972766176
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.7270703377453234
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.7933707554751858
name: Dot Mrr 10
- type: dot_map_100
value: 0.6461514090978902
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms corrected uk options
type: rusisms_corrected-uk-options
metrics:
- type: dot_accuracy_1
value: 0.5886075949367089
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7278481012658228
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7816455696202531
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8544303797468354
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5886075949367089
name: Dot Precision 1
- type: dot_precision_3
value: 0.5843881856540085
name: Dot Precision 3
- type: dot_precision_5
value: 0.5867088607594936
name: Dot Precision 5
- type: dot_precision_10
value: 0.5727848101265823
name: Dot Precision 10
- type: dot_recall_1
value: 0.042248255233976226
name: Dot Recall 1
- type: dot_recall_3
value: 0.11213843292540425
name: Dot Recall 3
- type: dot_recall_5
value: 0.16876113275591878
name: Dot Recall 5
- type: dot_recall_10
value: 0.29811718733233894
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.6143319751035731
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.6718492565802692
name: Dot Mrr 10
- type: dot_map_100
value: 0.5714145545212894
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: rusisms corrected ru options
type: rusisms_corrected-ru-options
metrics:
- type: dot_accuracy_1
value: 0.5791139240506329
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7278481012658228
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7911392405063291
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8639240506329114
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5791139240506329
name: Dot Precision 1
- type: dot_precision_3
value: 0.580168776371308
name: Dot Precision 3
- type: dot_precision_5
value: 0.5854430379746836
name: Dot Precision 5
- type: dot_precision_10
value: 0.5708860759493671
name: Dot Precision 10
- type: dot_recall_1
value: 0.04477724151203175
name: Dot Recall 1
- type: dot_recall_3
value: 0.11025620974877864
name: Dot Recall 3
- type: dot_recall_5
value: 0.16476745621688715
name: Dot Recall 5
- type: dot_recall_10
value: 0.2890210600181688
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.6090163972597312
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.6657725537472372
name: Dot Mrr 10
- type: dot_map_100
value: 0.5706605720157263
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core typos uk title
type: core_typos-uk-title
metrics:
- type: dot_accuracy_1
value: 0.5359342915811088
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7525667351129364
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.837782340862423
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.9075975359342916
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5359342915811088
name: Dot Precision 1
- type: dot_precision_3
value: 0.5321697467488021
name: Dot Precision 3
- type: dot_precision_5
value: 0.5271047227926079
name: Dot Precision 5
- type: dot_precision_10
value: 0.44250513347022585
name: Dot Precision 10
- type: dot_recall_1
value: 0.0548592291269547
name: Dot Recall 1
- type: dot_recall_3
value: 0.16390877679982643
name: Dot Recall 3
- type: dot_recall_5
value: 0.26735461747389516
name: Dot Recall 5
- type: dot_recall_10
value: 0.43859522796592487
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.49578055328230314
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.6623574036048365
name: Dot Mrr 10
- type: dot_map_100
value: 0.4754406732534669
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core typos ru title
type: core_typos-ru-title
metrics:
- type: dot_accuracy_1
value: 0.5585215605749486
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.7659137577002053
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.8367556468172485
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8993839835728953
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.5585215605749486
name: Dot Precision 1
- type: dot_precision_3
value: 0.5468856947296372
name: Dot Precision 3
- type: dot_precision_5
value: 0.5338809034907598
name: Dot Precision 5
- type: dot_precision_10
value: 0.44455852156057496
name: Dot Precision 10
- type: dot_recall_1
value: 0.05721534893687328
name: Dot Recall 1
- type: dot_recall_3
value: 0.16884014376465156
name: Dot Recall 3
- type: dot_recall_5
value: 0.27158115568853575
name: Dot Recall 5
- type: dot_recall_10
value: 0.4409582582165322
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.5016521479115571
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.6763954075812391
name: Dot Mrr 10
- type: dot_map_100
value: 0.4823271804638839
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core typos uk options
type: core_typos-uk-options
metrics:
- type: dot_accuracy_1
value: 0.42813141683778233
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.6611909650924025
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7494866529774127
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8470225872689938
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.42813141683778233
name: Dot Precision 1
- type: dot_precision_3
value: 0.4277891854893908
name: Dot Precision 3
- type: dot_precision_5
value: 0.4260780287474333
name: Dot Precision 5
- type: dot_precision_10
value: 0.3834702258726899
name: Dot Precision 10
- type: dot_recall_1
value: 0.042773227346394396
name: Dot Recall 1
- type: dot_recall_3
value: 0.1286119443996301
name: Dot Recall 3
- type: dot_recall_5
value: 0.21222453983053088
name: Dot Recall 5
- type: dot_recall_10
value: 0.3781351048865242
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.41562231410456363
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.5610007007594269
name: Dot Mrr 10
- type: dot_map_100
value: 0.414986485692781
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: core typos ru options
type: core_typos-ru-options
metrics:
- type: dot_accuracy_1
value: 0.419917864476386
name: Dot Accuracy 1
- type: dot_accuracy_3
value: 0.6591375770020534
name: Dot Accuracy 3
- type: dot_accuracy_5
value: 0.7484599589322382
name: Dot Accuracy 5
- type: dot_accuracy_10
value: 0.8552361396303901
name: Dot Accuracy 10
- type: dot_precision_1
value: 0.419917864476386
name: Dot Precision 1
- type: dot_precision_3
value: 0.4318959616700889
name: Dot Precision 3
- type: dot_precision_5
value: 0.4273100616016427
name: Dot Precision 5
- type: dot_precision_10
value: 0.38305954825462013
name: Dot Precision 10
- type: dot_recall_1
value: 0.04174381486103916
name: Dot Recall 1
- type: dot_recall_3
value: 0.13009388847654338
name: Dot Recall 3
- type: dot_recall_5
value: 0.21404733730724185
name: Dot Recall 5
- type: dot_recall_10
value: 0.37828240427286214
name: Dot Recall 10
- type: dot_ndcg_10
value: 0.41609465156770326
name: Dot Ndcg 10
- type: dot_mrr_10
value: 0.5601051953978028
name: Dot Mrr 10
- type: dot_map_100
value: 0.4159363797020089
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: 'bm full matryoshka dim 768 '
type: bm-full--matryoshka_dim-768--
metrics:
- type: dot_accuracy_1
value: 0.4989384288747346
name: Dot Accuracy 1
- type: dot_precision_1
value: 0.4989384288747346
name: Dot Precision 1
- type: dot_recall_1
value: 0.012063598815833302
name: Dot Recall 1
- type: dot_ndcg_1
value: 0.4989384288747346
name: Dot Ndcg 1
- type: dot_mrr_1
value: 0.4989384288747346
name: Dot Mrr 1
- type: dot_map_100
value: 0.36856738827598634
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: 'bm full matryoshka dim 512 '
type: bm-full--matryoshka_dim-512--
metrics:
- type: dot_accuracy_1
value: 0.5
name: Dot Accuracy 1
- type: dot_precision_1
value: 0.5
name: Dot Precision 1
- type: dot_recall_1
value: 0.011990029695710535
name: Dot Recall 1
- type: dot_ndcg_1
value: 0.5
name: Dot Ndcg 1
- type: dot_mrr_1
value: 0.5
name: Dot Mrr 1
- type: dot_map_100
value: 0.36547264154192
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: 'bm full matryoshka dim 256 '
type: bm-full--matryoshka_dim-256--
metrics:
- type: dot_accuracy_1
value: 0.4964614295824487
name: Dot Accuracy 1
- type: dot_precision_1
value: 0.4964614295824487
name: Dot Precision 1
- type: dot_recall_1
value: 0.011900603068482157
name: Dot Recall 1
- type: dot_ndcg_1
value: 0.4964614295824487
name: Dot Ndcg 1
- type: dot_mrr_1
value: 0.4964614295824487
name: Dot Mrr 1
- type: dot_map_100
value: 0.3530775160276886
name: Dot Map 100
- task:
type: rztkinformation-retrieval
name: RZTKInformation Retrieval
dataset:
name: 'bm full matryoshka dim 128 '
type: bm-full--matryoshka_dim-128--
metrics:
- type: dot_accuracy_1
value: 0.4872611464968153
name: Dot Accuracy 1
- type: dot_precision_1
value: 0.4872611464968153
name: Dot Precision 1
- type: dot_recall_1
value: 0.011534666531071948
name: Dot Recall 1
- type: dot_ndcg_1
value: 0.4872611464968153
name: Dot Ndcg 1
- type: dot_mrr_1
value: 0.4872611464968153
name: Dot Mrr 1
- type: dot_map_100
value: 0.3307544210079215
name: Dot Map 100
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the rozetka_positive_pairs dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- rozetka_positive_pairs
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
RZTKSentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
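Because the module stack ends in `Normalize()`, every embedding is L2-normalized, so the configured dot-product similarity is numerically the same as cosine similarity. A minimal sketch to verify this (the model ID is the one used in the Usage section below; the repo is private per the training config, so loading assumes you have access):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")
emb = model.encode(["query: ніж", "passage: Кухонний ніж"])  # two toy inputs

# The final Normalize() module makes every vector unit-length, so the
# plain dot product already equals cosine similarity.
dot = float(emb[0] @ emb[1])
cos = dot / float(np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(np.isclose(dot, cos), round(float(np.linalg.norm(emb[0])), 4))  # True 1.0
```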
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")
# Run inference
sentences = [
'query: ніж',
'passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый (24625/184)',
'passage: Ножи тактические BPS Knives Гарантия 14 дней Тип Нескладные Ножны/чехол Ножны Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель товара Украина Доставка Готов к отправке Доставка Доставка в магазины ROZETKA Вид Охотничьи Вид Рыбацкие Вид Туристические Вид Карманные',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
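In a retrieval setting the E5-style prefixes matter: the model was trained with the prompts `{'query': 'query: ', 'text': 'passage: '}` (see Training Hyperparameters below), so queries and documents should keep those prefixes at inference time. A hedged ranking sketch over a toy catalogue (strings borrowed from the training samples shown later in this card):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")

query = "query: клей огнеупорн"
passages = [
    "passage: Герметик вогнетривкий Penosil Premium +1500 °C Sealant Black 310 мл",
    "passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм",
]

q_emb = model.encode([query])
p_emb = model.encode(passages)

# Dot product on unit vectors; higher score = better match.
scores = model.similarity(q_emb, p_emb)[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))  # the sealant should rank first
```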
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### RZTKInformation Retrieval
* Dataset: `validation--matryoshka_dim-768--`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | Value |
|:-----------------|:-----------|
| dot_accuracy_10 | 0.5072 |
| dot_precision_10 | 0.1443 |
| dot_recall_10 | 0.2278 |
| **dot_ndcg_10** | **0.2176** |
| dot_mrr_10 | 0.2648 |
| dot_map_60 | 0.185 |
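For orientation, `dot_ndcg_10` is standard NDCG@10 computed over dot-product rankings; the other metrics follow the usual IR definitions at their respective cutoffs. An illustrative, generic implementation of the headline metric (not the evaluator's actual code):
```python
import numpy as np

def ndcg_at_10(ranked_gains, all_gains):
    """NDCG@10 for a single query.

    ranked_gains: relevance gains of the documents in model-ranked order.
    all_gains:    gains of every relevant document (for the ideal ranking).
    """
    ranked = np.asarray(ranked_gains[:10], dtype=float)
    disc = 1.0 / np.log2(np.arange(2, ranked.size + 2))
    dcg = float((ranked * disc).sum())
    ideal = np.sort(np.asarray(all_gains, dtype=float))[::-1][:10]
    idcg = float((ideal / np.log2(np.arange(2, ideal.size + 2))).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Two relevant documents, retrieved at ranks 1 and 3:
print(round(ndcg_at_10([1, 0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1]), 2))  # 0.92
```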
#### RZTKInformation Retrieval
* Datasets: `bm-full`, `core-uk-title`, `core-ru-title`, `core-uk-options`, `core-ru-options`, `options-uk-title`, `options-ru-title`, `options-uk-options`, `options-ru-options`, `rusisms-uk-title`, `rusisms-ru-title`, `rusisms-uk-options`, `rusisms-ru-options`, `rusisms_corrected-uk-title`, `rusisms_corrected-ru-title`, `rusisms_corrected-uk-options`, `rusisms_corrected-ru-options`, `core_typos-uk-title`, `core_typos-ru-title`, `core_typos-uk-options` and `core_typos-ru-options`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | bm-full | core-uk-title | core-ru-title | core-uk-options | core-ru-options | options-uk-title | options-ru-title | options-uk-options | options-ru-options | rusisms-uk-title | rusisms-ru-title | rusisms-uk-options | rusisms-ru-options | rusisms_corrected-uk-title | rusisms_corrected-ru-title | rusisms_corrected-uk-options | rusisms_corrected-ru-options | core_typos-uk-title | core_typos-ru-title | core_typos-uk-options | core_typos-ru-options |
|:-----------------|:-----------|:--------------|:--------------|:----------------|:----------------|:-----------------|:-----------------|:-------------------|:-------------------|:-----------------|:-----------------|:-------------------|:-------------------|:---------------------------|:---------------------------|:-----------------------------|:-----------------------------|:--------------------|:--------------------|:----------------------|:----------------------|
| dot_accuracy_1 | 0.4989 | 0.653 | 0.6458 | 0.5082 | 0.5339 | 0.7706 | 0.7773 | 0.6347 | 0.6303 | 0.6349 | 0.6508 | 0.5143 | 0.5079 | 0.7468 | 0.7278 | 0.5886 | 0.5791 | 0.5359 | 0.5585 | 0.4281 | 0.4199 |
| dot_accuracy_3 | 0.667 | 0.8501 | 0.8634 | 0.7628 | 0.7844 | 0.9243 | 0.9265 | 0.8107 | 0.8419 | 0.8063 | 0.7873 | 0.6349 | 0.6508 | 0.8513 | 0.8386 | 0.7278 | 0.7278 | 0.7526 | 0.7659 | 0.6612 | 0.6591 |
| dot_accuracy_5 | 0.747 | 0.923 | 0.924 | 0.8573 | 0.8624 | 0.9621 | 0.9666 | 0.8864 | 0.882 | 0.8476 | 0.8349 | 0.7111 | 0.7016 | 0.8987 | 0.8766 | 0.7816 | 0.7911 | 0.8378 | 0.8368 | 0.7495 | 0.7485 |
| dot_accuracy_10 | 0.8432 | 0.9754 | 0.9774 | 0.9435 | 0.9292 | 0.9955 | 0.9978 | 0.9421 | 0.9465 | 0.8952 | 0.8889 | 0.7873 | 0.8 | 0.9304 | 0.9241 | 0.8544 | 0.8639 | 0.9076 | 0.8994 | 0.847 | 0.8552 |
| dot_precision_1 | 0.4989 | 0.653 | 0.6458 | 0.5082 | 0.5339 | 0.7706 | 0.7773 | 0.6347 | 0.6303 | 0.6349 | 0.6508 | 0.5143 | 0.5079 | 0.7468 | 0.7278 | 0.5886 | 0.5791 | 0.5359 | 0.5585 | 0.4281 | 0.4199 |
| dot_precision_3 | 0.5032 | 0.6304 | 0.6499 | 0.5209 | 0.5335 | 0.7335 | 0.7431 | 0.614 | 0.6206 | 0.6582 | 0.6476 | 0.5164 | 0.5206 | 0.7268 | 0.7215 | 0.5844 | 0.5802 | 0.5322 | 0.5469 | 0.4278 | 0.4319 |
| dot_precision_5 | 0.5069 | 0.6224 | 0.6244 | 0.515 | 0.5236 | 0.6984 | 0.7002 | 0.5898 | 0.5893 | 0.6495 | 0.6419 | 0.5187 | 0.5137 | 0.712 | 0.7038 | 0.5867 | 0.5854 | 0.5271 | 0.5339 | 0.4261 | 0.4273 |
| dot_precision_10 | 0.5027 | 0.5197 | 0.5285 | 0.4658 | 0.4664 | 0.5443 | 0.541 | 0.4873 | 0.4886 | 0.6219 | 0.6175 | 0.5092 | 0.5124 | 0.6718 | 0.6715 | 0.5728 | 0.5709 | 0.4425 | 0.4446 | 0.3835 | 0.3831 |
| dot_recall_1 | 0.0121 | 0.0668 | 0.0658 | 0.051 | 0.0534 | 0.1172 | 0.1175 | 0.0929 | 0.0927 | 0.0461 | 0.0439 | 0.0409 | 0.0376 | 0.0541 | 0.0546 | 0.0422 | 0.0448 | 0.0549 | 0.0572 | 0.0428 | 0.0417 |
| dot_recall_3 | 0.0359 | 0.1932 | 0.1988 | 0.1569 | 0.16 | 0.3355 | 0.3407 | 0.2672 | 0.2694 | 0.1203 | 0.1214 | 0.0913 | 0.0928 | 0.1355 | 0.1397 | 0.1121 | 0.1103 | 0.1639 | 0.1688 | 0.1286 | 0.1301 |
| dot_recall_5 | 0.0601 | 0.3151 | 0.3169 | 0.2567 | 0.2612 | 0.5278 | 0.5303 | 0.4269 | 0.4224 | 0.1848 | 0.1811 | 0.1392 | 0.1389 | 0.2064 | 0.2057 | 0.1688 | 0.1648 | 0.2674 | 0.2716 | 0.2122 | 0.214 |
| dot_recall_10 | 0.1176 | 0.5133 | 0.5225 | 0.4583 | 0.4586 | 0.7799 | 0.7774 | 0.6723 | 0.6773 | 0.3162 | 0.3097 | 0.2549 | 0.2484 | 0.3505 | 0.3494 | 0.2981 | 0.289 | 0.4386 | 0.441 | 0.3781 | 0.3783 |
| **dot_ndcg_10** | **0.5029** | **0.5841** | **0.5921** | **0.5041** | **0.5098** | **0.7549** | **0.755** | **0.6425** | **0.6444** | **0.6657** | **0.6596** | **0.5432** | **0.5416** | **0.7302** | **0.7271** | **0.6143** | **0.609** | **0.4958** | **0.5017** | **0.4156** | **0.4161** |
| dot_mrr_10 | 0.5984 | 0.7655 | 0.7636 | 0.654 | 0.6712 | 0.8518 | 0.857 | 0.738 | 0.7419 | 0.7263 | 0.7312 | 0.5967 | 0.5957 | 0.8088 | 0.7934 | 0.6718 | 0.6658 | 0.6624 | 0.6764 | 0.561 | 0.5601 |
| dot_map_100 | 0.3686 | 0.5692 | 0.5766 | 0.509 | 0.5139 | 0.7295 | 0.732 | 0.628 | 0.6256 | 0.5822 | 0.5843 | 0.5009 | 0.5023 | 0.6422 | 0.6462 | 0.5714 | 0.5707 | 0.4754 | 0.4823 | 0.415 | 0.4159 |
#### RZTKInformation Retrieval
* Datasets: `bm-full--matryoshka_dim-768--`, `bm-full--matryoshka_dim-512--`, `bm-full--matryoshka_dim-256--` and `bm-full--matryoshka_dim-128--`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | bm-full--matryoshka_dim-768-- | bm-full--matryoshka_dim-512-- | bm-full--matryoshka_dim-256-- | bm-full--matryoshka_dim-128-- |
|:----------------|:------------------------------|:------------------------------|:------------------------------|:------------------------------|
| dot_accuracy_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_precision_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_recall_1 | 0.0121 | 0.012 | 0.0119 | 0.0115 |
| **dot_ndcg_1** | **0.4989** | **0.5** | **0.4965** | **0.4873** |
| dot_mrr_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_map_100 | 0.3686 | 0.3655 | 0.3531 | 0.3308 |
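The table shows NDCG@1 holding up well down to 128 dimensions, which is the point of the Matryoshka training: embeddings can be truncated for cheaper storage and search. With the public sentence-transformers API this is a one-line change (a sketch; `truncate_dim` should be one of the trained dims 768/512/256/128 to match the numbers above):
```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings.
model = SentenceTransformer(
    "yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8",
    truncate_dim=256,
)
emb = model.encode(["query: ніж"])
print(emb.shape)  # (1, 256)
```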
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### rozetka_positive_pairs
* Dataset: rozetka_positive_pairs
* Size: 48,198,697 training samples
* Columns: <code>query</code> and <code>text</code>
* Approximate statistics based on the first 1000 samples:
| | query | text |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 61.91 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | text |
|:------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: клей огнеупорн</code> | <code>passage: Герметик вогнетривкий Penosil Premium +1500 °C Sealant Black 310 мл (H2130)</code> |
| <code>query: клей огнеупорн</code> | <code>passage: Герметики Penosil Гарантія 14 днів Габарити С Стандарт (до 300x200x250 мм) Кількість вантажних місць 1 Країна реєстрації бренда Естонія Кількість предметів, шт 1 Країна-виробник товару Естонія Вид_old Герметик Колір Чорний Об'єм 310 мл Тип гарантійного талона Без гарантійного талона Можливість доставки Почтомати Доставка Premium Немає Наявність товара по містах Київ і область Доставка Доставка в магазини ROZETKA</code> |
| <code>query: клей огнеупорн</code> | <code>passage: Герметик огнеупорный Penosil Premium +1500°C Sealant Black 310 мл (H2130)</code> |
* Loss: <code>sentence_transformers_training.model.matryoshka2d_loss.RZTKMatryoshka2dLoss</code> with these parameters:
```json
{
"loss": "RZTKMultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
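`RZTKMatryoshka2dLoss` is an in-house wrapper, but its parameters mirror the public `Matryoshka2dLoss` in sentence-transformers, which wraps an in-batch-negatives `MultipleNegativesRankingLoss`. A rough open-source equivalent (an approximation of the setup, not the exact training code):
```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-base")

# 2D Matryoshka: each step trains on a sampled embedding dim (and, via
# n_layers_per_step, on intermediate transformer layers) with a KL term.
base_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.Matryoshka2dLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128],
    matryoshka_weights=[1, 1, 1, 1],
    n_layers_per_step=1,
    n_dims_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)
```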
### Evaluation Dataset
#### rozetka_positive_pairs
* Dataset: rozetka_positive_pairs
* Size: 3,309,787 evaluation samples
* Columns: <code>query</code> and <code>text</code>
* Approximate statistics based on the first 1000 samples:
| | query | text |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 7.58 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 72.78 tokens</li><li>max: 264 tokens</li></ul> |
* Samples:
| query | text |
|:------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: ніж</code> | <code>passage: Мисливський Ніж з Нержавіючої Сталі HK6 SSH BPS Knives - Ніж для риболовлі, мисливства, походів</code> |
| <code>query: ніж</code> | <code>passage: Ножі тактичні BPS Knives Гарантія 14 днів Тип Нескладані Кількість вантажних місць 1 Країна реєстрації бренда Україна Країна-виробник товару Україна Вид Туристичні Вид Авторські вироби Вид Сувенірні Вид Мисливські Вид Рибальські Вид Клинки</code> |
| <code>query: ніж</code> | <code>passage: Охотничий Нож из Нержавеющей Стали HK6 SSH BPS Knives - Нож для рыбалки, охоты, походов</code> |
* Loss: <code>sentence_transformers_training.model.matryoshka2d_loss.RZTKMatryoshka2dLoss</code> with these parameters:
```json
{
"loss": "RZTKMultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 88
- `per_device_eval_batch_size`: 88
- `learning_rate`: 2e-05
- `num_train_epochs`: 1.0
- `warmup_ratio`: 0.1
- `bf16`: True
- `bf16_full_eval`: True
- `tf32`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `optim`: adafactor
- `push_to_hub`: True
- `hub_model_id`: yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8
- `hub_private_repo`: True
- `prompts`: {'query': 'query: ', 'text': 'passage: '}
- `batch_sampler`: no_duplicates
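These map one-to-one onto `SentenceTransformerTrainingArguments` fields; a hedged sketch of the equivalent configuration (assumes a recent sentence-transformers release where `prompts` and `batch_sampler` are supported; hub and push settings omitted):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-base-matryoshka2d-mnr-8",
    eval_strategy="steps",
    per_device_train_batch_size=88,
    per_device_eval_batch_size=88,
    learning_rate=2e-5,
    num_train_epochs=1.0,
    warmup_ratio=0.1,
    bf16=True,
    bf16_full_eval=True,
    tf32=True,
    dataloader_num_workers=4,
    load_best_model_at_end=True,
    optim="adafactor",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    prompts={"query": "query: ", "text": "passage: "},
)
```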
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 88
- `per_device_eval_batch_size`: 88
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: True
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adafactor
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8
- `hub_strategy`: every_save
- `hub_private_repo`: True
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'query: ', 'text': 'passage: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `ddp_static_graph`: False
- `ddp_comm_hook`: bf16
- `gradient_as_bucket_view`: False
- `num_proc`: 30
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | validation--matryoshka_dim-768--_dot_ndcg_10 | bm-full_dot_ndcg_10 | core-uk-title_dot_ndcg_10 | core-ru-title_dot_ndcg_10 | core-uk-options_dot_ndcg_10 | core-ru-options_dot_ndcg_10 | options-uk-title_dot_ndcg_10 | options-ru-title_dot_ndcg_10 | options-uk-options_dot_ndcg_10 | options-ru-options_dot_ndcg_10 | rusisms-uk-title_dot_ndcg_10 | rusisms-ru-title_dot_ndcg_10 | rusisms-uk-options_dot_ndcg_10 | rusisms-ru-options_dot_ndcg_10 | rusisms_corrected-uk-title_dot_ndcg_10 | rusisms_corrected-ru-title_dot_ndcg_10 | rusisms_corrected-uk-options_dot_ndcg_10 | rusisms_corrected-ru-options_dot_ndcg_10 | core_typos-uk-title_dot_ndcg_10 | core_typos-ru-title_dot_ndcg_10 | core_typos-uk-options_dot_ndcg_10 | core_typos-ru-options_dot_ndcg_10 | bm-full--matryoshka_dim-768--_dot_ndcg_1 | bm-full--matryoshka_dim-512--_dot_ndcg_1 | bm-full--matryoshka_dim-256--_dot_ndcg_1 | bm-full--matryoshka_dim-128--_dot_ndcg_1 |
|:-------:|:----------:|:-------------:|:---------------:|:--------------------------------------------:|:-------------------:|:-------------------------:|:-------------------------:|:---------------------------:|:---------------------------:|:----------------------------:|:----------------------------:|:------------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|:------------------------------:|:------------------------------:|:--------------------------------------:|:--------------------------------------:|:----------------------------------------:|:----------------------------------------:|:-------------------------------:|:-------------------------------:|:---------------------------------:|:---------------------------------:|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:|
| 0.0050 | 685 | 4.8405 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0100 | 1370 | 4.7324 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0150 | 2055 | 4.2883 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0200 | 2740 | 3.7416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0250 | 3425 | 2.9713 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0300 | 4110 | 2.6176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0350 | 4795 | 2.3556 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0400 | 5480 | 2.3165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0450 | 6165 | 2.1682 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0500 | 6850 | 2.0736 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0550 | 7535 | 2.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0600 | 8220 | 1.9518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0650 | 8905 | 1.8403 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0700 | 9590 | 1.7593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0750 | 10275 | 1.6818 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0800 | 10960 | 1.6754 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0850 | 11645 | 1.6262 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0900 | 12330 | 1.6023 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0950 | 13015 | 1.5401 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1000 | 13693 | - | 1.0477 | 0.1937 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1001 | 13700 | 1.4887 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1051 | 14385 | 1.4678 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1101 | 15070 | 1.3952 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1151 | 15755 | 1.3644 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1201 | 16440 | 1.3647 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1251 | 17125 | 1.3699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1301 | 17810 | 1.3937 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1351 | 18495 | 1.2854 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1401 | 19180 | 1.2726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1451 | 19865 | 1.231 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1501 | 20550 | 1.2434 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1551 | 21235 | 1.2331 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1601 | 21920 | 1.2234 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1651 | 22605 | 1.2241 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1701 | 23290 | 1.152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1751 | 23975 | 1.2317 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1801 | 24660 | 1.154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1851 | 25345 | 1.1455 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1901 | 26030 | 1.1682 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1951 | 26715 | 1.0611 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2000 | 27386 | - | 0.7107 | 0.2081 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2001 | 27400 | 1.1566 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2051 | 28085 | 1.0885 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2101 | 28770 | 1.1416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2151 | 29455 | 1.0956 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2201 | 30140 | 1.1213 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2251 | 30825 | 1.0704 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2301 | 31510 | 1.0939 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2351 | 32195 | 1.1006 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2401 | 32880 | 1.0622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2451 | 33565 | 1.0879 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2501 | 34250 | 1.0484 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2551 | 34935 | 1.0832 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2601 | 35620 | 1.0217 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2651 | 36305 | 1.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2701 | 36990 | 1.0158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2751 | 37675 | 0.9985 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2801 | 38360 | 1.0457 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 39045 | 1.0438 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2902 | 39730 | 1.0672 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2952 | 40415 | 1.0156 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3000 | 41079 | - | 0.6050 | 0.2112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3002 | 41100 | 1.0236 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3052 | 41785 | 0.9918 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3102 | 42470 | 0.9986 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3152 | 43155 | 0.99 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3202 | 43840 | 1.008 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3252 | 44525 | 1.0002 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3302 | 45210 | 0.9865 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3352 | 45895 | 1.0297 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3402 | 46580 | 0.9625 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3452 | 47265 | 0.9404 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3502 | 47950 | 0.9361 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3552 | 48635 | 0.9837 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3602 | 49320 | 1.0064 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3652 | 50005 | 1.0055 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3702 | 50690 | 0.9738 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3752 | 51375 | 0.9617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3802 | 52060 | 0.9404 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3852 | 52745 | 0.9589 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3902 | 53430 | 0.9787 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3952 | 54115 | 0.9493 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4000 | 54772 | - | 0.5548 | 0.2143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4002 | 54800 | 0.9617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4052 | 55485 | 0.9339 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4102 | 56170 | 0.9481 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4152 | 56855 | 0.9051 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4202 | 57540 | 0.9155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4252 | 58225 | 0.9018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4302 | 58910 | 0.9215 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4352 | 59595 | 0.9184 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4402 | 60280 | 0.9289 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4452 | 60965 | 0.9427 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4502 | 61650 | 0.9684 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4552 | 62335 | 0.9303 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4602 | 63020 | 0.9457 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4652 | 63705 | 0.9265 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4702 | 64390 | 0.8964 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4752 | 65075 | 0.9168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4803 | 65760 | 0.9077 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4853 | 66445 | 0.8908 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4903 | 67130 | 0.9193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4953 | 67815 | 0.9146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5000 | 68465 | - | 0.5362 | 0.2161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5003 | 68500 | 0.9219 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5053 | 69185 | 0.9174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5103 | 69870 | 0.909 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5153 | 70555 | 0.9299 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5203 | 71240 | 0.9047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5253 | 71925 | 0.9051 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5303 | 72610 | 0.9068 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5353 | 73295 | 0.8746 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5403 | 73980 | 0.9063 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5453 | 74665 | 0.9127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5503 | 75350 | 0.8763 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5553 | 76035 | 0.8984 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5603 | 76720 | 0.9007 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5653 | 77405 | 0.9071 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5703 | 78090 | 0.931 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5753 | 78775 | 0.9049 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5803 | 79460 | 0.8839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5853 | 80145 | 0.9217 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5903 | 80830 | 0.9099 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5953 | 81515 | 0.9126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6000 | 82158 | - | 0.5330 | 0.2171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6003 | 82200 | 0.8912 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6053 | 82885 | 0.878 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6103 | 83570 | 0.9114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6153 | 84255 | 0.9083 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6203 | 84940 | 0.9075 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6253 | 85625 | 0.8737 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6303 | 86310 | 0.8795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6353 | 86995 | 0.8832 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6403 | 87680 | 0.8986 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6453 | 88365 | 0.8635 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6503 | 89050 | 0.901 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6553 | 89735 | 0.8941 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6603 | 90420 | 0.8726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6653 | 91105 | 0.9078 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6704 | 91790 | 0.8888 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6754 | 92475 | 0.8544 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6804 | 93160 | 0.9402 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6854 | 93845 | 0.915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6904 | 94530 | 0.8781 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6954 | 95215 | 0.8752 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7000 | 95851 | - | 0.5202 | 0.2182 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7004 | 95900 | 0.8934 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7054 | 96585 | 0.8828 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7104 | 97270 | 0.895 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7154 | 97955 | 0.9171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7204 | 98640 | 0.8893 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7254 | 99325 | 0.8539 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7304 | 100010 | 0.8699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7354 | 100695 | 0.901 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7404 | 101380 | 0.8686 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7454 | 102065 | 0.9085 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7504 | 102750 | 0.8985 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7554 | 103435 | 0.8823 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7604 | 104120 | 0.9014 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7654 | 104805 | 0.8996 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7704 | 105490 | 0.9107 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7754 | 106175 | 0.875 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7804 | 106860 | 0.8662 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7854 | 107545 | 0.8906 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7904 | 108230 | 0.8993 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7954 | 108915 | 0.8606 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.8** | **109544** | **-** | **0.5142** | **0.2174** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8004 | 109600 | 0.8822 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8054 | 110285 | 0.8696 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8104 | 110970 | 0.8954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8154 | 111655 | 0.8714 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8204 | 112340 | 0.9049 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8254 | 113025 | 0.8926 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8304 | 113710 | 0.8731 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8354 | 114395 | 0.8805 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8404 | 115080 | 0.8649 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8454 | 115765 | 0.8999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8504 | 116450 | 0.8942 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8554 | 117135 | 0.8887 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8605 | 117820 | 0.8693 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8655 | 118505 | 0.8588 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8705 | 119190 | 0.8685 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8755 | 119875 | 0.9273 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8805 | 120560 | 0.9164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8855 | 121245 | 0.9086 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8905 | 121930 | 0.8691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8955 | 122615 | 0.8755 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9000 | 123237 | - | 0.5203 | 0.2176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9005 | 123300 | 0.9047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9055 | 123985 | 0.9107 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9105 | 124670 | 0.9226 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9155 | 125355 | 0.8988 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9205 | 126040 | 0.8795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9255 | 126725 | 0.8973 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9305 | 127410 | 0.8622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9355 | 128095 | 0.9181 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9405 | 128780 | 0.8847 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9455 | 129465 | 0.911 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9505 | 130150 | 0.8987 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9555 | 130835 | 0.8915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9605 | 131520 | 0.9008 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9655 | 132205 | 0.8999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9705 | 132890 | 0.8777 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9755 | 133575 | 0.8833 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9805 | 134260 | 0.8979 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9855 | 134945 | 0.8824 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9905 | 135630 | 0.8758 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9955 | 136315 | 0.8929 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0000 | 136927 | - | - | - | 0.5029 | 0.5841 | 0.5921 | 0.5041 | 0.5098 | 0.7549 | 0.7550 | 0.6425 | 0.6444 | 0.6657 | 0.6596 | 0.5432 | 0.5416 | 0.7302 | 0.7271 | 0.6143 | 0.6090 | 0.4958 | 0.5017 | 0.4156 | 0.4161 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.3.0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
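To reproduce this environment, an approximately equivalent pinned install is sketched below (versions taken from the list above; the `+cu124` PyTorch build is CUDA-specific, so substitute the build that matches your hardware):

```bash
pip install "sentence-transformers==3.3.0" "transformers==4.46.3" \
    "torch==2.5.1" "accelerate==1.1.1" "datasets==3.1.0" "tokenizers==0.20.3"
```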
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the rozetka_positive_pairs dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision d13f1b27baf31030b7fd040960d60d909913633f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- rozetka_positive_pairs
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
RZTKSentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
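In other words, token embeddings from an XLM-R encoder are mean-pooled (ignoring padding) and L2-normalized, so dot product and cosine similarity coincide on the outputs. A minimal sketch of the same three-module pipeline in plain `transformers` (assuming the checkpoint loads via `AutoModel`, which Sentence Transformers repositories support):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

batch = tokenizer(["query: ніж"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state    # (batch, seq, 768)

mask = batch["attention_mask"].unsqueeze(-1)                 # zero out padding
embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)  # mean pooling
embeddings = F.normalize(embeddings, p=2, dim=1)             # unit norm: dot == cosine
```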
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")
# Run inference
sentences = [
'query: ніж',
'passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый (24625/184)',
'passage: Ножи тактические BPS Knives Гарантия 14 дней Тип Нескладные Ножны/чехол Ножны Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель товара Украина Доставка Готов к отправке Доставка Доставка в магазины ROZETKA Вид Охотничьи Вид Рыбацкие Вид Туристические Вид Карманные',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
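Since the checkpoint was trained with E5-style prefixes (see the `prompts` hyperparameter below), the prefixes can also be applied through `encode()`'s `prompt` argument rather than written into the strings by hand; this should be equivalent to the example above:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")

queries = ["ніж"]
passages = ["Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый (24625/184)"]

# encode() prepends the given prompt to every input string
query_embeddings = model.encode(queries, prompt="query: ")
passage_embeddings = model.encode(passages, prompt="passage: ")

print(model.similarity(query_embeddings, passage_embeddings))  # 1x1 matrix of dot-product scores
```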
## Evaluation
### Metrics
#### RZTKInformation Retrieval
* Dataset: `validation--matryoshka_dim-768--`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | Value |
|:-----------------|:-----------|
| dot_accuracy_10 | 0.5072 |
| dot_precision_10 | 0.1443 |
| dot_recall_10 | 0.2278 |
| **dot_ndcg_10** | **0.2176** |
| dot_mrr_10 | 0.2648 |
| dot_map_60 | 0.185 |
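The `RZTKInformationRetrievalEvaluator` used here is a private subclass; the stock `InformationRetrievalEvaluator` computes the same `dot_*` metrics when given a dot-product score function. A toy sketch (the queries, corpus, and relevance judgments below are made-up stand-ins, not the actual validation set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
from sentence_transformers.util import dot_score

queries = {"q1": "query: ніж"}
corpus = {
    "d1": "passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый (24625/184)",
    "d2": "passage: Герметик огнеупорный Penosil Premium +1500°C Sealant Black 310 мл (H2130)",
}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant per query

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs,
    ndcg_at_k=[10], mrr_at_k=[10], map_at_k=[100],
    score_functions={"dot": dot_score},  # yields dot_ndcg@10, dot_mrr@10, ...
    name="toy-ir",
)
model = SentenceTransformer("yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8")
print(evaluator(model))
```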
#### RZTKInformation Retrieval
* Datasets: `bm-full`, `core-uk-title`, `core-ru-title`, `core-uk-options`, `core-ru-options`, `options-uk-title`, `options-ru-title`, `options-uk-options`, `options-ru-options`, `rusisms-uk-title`, `rusisms-ru-title`, `rusisms-uk-options`, `rusisms-ru-options`, `rusisms_corrected-uk-title`, `rusisms_corrected-ru-title`, `rusisms_corrected-uk-options`, `rusisms_corrected-ru-options`, `core_typos-uk-title`, `core_typos-ru-title`, `core_typos-uk-options` and `core_typos-ru-options`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | bm-full | core-uk-title | core-ru-title | core-uk-options | core-ru-options | options-uk-title | options-ru-title | options-uk-options | options-ru-options | rusisms-uk-title | rusisms-ru-title | rusisms-uk-options | rusisms-ru-options | rusisms_corrected-uk-title | rusisms_corrected-ru-title | rusisms_corrected-uk-options | rusisms_corrected-ru-options | core_typos-uk-title | core_typos-ru-title | core_typos-uk-options | core_typos-ru-options |
|:-----------------|:-----------|:--------------|:--------------|:----------------|:----------------|:-----------------|:-----------------|:-------------------|:-------------------|:-----------------|:-----------------|:-------------------|:-------------------|:---------------------------|:---------------------------|:-----------------------------|:-----------------------------|:--------------------|:--------------------|:----------------------|:----------------------|
| dot_accuracy_1 | 0.4989 | 0.653 | 0.6458 | 0.5082 | 0.5339 | 0.7706 | 0.7773 | 0.6347 | 0.6303 | 0.6349 | 0.6508 | 0.5143 | 0.5079 | 0.7468 | 0.7278 | 0.5886 | 0.5791 | 0.5359 | 0.5585 | 0.4281 | 0.4199 |
| dot_accuracy_3 | 0.667 | 0.8501 | 0.8634 | 0.7628 | 0.7844 | 0.9243 | 0.9265 | 0.8107 | 0.8419 | 0.8063 | 0.7873 | 0.6349 | 0.6508 | 0.8513 | 0.8386 | 0.7278 | 0.7278 | 0.7526 | 0.7659 | 0.6612 | 0.6591 |
| dot_accuracy_5 | 0.747 | 0.923 | 0.924 | 0.8573 | 0.8624 | 0.9621 | 0.9666 | 0.8864 | 0.882 | 0.8476 | 0.8349 | 0.7111 | 0.7016 | 0.8987 | 0.8766 | 0.7816 | 0.7911 | 0.8378 | 0.8368 | 0.7495 | 0.7485 |
| dot_accuracy_10 | 0.8432 | 0.9754 | 0.9774 | 0.9435 | 0.9292 | 0.9955 | 0.9978 | 0.9421 | 0.9465 | 0.8952 | 0.8889 | 0.7873 | 0.8 | 0.9304 | 0.9241 | 0.8544 | 0.8639 | 0.9076 | 0.8994 | 0.847 | 0.8552 |
| dot_precision_1 | 0.4989 | 0.653 | 0.6458 | 0.5082 | 0.5339 | 0.7706 | 0.7773 | 0.6347 | 0.6303 | 0.6349 | 0.6508 | 0.5143 | 0.5079 | 0.7468 | 0.7278 | 0.5886 | 0.5791 | 0.5359 | 0.5585 | 0.4281 | 0.4199 |
| dot_precision_3 | 0.5032 | 0.6304 | 0.6499 | 0.5209 | 0.5335 | 0.7335 | 0.7431 | 0.614 | 0.6206 | 0.6582 | 0.6476 | 0.5164 | 0.5206 | 0.7268 | 0.7215 | 0.5844 | 0.5802 | 0.5322 | 0.5469 | 0.4278 | 0.4319 |
| dot_precision_5 | 0.5069 | 0.6224 | 0.6244 | 0.515 | 0.5236 | 0.6984 | 0.7002 | 0.5898 | 0.5893 | 0.6495 | 0.6419 | 0.5187 | 0.5137 | 0.712 | 0.7038 | 0.5867 | 0.5854 | 0.5271 | 0.5339 | 0.4261 | 0.4273 |
| dot_precision_10 | 0.5027 | 0.5197 | 0.5285 | 0.4658 | 0.4664 | 0.5443 | 0.541 | 0.4873 | 0.4886 | 0.6219 | 0.6175 | 0.5092 | 0.5124 | 0.6718 | 0.6715 | 0.5728 | 0.5709 | 0.4425 | 0.4446 | 0.3835 | 0.3831 |
| dot_recall_1 | 0.0121 | 0.0668 | 0.0658 | 0.051 | 0.0534 | 0.1172 | 0.1175 | 0.0929 | 0.0927 | 0.0461 | 0.0439 | 0.0409 | 0.0376 | 0.0541 | 0.0546 | 0.0422 | 0.0448 | 0.0549 | 0.0572 | 0.0428 | 0.0417 |
| dot_recall_3 | 0.0359 | 0.1932 | 0.1988 | 0.1569 | 0.16 | 0.3355 | 0.3407 | 0.2672 | 0.2694 | 0.1203 | 0.1214 | 0.0913 | 0.0928 | 0.1355 | 0.1397 | 0.1121 | 0.1103 | 0.1639 | 0.1688 | 0.1286 | 0.1301 |
| dot_recall_5 | 0.0601 | 0.3151 | 0.3169 | 0.2567 | 0.2612 | 0.5278 | 0.5303 | 0.4269 | 0.4224 | 0.1848 | 0.1811 | 0.1392 | 0.1389 | 0.2064 | 0.2057 | 0.1688 | 0.1648 | 0.2674 | 0.2716 | 0.2122 | 0.214 |
| dot_recall_10 | 0.1176 | 0.5133 | 0.5225 | 0.4583 | 0.4586 | 0.7799 | 0.7774 | 0.6723 | 0.6773 | 0.3162 | 0.3097 | 0.2549 | 0.2484 | 0.3505 | 0.3494 | 0.2981 | 0.289 | 0.4386 | 0.441 | 0.3781 | 0.3783 |
| **dot_ndcg_10** | **0.5029** | **0.5841** | **0.5921** | **0.5041** | **0.5098** | **0.7549** | **0.755** | **0.6425** | **0.6444** | **0.6657** | **0.6596** | **0.5432** | **0.5416** | **0.7302** | **0.7271** | **0.6143** | **0.609** | **0.4958** | **0.5017** | **0.4156** | **0.4161** |
| dot_mrr_10 | 0.5984 | 0.7655 | 0.7636 | 0.654 | 0.6712 | 0.8518 | 0.857 | 0.738 | 0.7419 | 0.7263 | 0.7312 | 0.5967 | 0.5957 | 0.8088 | 0.7934 | 0.6718 | 0.6658 | 0.6624 | 0.6764 | 0.561 | 0.5601 |
| dot_map_100 | 0.3686 | 0.5692 | 0.5766 | 0.509 | 0.5139 | 0.7295 | 0.732 | 0.628 | 0.6256 | 0.5822 | 0.5843 | 0.5009 | 0.5023 | 0.6422 | 0.6462 | 0.5714 | 0.5707 | 0.4754 | 0.4823 | 0.415 | 0.4159 |
#### RZTKInformation Retrieval
* Datasets: `bm-full--matryoshka_dim-768--`, `bm-full--matryoshka_dim-512--`, `bm-full--matryoshka_dim-256--` and `bm-full--matryoshka_dim-128--`
* Evaluated with <code>sentence_transformers_training.evaluation.information_retrieval_evaluator.RZTKInformationRetrievalEvaluator</code>
| Metric | bm-full--matryoshka_dim-768-- | bm-full--matryoshka_dim-512-- | bm-full--matryoshka_dim-256-- | bm-full--matryoshka_dim-128-- |
|:----------------|:------------------------------|:------------------------------|:------------------------------|:------------------------------|
| dot_accuracy_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_precision_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_recall_1 | 0.0121 | 0.012 | 0.0119 | 0.0115 |
| **dot_ndcg_1** | **0.4989** | **0.5** | **0.4965** | **0.4873** |
| dot_mrr_1 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
| dot_map_100 | 0.3686 | 0.3655 | 0.3531 | 0.3308 |
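The lower-dimensional columns come from truncating the Matryoshka embedding to its first 512/256/128 dimensions. With stock Sentence Transformers this can be reproduced via `truncate_dim`; note that normalization happens at 768 dimensions, so truncated vectors are no longer unit-norm and can be re-normalized if cosine-equivalent dot scores are needed:

```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

model_256 = SentenceTransformer(
    "yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8",
    truncate_dim=256,  # keep only the first 256 Matryoshka dimensions
)
embeddings = model_256.encode(["query: ніж"], convert_to_tensor=True)
print(embeddings.shape)                            # torch.Size([1, 256])
embeddings = F.normalize(embeddings, p=2, dim=1)   # optional re-normalization
```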
## Training Details
### Training Dataset
#### rozetka_positive_pairs
* Dataset: rozetka_positive_pairs
* Size: 48,198,697 training samples
* Columns: <code>query</code> and <code>text</code>
* Approximate statistics based on the first 1000 samples:
| | query | text |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.75 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 61.91 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | text |
|:------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: клей огнеупорн</code> | <code>passage: Герметик вогнетривкий Penosil Premium +1500 °C Sealant Black 310 мл (H2130)</code> |
| <code>query: клей огнеупорн</code> | <code>passage: Герметики Penosil Гарантія 14 днів Габарити С Стандарт (до 300x200x250 мм) Кількість вантажних місць 1 Країна реєстрації бренда Естонія Кількість предметів, шт 1 Країна-виробник товару Естонія Вид_old Герметик Колір Чорний Об'єм 310 мл Тип гарантійного талона Без гарантійного талона Можливість доставки Почтомати Доставка Premium Немає Наявність товара по містах Київ і область Доставка Доставка в магазини ROZETKA</code> |
| <code>query: клей огнеупорн</code> | <code>passage: Герметик огнеупорный Penosil Premium +1500°C Sealant Black 310 мл (H2130)</code> |
* Loss: <code>sentence_transformers_training.model.matryoshka2d_loss.RZTKMatryoshka2dLoss</code> with these parameters:
```json
{
"loss": "RZTKMultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
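`RZTKMatryoshka2dLoss` is a private subclass, but the parameters above map one-to-one onto the stock `Matryoshka2dLoss` wrapping `MultipleNegativesRankingLoss`, so an approximate reconstruction looks like:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import Matryoshka2dLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-base")
base_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking

loss = Matryoshka2dLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128],  # one truncation sampled per step
    matryoshka_weights=[1, 1, 1, 1],
    n_layers_per_step=1,                   # "2D": also supervise earlier layers
    n_dims_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)
```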
### Evaluation Dataset
#### rozetka_positive_pairs
* Dataset: rozetka_positive_pairs
* Size: 3,309,787 evaluation samples
* Columns: <code>query</code> and <code>text</code>
* Approximate statistics based on the first 1000 samples:
| | query | text |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 7.58 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 72.78 tokens</li><li>max: 264 tokens</li></ul> |
* Samples:
| query | text |
|:------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: ніж</code> | <code>passage: Мисливський Ніж з Нержавіючої Сталі HK6 SSH BPS Knives - Ніж для риболовлі, мисливства, походів</code> |
| <code>query: ніж</code> | <code>passage: Ножі тактичні BPS Knives Гарантія 14 днів Тип Нескладані Кількість вантажних місць 1 Країна реєстрації бренда Україна Країна-виробник товару Україна Вид Туристичні Вид Авторські вироби Вид Сувенірні Вид Мисливські Вид Рибальські Вид Клинки</code> |
| <code>query: ніж</code> | <code>passage: Охотничий Нож из Нержавеющей Стали HK6 SSH BPS Knives - Нож для рыбалки, охоты, походов</code> |
* Loss: <code>sentence_transformers_training.model.matryoshka2d_loss.RZTKMatryoshka2dLoss</code> with these parameters:
```json
{
"loss": "RZTKMultipleNegativesRankingLoss",
"n_layers_per_step": 1,
"last_layer_weight": 1.0,
"prior_layers_weight": 1.0,
"kl_div_weight": 1.0,
"kl_temperature": 0.3,
"matryoshka_dims": [
768,
512,
256,
128
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": 1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 88
- `per_device_eval_batch_size`: 88
- `learning_rate`: 2e-05
- `num_train_epochs`: 1.0
- `warmup_ratio`: 0.1
- `bf16`: True
- `bf16_full_eval`: True
- `tf32`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `optim`: adafactor
- `push_to_hub`: True
- `hub_model_id`: yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8
- `hub_private_repo`: True
- `prompts`: {'query': 'query: ', 'text': 'passage: '}
- `batch_sampler`: no_duplicates
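A sketch of the non-default settings above as `SentenceTransformerTrainingArguments` (`output_dir` is a placeholder, and the Hub push settings are omitted):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-base-matryoshka2d-mnr-8",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=88,
    per_device_eval_batch_size=88,
    learning_rate=2e-5,
    num_train_epochs=1.0,
    warmup_ratio=0.1,
    bf16=True,
    bf16_full_eval=True,
    tf32=True,
    dataloader_num_workers=4,
    load_best_model_at_end=True,
    optim="adafactor",
    prompts={"query": "query: ", "text": "passage: "},
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```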
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 88
- `per_device_eval_batch_size`: 88
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: True
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adafactor
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: yklymchuk-rztk/multilingual-e5-base-matryoshka2d-mnr-8
- `hub_strategy`: every_save
- `hub_private_repo`: True
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'query: ', 'text': 'passage: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `ddp_static_graph`: False
- `ddp_comm_hook`: bf16
- `gradient_as_bucket_view`: False
- `num_proc`: 30
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | validation--matryoshka_dim-768--_dot_ndcg_10 | bm-full_dot_ndcg_10 | core-uk-title_dot_ndcg_10 | core-ru-title_dot_ndcg_10 | core-uk-options_dot_ndcg_10 | core-ru-options_dot_ndcg_10 | options-uk-title_dot_ndcg_10 | options-ru-title_dot_ndcg_10 | options-uk-options_dot_ndcg_10 | options-ru-options_dot_ndcg_10 | rusisms-uk-title_dot_ndcg_10 | rusisms-ru-title_dot_ndcg_10 | rusisms-uk-options_dot_ndcg_10 | rusisms-ru-options_dot_ndcg_10 | rusisms_corrected-uk-title_dot_ndcg_10 | rusisms_corrected-ru-title_dot_ndcg_10 | rusisms_corrected-uk-options_dot_ndcg_10 | rusisms_corrected-ru-options_dot_ndcg_10 | core_typos-uk-title_dot_ndcg_10 | core_typos-ru-title_dot_ndcg_10 | core_typos-uk-options_dot_ndcg_10 | core_typos-ru-options_dot_ndcg_10 | bm-full--matryoshka_dim-768--_dot_ndcg_1 | bm-full--matryoshka_dim-512--_dot_ndcg_1 | bm-full--matryoshka_dim-256--_dot_ndcg_1 | bm-full--matryoshka_dim-128--_dot_ndcg_1 |
|:-------:|:----------:|:-------------:|:---------------:|:--------------------------------------------:|:-------------------:|:-------------------------:|:-------------------------:|:---------------------------:|:---------------------------:|:----------------------------:|:----------------------------:|:------------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|:------------------------------:|:------------------------------:|:--------------------------------------:|:--------------------------------------:|:----------------------------------------:|:----------------------------------------:|:-------------------------------:|:-------------------------------:|:---------------------------------:|:---------------------------------:|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:|:----------------------------------------:|
| 0.0050 | 685 | 4.8405 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0100 | 1370 | 4.7324 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0150 | 2055 | 4.2883 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0200 | 2740 | 3.7416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0250 | 3425 | 2.9713 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0300 | 4110 | 2.6176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0350 | 4795 | 2.3556 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0400 | 5480 | 2.3165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0450 | 6165 | 2.1682 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0500 | 6850 | 2.0736 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0550 | 7535 | 2.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0600 | 8220 | 1.9518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0650 | 8905 | 1.8403 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0700 | 9590 | 1.7593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0750 | 10275 | 1.6818 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0800 | 10960 | 1.6754 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0850 | 11645 | 1.6262 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0900 | 12330 | 1.6023 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0950 | 13015 | 1.5401 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1000 | 13693 | - | 1.0477 | 0.1937 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1001 | 13700 | 1.4887 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1051 | 14385 | 1.4678 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1101 | 15070 | 1.3952 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1151 | 15755 | 1.3644 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1201 | 16440 | 1.3647 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1251 | 17125 | 1.3699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1301 | 17810 | 1.3937 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1351 | 18495 | 1.2854 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1401 | 19180 | 1.2726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1451 | 19865 | 1.231 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1501 | 20550 | 1.2434 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1551 | 21235 | 1.2331 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1601 | 21920 | 1.2234 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1651 | 22605 | 1.2241 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1701 | 23290 | 1.152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1751 | 23975 | 1.2317 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1801 | 24660 | 1.154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1851 | 25345 | 1.1455 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1901 | 26030 | 1.1682 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1951 | 26715 | 1.0611 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2000 | 27386 | - | 0.7107 | 0.2081 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2001 | 27400 | 1.1566 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2051 | 28085 | 1.0885 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2101 | 28770 | 1.1416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2151 | 29455 | 1.0956 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2201 | 30140 | 1.1213 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2251 | 30825 | 1.0704 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2301 | 31510 | 1.0939 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2351 | 32195 | 1.1006 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2401 | 32880 | 1.0622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2451 | 33565 | 1.0879 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2501 | 34250 | 1.0484 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2551 | 34935 | 1.0832 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2601 | 35620 | 1.0217 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2651 | 36305 | 1.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2701 | 36990 | 1.0158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2751 | 37675 | 0.9985 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2801 | 38360 | 1.0457 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 39045 | 1.0438 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2902 | 39730 | 1.0672 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2952 | 40415 | 1.0156 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3000 | 41079 | - | 0.6050 | 0.2112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3002 | 41100 | 1.0236 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3052 | 41785 | 0.9918 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3102 | 42470 | 0.9986 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3152 | 43155 | 0.99 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3202 | 43840 | 1.008 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3252 | 44525 | 1.0002 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3302 | 45210 | 0.9865 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3352 | 45895 | 1.0297 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3402 | 46580 | 0.9625 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3452 | 47265 | 0.9404 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3502 | 47950 | 0.9361 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3552 | 48635 | 0.9837 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3602 | 49320 | 1.0064 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3652 | 50005 | 1.0055 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3702 | 50690 | 0.9738 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3752 | 51375 | 0.9617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3802 | 52060 | 0.9404 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3852 | 52745 | 0.9589 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3902 | 53430 | 0.9787 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3952 | 54115 | 0.9493 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4000 | 54772 | - | 0.5548 | 0.2143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4002 | 54800 | 0.9617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4052 | 55485 | 0.9339 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4102 | 56170 | 0.9481 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4152 | 56855 | 0.9051 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4202 | 57540 | 0.9155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4252 | 58225 | 0.9018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4302 | 58910 | 0.9215 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4352 | 59595 | 0.9184 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4402 | 60280 | 0.9289 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4452 | 60965 | 0.9427 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4502 | 61650 | 0.9684 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4552 | 62335 | 0.9303 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4602 | 63020 | 0.9457 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4652 | 63705 | 0.9265 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4702 | 64390 | 0.8964 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4752 | 65075 | 0.9168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4803 | 65760 | 0.9077 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4853 | 66445 | 0.8908 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4903 | 67130 | 0.9193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4953 | 67815 | 0.9146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5000 | 68465 | - | 0.5362 | 0.2161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5003 | 68500 | 0.9219 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5053 | 69185 | 0.9174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5103 | 69870 | 0.909 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5153 | 70555 | 0.9299 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5203 | 71240 | 0.9047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5253 | 71925 | 0.9051 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5303 | 72610 | 0.9068 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5353 | 73295 | 0.8746 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5403 | 73980 | 0.9063 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5453 | 74665 | 0.9127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5503 | 75350 | 0.8763 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5553 | 76035 | 0.8984 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5603 | 76720 | 0.9007 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5653 | 77405 | 0.9071 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5703 | 78090 | 0.931 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5753 | 78775 | 0.9049 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5803 | 79460 | 0.8839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5853 | 80145 | 0.9217 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5903 | 80830 | 0.9099 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5953 | 81515 | 0.9126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6000 | 82158 | - | 0.5330 | 0.2171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6003 | 82200 | 0.8912 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6053 | 82885 | 0.878 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6103 | 83570 | 0.9114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6153 | 84255 | 0.9083 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6203 | 84940 | 0.9075 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6253 | 85625 | 0.8737 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6303 | 86310 | 0.8795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6353 | 86995 | 0.8832 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6403 | 87680 | 0.8986 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6453 | 88365 | 0.8635 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6503 | 89050 | 0.901 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6553 | 89735 | 0.8941 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6603 | 90420 | 0.8726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6653 | 91105 | 0.9078 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6704 | 91790 | 0.8888 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6754 | 92475 | 0.8544 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6804 | 93160 | 0.9402 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6854 | 93845 | 0.915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6904 | 94530 | 0.8781 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6954 | 95215 | 0.8752 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7000 | 95851 | - | 0.5202 | 0.2182 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7004 | 95900 | 0.8934 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7054 | 96585 | 0.8828 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7104 | 97270 | 0.895 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7154 | 97955 | 0.9171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7204 | 98640 | 0.8893 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7254 | 99325 | 0.8539 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7304 | 100010 | 0.8699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7354 | 100695 | 0.901 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7404 | 101380 | 0.8686 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7454 | 102065 | 0.9085 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7504 | 102750 | 0.8985 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7554 | 103435 | 0.8823 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7604 | 104120 | 0.9014 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7654 | 104805 | 0.8996 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7704 | 105490 | 0.9107 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7754 | 106175 | 0.875 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7804 | 106860 | 0.8662 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7854 | 107545 | 0.8906 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7904 | 108230 | 0.8993 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7954 | 108915 | 0.8606 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| **0.8** | **109544** | **-** | **0.5142** | **0.2174** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** | **-** |
| 0.8004 | 109600 | 0.8822 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8054 | 110285 | 0.8696 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8104 | 110970 | 0.8954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8154 | 111655 | 0.8714 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8204 | 112340 | 0.9049 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8254 | 113025 | 0.8926 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8304 | 113710 | 0.8731 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8354 | 114395 | 0.8805 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8404 | 115080 | 0.8649 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8454 | 115765 | 0.8999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8504 | 116450 | 0.8942 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8554 | 117135 | 0.8887 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8605 | 117820 | 0.8693 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8655 | 118505 | 0.8588 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8705 | 119190 | 0.8685 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8755 | 119875 | 0.9273 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8805 | 120560 | 0.9164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8855 | 121245 | 0.9086 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8905 | 121930 | 0.8691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8955 | 122615 | 0.8755 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9000 | 123237 | - | 0.5203 | 0.2176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9005 | 123300 | 0.9047 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9055 | 123985 | 0.9107 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9105 | 124670 | 0.9226 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9155 | 125355 | 0.8988 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9205 | 126040 | 0.8795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9255 | 126725 | 0.8973 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9305 | 127410 | 0.8622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9355 | 128095 | 0.9181 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9405 | 128780 | 0.8847 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9455 | 129465 | 0.911 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9505 | 130150 | 0.8987 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9555 | 130835 | 0.8915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9605 | 131520 | 0.9008 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9655 | 132205 | 0.8999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9705 | 132890 | 0.8777 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9755 | 133575 | 0.8833 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9805 | 134260 | 0.8979 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9855 | 134945 | 0.8824 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9905 | 135630 | 0.8758 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9955 | 136315 | 0.8929 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 1.0000 | 136927 | - | - | - | 0.5029 | 0.5841 | 0.5921 | 0.5041 | 0.5098 | 0.7549 | 0.7550 | 0.6425 | 0.6444 | 0.6657 | 0.6596 | 0.5432 | 0.5416 | 0.7302 | 0.7271 | 0.6143 | 0.6090 | 0.4958 | 0.5017 | 0.4156 | 0.4161 | 0.4989 | 0.5 | 0.4965 | 0.4873 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.3.0
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
|
{"base_model": "intfloat/multilingual-e5-base", "library_name": "sentence-transformers", "metrics": ["dot_accuracy_10", "dot_precision_10", "dot_recall_10", "dot_ndcg_10", "dot_mrr_10", "dot_map_60", "dot_accuracy_1", "dot_accuracy_3", "dot_accuracy_5", "dot_precision_1", "dot_precision_3", "dot_precision_5", "dot_recall_1", "dot_recall_3", "dot_recall_5", "dot_map_100", "dot_ndcg_1", "dot_mrr_1"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:48198697", "loss:RZTKMatryoshka2dLoss"], "widget": [{"source_sentence": "query: настільні ігри", "sentences": ["passage: Професійна LED+UV лампа для манікюру та нарощування нігтів SUN x11 MAX 66 світловіодів 180 W біла з чорним", "passage: Настольные игры Strateg Гарантия 14 дней Количество игроков От 2 до 4 Габариты_old C Стандарт (до 300x200x250 мм) Назначение Для двоих Назначение Для компании Жанр Карточные Жанр Развлекательные Материал Картон Материал Пластик Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель товара Украина Язык Украинский Детский возраст 4 года Детский возраст 7 лет Детский возраст 9 лет Детский возраст 5 лет Детский возраст 8 лет Детский возраст 10 лет Детский возраст 6 лет Тематика Карточные Тип гарантийного талона Гарантия по чеку Возможность доставки Почтоматы Доставка Premium Нет Наличие товара по городам Киев и область Доставка Доставка в магазины ROZETKA", "passage: Кишеньковий ніж Ganzo G619"]}, {"source_sentence": "query: ніж", "sentences": ["passage: Туристичний ніж Morakniv Robust (23050108)", "passage: Лампа SUN BQ-5T PLUS 180W WHITE LED/UV для полимеризации", "passage: Набор для шугаринга Madam Saharina паста твердая 400 г, тальк, шпатели"]}, {"source_sentence": "query: ніж", "sentences": ["passage: Складной Карманный Нож Финка Сталь 440 C с Гардой GR76", "passage: Настольная игра Strateg Монополия по-Одесски на украинском языке (30318) (4823113826840)", "passage: Складний самовикидний ніж з запобіжником №1251"]}, {"source_sentence": "query: лампа для манікюру", "sentences": ["passage: Ножі тактичні Grand Way Гарантія 3 місяці Тип Викидні Поверхня клинка Сатиноване покриття Піхви/чохол Немає Матеріал Метал Тип замка Frame Lock Матеріал клинка 420 Колір Чорний Примітка *Ножі, представлені в нашому магазині, не належать до холодної зброї за сукупністю характеристик згідно з висновком НДІ ЕКЦ МВС України Кількість вантажних місць 1 Країна реєстрації бренда Україна Країна-виробник товару Китай Тип гарантійного талона Гарантія по чеку Вага, г 155 Доставка Доставка в магазини ROZETKA Вид Кишенькові", "passage: Лампа SUN X54 White 54W UV/LED для полімеризації", "passage: Кишеньковий ніж Ganzo G620b-1 Black-Black"]}, {"source_sentence": "query: ніж", "sentences": ["passage: Кухонный нож Tramontina Profissional Master для овощей 102 мм Белый (24625/184)", "passage: Лампа ESPERANZA UV LED Lamp EBN007 для полимеризации", "passage: Ножи тактические BPS Knives Гарантия 14 дней Тип Нескладные Ножны/чехол Ножны Количество грузовых мест 1 Страна регистрации бренда Украина Страна-производитель товара Украина Доставка Готов к отправке Доставка Доставка в магазины ROZETKA Вид Охотничьи Вид Рыбацкие Вид Туристические Вид Карманные"]}], "model-index": [{"name": "SentenceTransformer based on intfloat/multilingual-e5-base", "results": [{"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "validation matryoshka dim 768 ", "type": 
"validation--matryoshka_dim-768--"}, "metrics": [{"type": "dot_accuracy_10", "value": 0.507201511991658, "name": "Dot Accuracy 10"}, {"type": "dot_precision_10", "value": 0.1442795229405631, "name": "Dot Precision 10"}, {"type": "dot_recall_10", "value": 0.22776333806589133, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.2175905090861651, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.264842172422519, "name": "Dot Mrr 10"}, {"type": "dot_map_60", "value": 0.184980466857725, "name": "Dot Map 60"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "bm full", "type": "bm-full"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.4989384288747346, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.6670205237084218, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7469922151450814, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.843241330502477, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.4989384288747346, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5031847133757962, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5068648266100496, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5027246992215145, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.012063598815833302, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.03587460812559255, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.0600948196481301, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.11762008717816595, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5029239135947354, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.5983870016513317, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.36856738827598634, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core uk title", "type": "core-uk-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6529774127310062, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8501026694045175, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.9229979466119097, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9753593429158111, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6529774127310062, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6303901437371663, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.6223819301848049, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5197125256673512, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.06676169794224511, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.19317800670092614, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.3150854347066148, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.5133241225296425, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5841120630996868, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.7654655976011217, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5691707661733787, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core ru title", "type": "core-ru-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6457905544147844, "name": "Dot Accuracy 1"}, {"type": 
"dot_accuracy_3", "value": 0.8634496919917864, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.9240246406570842, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9774127310061602, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6457905544147844, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6498973305954825, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.6244353182751541, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5285420944558521, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.0658006438241068, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.19884675900725543, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.31688340916508567, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.5224551236901762, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5921096158331104, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.7635829177666964, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5766445569031706, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core uk options", "type": "core-uk-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5082135523613963, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7628336755646817, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8572895277207392, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9435318275154004, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5082135523613963, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5208761122518822, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5149897330595483, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.46581108829568796, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.05096272351073655, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.15692054100829317, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.25669944302251946, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.45827616824264966, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5041110615142775, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.6539556240018252, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5090223360110971, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core ru options", "type": "core-ru-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5338809034907598, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.784394250513347, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8624229979466119, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9291581108829569, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5338809034907598, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5335386721423682, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5236139630390144, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.46642710472279264, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.05342347674603986, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.16003637053582698, "name": "Dot Recall 3"}, {"type": "dot_recall_5", 
"value": 0.2611603045630404, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.45855519604538203, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5097896528840739, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.671221602946449, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5138834029383393, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "options uk title", "type": "options-uk-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.7706013363028953, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.9242761692650334, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.9621380846325167, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9955456570155902, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.7706013363028953, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.7334818114328137, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.6984409799554565, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5443207126948775, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.11723591305996653, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.33553158052044463, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.5277887850827717, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.7799321564354972, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.7548631799996341, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.851841835472125, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.729503740667324, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "options ru title", "type": "options-ru-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.77728285077951, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.9265033407572383, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.9665924276169265, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9977728285077951, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.77728285077951, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.7431328878990348, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.7002227171492205, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5409799554565702, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.11746039780560939, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.34071240563445465, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.5302607686794769, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.7773921203765302, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.7549796461043052, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.8570208929897127, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.731989698879385, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "options uk options", "type": "options-uk-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6347438752783965, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8106904231625836, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8864142538975501, "name": "Dot 
Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9420935412026726, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6347438752783965, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6139569413511506, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5897550111358575, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.4873051224944321, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.0929473706199764, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.26723784134697276, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.426944757791083, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.6723484446758389, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.6425132104512846, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.7379609007671365, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.628037661942495, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "options ru options", "type": "options-ru-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6302895322939867, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8418708240534521, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8819599109131403, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9465478841870824, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6302895322939867, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6206384558277653, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5893095768374166, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.4886414253897551, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.0926663227888172, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.2693872385965927, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.4224152919587218, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.6773074364388395, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.644434921208596, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.741894686605154, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.6255501320885972, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms uk title", "type": "rusisms-uk-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6349206349206349, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8063492063492064, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8476190476190476, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8952380952380953, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6349206349206349, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6582010582010582, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.6495238095238096, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.621904761904762, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.04608561750016866, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.12030970402785328, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.18481151303807217, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.31615683931646443, "name": "Dot Recall 10"}, {"type": 
"dot_ndcg_10", "value": 0.6656660346071843, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.7263277903754093, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5822318416027721, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms ru title", "type": "rusisms-ru-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.6507936507936508, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7873015873015873, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.834920634920635, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8888888888888888, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.6507936507936508, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.6476190476190476, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.6419047619047619, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.6174603174603175, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.04386105773703947, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.12144099224790127, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.18110381262728556, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.3096903539599102, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.659607690171676, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.7312055933484504, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5842765581060543, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms uk options", "type": "rusisms-uk-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5142857142857142, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.6349206349206349, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7111111111111111, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.7873015873015873, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5142857142857142, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5164021164021164, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5187301587301587, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5092063492063492, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.04086070233812709, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.09125589967667369, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.1391826787319033, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.25493013951881177, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5432280320538901, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.5967384731670444, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5008966606414169, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms ru options", "type": "rusisms-ru-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5079365079365079, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.6507936507936508, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7015873015873015, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 
0.5079365079365079, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5206349206349207, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5136507936507937, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5123809523809524, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.03755920266398328, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.09275752986565318, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.13889507452669533, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.24837942719280243, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5416125065389893, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.5956512975560592, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5022978361138828, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms corrected uk title", "type": "rusisms_corrected-uk-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.7468354430379747, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8512658227848101, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8987341772151899, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.930379746835443, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.7468354430379747, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.7267932489451476, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.7120253164556962, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.6718354430379747, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.05413847530306109, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.13549638337016875, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.20640018757390072, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.3504834628281357, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.7301763805249227, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.8087590415913197, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.6422358164589997, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms corrected ru title", "type": "rusisms_corrected-ru-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.7278481012658228, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.8386075949367089, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8765822784810127, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9240506329113924, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.7278481012658228, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.7215189873417721, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.7037974683544304, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.6715189873417721, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.05456593757600845, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.13973934661722753, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.20571322767056138, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.34937377972766176, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.7270703377453234, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", 
"value": 0.7933707554751858, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.6461514090978902, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms corrected uk options", "type": "rusisms_corrected-uk-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5886075949367089, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7278481012658228, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7816455696202531, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8544303797468354, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5886075949367089, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5843881856540085, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5867088607594936, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5727848101265823, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.042248255233976226, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.11213843292540425, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.16876113275591878, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.29811718733233894, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.6143319751035731, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.6718492565802692, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5714145545212894, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "rusisms corrected ru options", "type": "rusisms_corrected-ru-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5791139240506329, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7278481012658228, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7911392405063291, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8639240506329114, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5791139240506329, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.580168776371308, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5854430379746836, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.5708860759493671, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.04477724151203175, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.11025620974877864, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.16476745621688715, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.2890210600181688, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.6090163972597312, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.6657725537472372, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.5706605720157263, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core typos uk title", "type": "core_typos-uk-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5359342915811088, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7525667351129364, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.837782340862423, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.9075975359342916, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5359342915811088, "name": "Dot Precision 
1"}, {"type": "dot_precision_3", "value": 0.5321697467488021, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5271047227926079, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.44250513347022585, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.0548592291269547, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.16390877679982643, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.26735461747389516, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.43859522796592487, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.49578055328230314, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.6623574036048365, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.4754406732534669, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core typos ru title", "type": "core_typos-ru-title"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5585215605749486, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.7659137577002053, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.8367556468172485, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8993839835728953, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.5585215605749486, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.5468856947296372, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.5338809034907598, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.44455852156057496, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.05721534893687328, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.16884014376465156, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.27158115568853575, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.4409582582165322, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.5016521479115571, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.6763954075812391, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.4823271804638839, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core typos uk options", "type": "core_typos-uk-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.42813141683778233, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.6611909650924025, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7494866529774127, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8470225872689938, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.42813141683778233, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.4277891854893908, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.4260780287474333, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.3834702258726899, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.042773227346394396, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.1286119443996301, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.21222453983053088, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.3781351048865242, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.41562231410456363, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.5610007007594269, "name": "Dot Mrr 10"}, {"type": 
"dot_map_100", "value": 0.414986485692781, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "core typos ru options", "type": "core_typos-ru-options"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.419917864476386, "name": "Dot Accuracy 1"}, {"type": "dot_accuracy_3", "value": 0.6591375770020534, "name": "Dot Accuracy 3"}, {"type": "dot_accuracy_5", "value": 0.7484599589322382, "name": "Dot Accuracy 5"}, {"type": "dot_accuracy_10", "value": 0.8552361396303901, "name": "Dot Accuracy 10"}, {"type": "dot_precision_1", "value": 0.419917864476386, "name": "Dot Precision 1"}, {"type": "dot_precision_3", "value": 0.4318959616700889, "name": "Dot Precision 3"}, {"type": "dot_precision_5", "value": 0.4273100616016427, "name": "Dot Precision 5"}, {"type": "dot_precision_10", "value": 0.38305954825462013, "name": "Dot Precision 10"}, {"type": "dot_recall_1", "value": 0.04174381486103916, "name": "Dot Recall 1"}, {"type": "dot_recall_3", "value": 0.13009388847654338, "name": "Dot Recall 3"}, {"type": "dot_recall_5", "value": 0.21404733730724185, "name": "Dot Recall 5"}, {"type": "dot_recall_10", "value": 0.37828240427286214, "name": "Dot Recall 10"}, {"type": "dot_ndcg_10", "value": 0.41609465156770326, "name": "Dot Ndcg 10"}, {"type": "dot_mrr_10", "value": 0.5601051953978028, "name": "Dot Mrr 10"}, {"type": "dot_map_100", "value": 0.4159363797020089, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "bm full matryoshka dim 768 ", "type": "bm-full--matryoshka_dim-768--"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.4989384288747346, "name": "Dot Accuracy 1"}, {"type": "dot_precision_1", "value": 0.4989384288747346, "name": "Dot Precision 1"}, {"type": "dot_recall_1", "value": 0.012063598815833302, "name": "Dot Recall 1"}, {"type": "dot_ndcg_1", "value": 0.4989384288747346, "name": "Dot Ndcg 1"}, {"type": "dot_mrr_1", "value": 0.4989384288747346, "name": "Dot Mrr 1"}, {"type": "dot_map_100", "value": 0.36856738827598634, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "bm full matryoshka dim 512 ", "type": "bm-full--matryoshka_dim-512--"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.5, "name": "Dot Accuracy 1"}, {"type": "dot_precision_1", "value": 0.5, "name": "Dot Precision 1"}, {"type": "dot_recall_1", "value": 0.011990029695710535, "name": "Dot Recall 1"}, {"type": "dot_ndcg_1", "value": 0.5, "name": "Dot Ndcg 1"}, {"type": "dot_mrr_1", "value": 0.5, "name": "Dot Mrr 1"}, {"type": "dot_map_100", "value": 0.36547264154192, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "bm full matryoshka dim 256 ", "type": "bm-full--matryoshka_dim-256--"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.4964614295824487, "name": "Dot Accuracy 1"}, {"type": "dot_precision_1", "value": 0.4964614295824487, "name": "Dot Precision 1"}, {"type": "dot_recall_1", "value": 0.011900603068482157, "name": "Dot Recall 1"}, {"type": "dot_ndcg_1", "value": 0.4964614295824487, "name": "Dot Ndcg 1"}, {"type": "dot_mrr_1", "value": 0.4964614295824487, "name": "Dot Mrr 1"}, {"type": "dot_map_100", "value": 0.3530775160276886, "name": "Dot Map 100"}]}, {"task": {"type": "rztkinformation-retrieval", "name": "RZTKInformation Retrieval"}, "dataset": {"name": "bm full matryoshka dim 128 ", "type": 
"bm-full--matryoshka_dim-128--"}, "metrics": [{"type": "dot_accuracy_1", "value": 0.4872611464968153, "name": "Dot Accuracy 1"}, {"type": "dot_precision_1", "value": 0.4872611464968153, "name": "Dot Precision 1"}, {"type": "dot_recall_1", "value": 0.011534666531071948, "name": "Dot Recall 1"}, {"type": "dot_ndcg_1", "value": 0.4872611464968153, "name": "Dot Ndcg 1"}, {"type": "dot_mrr_1", "value": 0.4872611464968153, "name": "Dot Mrr 1"}, {"type": "dot_map_100", "value": 0.3307544210079215, "name": "Dot Map 100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,172 |
osmario/UFS_PROCC_2024_Osmario
|
osmario
|
translation
|
[
"translation",
"en",
"pt",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"license:mit",
"region:us"
] | 2024-08-29T20:26:28Z |
2024-08-29T20:31:51+00:00
| 0 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
language:
- en
- pt
license: mit
pipeline_tag: translation
---
| null |
Non_BioNLP
|
{"base_model": "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2", "language": ["en", "pt"], "license": "mit", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 42,173 |
|
Romain-XV/18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb
|
Romain-XV
| null |
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-9b",
"base_model:adapter:unsloth/gemma-2-9b",
"license:gemma",
"region:us"
] | 2025-02-16T11:22:22Z |
2025-02-16T12:13:06+00:00
| 15 | 0 |
---
base_model: unsloth/gemma-2-9b
library_name: peft
license: gemma
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-9b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e64c9b6cec305e73_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e64c9b6cec305e73_train_data.json
type:
field_input: literal_translation
field_instruction: harm_category
field_output: prompt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 210
micro_batch_size: 4
mlflow_experiment_name: /tmp/e64c9b6cec305e73_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_rslora: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 895be764-6b76-4ed3-996a-63b382b1e742
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 895be764-6b76-4ed3-996a-63b382b1e742
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb
This model is a fine-tuned version of [unsloth/gemma-2-9b](https://huggingface.co/unsloth/gemma-2-9b) on the `e64c9b6cec305e73_train_data.json` dataset specified in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
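Pending further documentation, the sketch below shows one plausible way to load this LoRA adapter on top of its base model with PEFT. The repository IDs come from the configuration above; the dtype and generation settings are illustrative assumptions.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/gemma-2-9b"                                 # base_model in the config above
adapter_id = "Romain-XV/18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb"  # hub_model_id in the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Training concatenated the instruction and input fields as '{instruction} {input}'.
prompt = "example instruction example input"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```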
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 210
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2768 | 0.0047 | 1 | 2.7323 |
| 0.936 | 0.4665 | 100 | 0.9319 |
| 0.7796 | 0.9329 | 200 | 0.8081 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
| null |
Non_BioNLP
|
|
{"base_model": "unsloth/gemma-2-9b", "library_name": "peft", "license": "gemma", "tags": ["axolotl", "generated_from_trainer"], "model-index": [{"name": "18dd9c03-9893-4bdb-8e2f-bbfd3e0e8bdb", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,174 |
pauhidalgoo/cucafera-instruct
|
pauhidalgoo
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-02T21:51:23Z |
2025-03-22T16:17:55+00:00
| 36 | 0 |
---
base_model:
- pauhidalgoo/cucafera
datasets:
- baiges/patufet-IT
- baiges/alpaCAT
- baiges/patufet-QA
- pauhidalgoo/patufet-escrits
- baiges/patufet-human-interactions
- baiges/patufet-summaries
language:
- ca
library_name: transformers
license: apache-2.0
tags:
- catalan
- language-model
- transformer
- sft
model-index:
- name: cucafera-instruct
results:
- task:
type: language-understanding
name: arc_ca_challenge
dataset:
name: arc_ca_challenge
type: catalan_bench
metrics:
- type: acc
value: 0.2295
name: Accuracy
- type: acc_norm
value: 0.2534
name: Normalized Accuracy
- type: acc
value: 0.4238
name: Accuracy
- type: acc_norm
value: 0.4108
name: Normalized Accuracy
- type: acc
value: 0.614
name: Accuracy
source:
url: https://github.com/EleutherAI/lm-evaluation-harness
name: Eleuther AI LM Evaluation Harness
- task:
type: question-answering
name: catalanqa
dataset:
name: catalanqa
type: catalan_bench
metrics:
- type: exact_match
value: 0.0037
name: Exact Match
- type: f1
value: 0.0991
name: F1 Score
source:
url: https://github.com/EleutherAI/lm-evaluation-harness
name: Eleuther AI LM Evaluation Harness
- task:
type: machine-translation
name: flores_ca
dataset:
name: flores_ca
type: flores
metrics:
- type: bleu
value: 0.5934
name: BLEU
source:
url: https://github.com/EleutherAI/lm-evaluation-harness
name: Eleuther AI LM Evaluation Harness
---
# Model Card for cucafera 🔥🐲 (Instruct Model)
This document describes **cucafera (Instruct Model)**, a Catalan Large Language Model (LLM) fine-tuned to follow instructions and generate text in Catalan. Built upon the [cucafera](https://huggingface.co/pauhidalgoo/cucafera) base model, it leverages high-quality Catalan datasets and is optimized for instruction-following tasks.
## Model Details
### Model Description
**cucafera (Instruct Model)** is a 244-million parameter transformer-based language model inspired by the LLAMA architecture (notably LLAMA3). Despite its relatively small size compared to many contemporary models, it is optimized for generating coherent and contextually relevant text in Catalan.
- **Model Size:** 244M parameters
- **Architecture:** Transformer-based (LLAMA-inspired) with 30 layers
- **Embedding Size:** 768
- **Attention Mechanism:** 4 key/value heads and 8 query heads (using Grouped Query Attention - GQA)
- **Context Length:** 2048 tokens
- **Tokenizer:** Byte-Pair Encoding (BPE) with a vocabulary size of 65,536
- **Activation Function:** GeGLU
## Instruct Fine-Tuning
The instruct version of **cucafera** has been fine-tuned on a variety of instruction datasets to enhance its ability to follow user prompts. The fine-tuning was performed using Hugging Face's `SFTTrainer` and follows the ChatML format for conversation, for example:
```
<|im_start|>user
Fes un poema<|im_end|>
<|im_start|>assistant
```
### Training Data
The base model was pre-trained using the [patufet-pretrain](https://huggingface.co/datasets/pauhidalgoo/patufet-pretrain) dataset.
The fine-tuning data utilized a mix of instruction datasets from the [patufet](https://huggingface.co/collections/pauhidalgoo/patufet-66ca6dd3888e99a28dd616ae) collection.
### Fine-tuning Procedure
The model was fine-tuned with the following setup:
- **Total fine-tuning steps:** 1500
- **Per device train batch size:** 12
- **Sequence Length:** 2048
- **Learning rate:** 3e-5
- **Optimizer:** AdamW
- **Weight decay:** 0.01
- **Epochs**: 5
Different commits represent different fine-tuning procedures: we experimented with different data mixes, epochs, and datasets.
### Direct Use
The cucafera (Instruct Model) is designed for:
- Conversational agents and chatbots in Catalan.
- Task-specific applications such as summarization, translation (within Catalan), and creative writing.
- Educational and experimental research into instruction-following LLMs.
- Creative content generation, like poems or stories
However, due to its limited size, it cannot be relied on for factual accuracy, and you should keep this in mind when using the model.
### Out-of-Scope Uses
- **High-Stakes Applications:**
The model is not recommended for uses where extremely high factual accuracy is required or where outputs could have significant real-world consequences.
- **Non-Catalan Tasks:**
Since the model is exclusively trained on Catalan text, it is not suited for tasks in other languages without further training or fine-tuning.
- **Sensitive or safety-critical uses:** It has not undergone RLHF/DPO tuning, so outputs should be reviewed carefully.
## Bias, Risks, and Limitations
- The model's **instruction tuning is limited**, so it may not follow complex prompts reliably.
- It **only understands Catalan**, meaning it is unsuitable for multilingual applications.
- Due to its **small size (244M parameters)**, its knowledge and reasoning capabilities are limited.
- It was trained on **a limited dataset**, which may introduce biases in its outputs.
### Recommendations
- The goal of this model is educational. You are encouraged to train your own model.
- If used in production, **human review** of its outputs is recommended.
- Fine-tuning on task-specific data can **improve accuracy** and **mitigate biases**.
- Users should be cautious when using it in **sensitive or high-stakes applications**.
## Use the Instruct Model
You can use the instruct model via Hugging Face's `transformers` library. Make sure to format prompts with the **ChatML** template.
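A minimal sketch, assuming the tokenizer ships a ChatML chat template (the generation settings are illustrative, not the authors' recommendation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pauhidalgoo/cucafera-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# apply_chat_template renders the ChatML turns shown above
messages = [{"role": "user", "content": "Fes un poema sobre la mar."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```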
### Acknowledgements
This model was developed as an experimental project, inspired by Karpathy's [NanoGPT Series](https://github.com/karpathy/nanoGPT).
My colleague [Roger Baiges](https://huggingface.co/baiges) also trained his own [CatGPT](https://huggingface.co/baiges/CatGPT).
For more details, updates, or to contribute to the project, please visit the [GitHub repository](https://github.com/pauhidalgoo/cucafera)
| null |
Non_BioNLP
|
# Model Card for cucafera 🔥🐲 (Instruct Model)
This document describes **cucafera (Instruct Model)**, a Catalan Large Language Model (LLM) fine-tuned to follow instructions and generate text in Catalan. Built upon the base model, it leverages high-quality Catalan datasets and is optimized for instruction following tasks.
## Model Details
### Model Description
**cucafera (Instruct Model)** is a 244-million parameter transformer-based language model inspired by the LLAMA architecture (notably LLAMA3). Despite its relatively small size compared to many contemporary models, it is optimized for generating coherent and contextually relevant text in Catalan.
- **Model Size:** 244M parameters
- **Architecture:** Transformer-based (LLAMA-inspired) with 30 layers
- **Embedding Size:** 768
- **Attention Mechanism:** 4 key/value heads and 8 query heads (using Grouped Query Attention - GQA)
- **Context Length:** 2048 tokens
- **Tokenizer:** Byte-Pair Encoding (BPE) with a vocabulary size of 65,536
- **Activation Function:** GeGLU
## Instruct Fine-Tuning
The instruct version of **cucafera** has been fine-tuned on a variety of instruction datasets to enhance its ability to follow user prompts. The fine-tuning was performed using Hugging Face's `SFTTrainer` and follows the ChatML format for conversation, for example:
```
<|im_start|>user
Fes un poema<|im_end|>
<|im_start|>assistant
```
### Training Data
The base model was pre-trained using the [patufet-pretrain](https://huggingface.co/datasets/pauhidalgoo/patufet-pretrain) dataset.
The fine-tuning data utilized a mix of instruction datasets from the [patufet](https://huggingface.co/collections/pauhidalgoo/patufet-66ca6dd3888e99a28dd616ae) collection.
### Fine-tuning Procedure
The model was fine-tuned with the following setup:
- **Total fine-tuning steps:** 1500
- **Per device train batch size:** 12
- **Sequence Length:** 2048
- **Learning rate:** 3e-5
- **Optimizer:** AdamW
- **Weight decay:** 0.01
- **Epochs**: 5
Different commits represent different fine-tuning procedures: we experimented with different data mixes, epochs, and datasets.
### Direct Use
The cucafera (Instruct Model) is designed for:
- Conversational agents and chatbots in Catalan.
- Task-specific applications such as summarization, translation (within Catalan), and creative writing.
- Educational and experimental research into instruction-following LLMs.
- Creative content generation, like poems or stories
However, due to its limited size, it cannot be relied on for factual accuracy, and you should keep this in mind when using the model.
### Out-of-Scope Uses
- **High-Stakes Applications:**
The model is not recommended for uses where extremely high factual accuracy is required or where outputs could have significant real-world consequences.
- **Non-Catalan Tasks:**
Since the model is exclusively trained on Catalan text, it is not suited for tasks in other languages without further training or fine-tuning.
- **Sensitive or safety-critical uses:** It has not undergone RLHF/DPO tuning, so outputs should be reviewed carefully.
## Bias, Risks, and Limitations
- The model's **instruction tuning is limited**, so it may not follow complex prompts reliably.
- It **only understands Catalan**, meaning it is unsuitable for multilingual applications.
- Due to its **small size (244M parameters)**, its knowledge and reasoning capabilities are limited.
- It was trained on **a limited dataset**, which may introduce biases in its outputs.
### Recommendations
- The goal of this model is educational. You are encouraged to train your own model.
- If used in production, **human review** of its outputs is recommended.
- Fine-tuning on task-specific data can **improve accuracy** and **mitigate biases**.
- Users should be cautious when using it in **sensitive or high-stakes applications**.
## Use the Instruct Model
You can use the instruct model via Hugging Face's `transformers` library. Make sure to format prompts with the **ChatML** template.
### Acknowledgements
This model was developed as an experimental project, inspired by Karpathy's [NanoGPT Series](https://github.com/karpathy/nanoGPT).
My colleague [Roger Baiges](https://huggingface.co/baiges) also trained his own [CatGPT](https://huggingface.co/baiges/CatGPT).
For more details, updates, or to contribute to the project, please visit the [GitHub repository](https://github.com/pauhidalgoo/cucafera)
|
{"base_model": ["pauhidalgoo/cucafera"], "datasets": ["baiges/patufet-IT", "baiges/alpaCAT", "baiges/patufet-QA", "pauhidalgoo/patufet-escrits", "baiges/patufet-human-interactions", "baiges/patufet-summaries"], "language": ["ca"], "library_name": "transformers", "license": "apache-2.0", "tags": ["catalan", "language-model", "transformer", "sft"], "model-index": [{"name": "cucafera-instruct", "results": [{"task": {"type": "language-understanding", "name": "arc_ca_challenge"}, "dataset": {"name": "arc_ca_challenge", "type": "catalan_bench"}, "metrics": [{"type": "acc", "value": 0.2295, "name": "Accuracy"}, {"type": "acc_norm", "value": 0.2534, "name": "Normalized Accuracy"}, {"type": "acc", "value": 0.4238, "name": "Accuracy"}, {"type": "acc_norm", "value": 0.4108, "name": "Normalized Accuracy"}, {"type": "acc", "value": 0.614, "name": "Accuracy"}], "source": {"url": "https://github.com/EleutherAI/lm-evaluation-harness", "name": "Eleuther AI LM Evaluation Harness"}}, {"task": {"type": "question-answering", "name": "catalanqa"}, "dataset": {"name": "catalanqa", "type": "catalan_bench"}, "metrics": [{"type": "exact_match", "value": 0.0037, "name": "Exact Match"}, {"type": "f1", "value": 0.0991, "name": "F1 Score"}], "source": {"url": "https://github.com/EleutherAI/lm-evaluation-harness", "name": "Eleuther AI LM Evaluation Harness"}}, {"task": {"type": "machine-translation", "name": "flores_ca"}, "dataset": {"name": "flores_ca", "type": "flores"}, "metrics": [{"type": "bleu", "value": 0.5934, "name": "BLEU"}], "source": {"url": "https://github.com/EleutherAI/lm-evaluation-harness", "name": "Eleuther AI LM Evaluation Harness"}}]}]}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 42,175 |
EpistemeAI/Huacayas-6.4B-pretrain
|
EpistemeAI
|
text-generation
|
[
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | 2025-03-06T05:59:09Z |
2025-03-06T19:41:02+00:00
| 80 | 0 |
---
language:
- en
license: mit
pipeline_tag: text-generation
---
# Model
This is the pre-fine-tuned (base) Huacayas-6.4B model, a pretrained checkpoint intended to become a general-purpose, reasoning-focused 6.4B model.
This model requires further training before it is ready for inference.
## Model Details
A custom 6.4B-parameter architecture was created, and the model was then built using that architecture.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** EpistemeAI
- **License:** MIT
## Uses
Intended Use Cases: Huacayas 6.4B is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.
### Out-of-Scope Use
Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages beyond those explicitly referenced as supported in this model card.
## Bias, Risks, and Limitations
For these reasons, as with all LLMs, Huacayas 6.4B's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
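In the absence of an official snippet, here is a minimal loading sketch under the assumption that the checkpoint follows the standard Llama-style causal-LM layout (the repo tags list `pytorch` and `llama`). As a pre-fine-tuned base model, expect raw text continuation rather than chat behavior:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Huacayas-6.4B-pretrain"  # assumed from the repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The theory of relativity states that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```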
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model
This is the pre-fine-tuned (base) Huacayas-6.4B model, a pretrained checkpoint intended to become a general-purpose, reasoning-focused 6.4B model.
This model requires further training before it is ready for inference.
## Model Details
A custom 6.4B-parameter architecture was created, and the model was then built using that architecture.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** EpistemeAI
- **License:** MIT
## Uses
Intended Use Cases: Huacayas 6.4B is intended for commercial and research use in multiple languages. Instruction-tuned text-only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI-powered writing assistants, and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use cases with limited compute resources.
### Out-of-Scope Use
Out of Scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages beyond those explicitly referenced as supported in this model card.
## Bias, Risks, and Limitations
For these reasons, as with all LLMs, Huacayas 6.4B's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"language": ["en"], "license": "mit", "pipeline_tag": "text-generation"}
|
task
|
[
"SUMMARIZATION"
] | 42,177 |
ixa-ehu/berteus-base-cased
|
ixa-ehu
|
feature-extraction
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"eu",
"arxiv:2004.00033",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-09-11T13:30:55+00:00
| 116 | 4 |
---
language: eu
---
# BERTeus base cased
This is the Basque language pretrained model presented in [Give your Text Representation Models some Love: the Case for Basque](https://arxiv.org/pdf/2004.00033.pdf). This model has been trained on a Basque corpus comprising crawled Basque news articles from online newspapers and the Basque Wikipedia. The training corpus contains 224.6 million tokens, of which 35 million come from Wikipedia.
BERTeus has been tested on four different downstream tasks for Basque: part-of-speech (POS) tagging, named entity recognition (NER), sentiment analysis and topic classification; improving the state of the art for all tasks. See summary of results below:
| Downstream task | BERTeus | mBERT | Previous SOTA |
| --------------- | ------- | ------| ------------- |
| Topic Classification | **76.77** | 68.42 | 63.00 |
| Sentiment | **78.10** | 71.02 | 74.02 |
| POS | **97.76** | 96.37 | 96.10 |
| NER | **87.06** | 81.52 | 76.72 |
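A minimal feature-extraction sketch (standard `transformers` API; the mean-pooling step is an illustrative choice, not part of the original paper):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ixa-ehu/berteus-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentence = "Euskara hizkuntza ederra da."  # "Basque is a beautiful language."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```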
If using this model, please cite the following paper:
```
@inproceedings{agerri2020give,
title={Give your Text Representation Models some Love: the Case for Basque},
author={Rodrigo Agerri and I{\~n}aki San Vicente and Jon Ander Campos and Ander Barrena and Xabier Saralegi and Aitor Soroa and Eneko Agirre},
booktitle={Proceedings of the 12th International Conference on Language Resources and Evaluation},
year={2020}
}
```
| null |
Non_BioNLP
|
# BERTeus base cased
This is the Basque language pretrained model presented in [Give your Text Representation Models some Love: the Case for Basque](https://arxiv.org/pdf/2004.00033.pdf). This model has been trained on a Basque corpus comprising crawled Basque news articles from online newspapers and the Basque Wikipedia. The training corpus contains 224.6 million tokens, of which 35 million come from Wikipedia.
BERTeus has been tested on four different downstream tasks for Basque: part-of-speech (POS) tagging, named entity recognition (NER), sentiment analysis and topic classification; improving the state of the art for all tasks. See summary of results below:
| Downstream task | BERTeus | mBERT | Previous SOTA |
| --------------- | ------- | ------| ------------- |
| Topic Classification | **76.77** | 68.42 | 63.00 |
| Sentiment | **78.10** | 71.02 | 74.02 |
| POS | **97.76** | 96.37 | 96.10 |
| NER | **87.06** | 81.52 | 76.72 |
If using this model, please cite the following paper:
```
@inproceedings{agerri2020give,
title={Give your Text Representation Models some Love: the Case for Basque},
author={Rodrigo Agerri and I{\~n}aki San Vicente and Jon Ander Campos and Ander Barrena and Xabier Saralegi and Aitor Soroa and Eneko Agirre},
booktitle={Proceedings of the 12th International Conference on Language Resources and Evaluation},
year={2020}
}
```
|
{"language": "eu"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,178 |
mqy/mt5-small-finetuned-18jan-3
|
mqy
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-18T06:43:00Z |
2023-01-18T10:42:58+00:00
| 120 | 0 |
---
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-18jan-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-18jan-3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6115
- Rouge1: 7.259
- Rouge2: 0.3667
- Rougel: 7.1595
- Rougelsum: 7.156
## Model description
More information needed
## Intended uses & limitations
More information needed
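Pending fuller documentation, a minimal usage sketch (repo id taken from this card; generation lengths are illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-18jan-3")

article = "..."  # replace with the text to summarize
summary = summarizer(article, max_length=64, min_length=8)
print(summary[0]["summary_text"])
```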
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.1947 | 1.0 | 60 | 3.1045 | 5.91 | 0.8583 | 5.8687 | 5.8123 |
| 3.8567 | 2.0 | 120 | 2.7744 | 8.0065 | 0.4524 | 8.0204 | 7.85 |
| 3.4346 | 3.0 | 180 | 2.7319 | 7.5954 | 0.4524 | 7.5204 | 7.4833 |
| 3.219 | 4.0 | 240 | 2.6736 | 8.5329 | 0.3333 | 8.487 | 8.312 |
| 3.0836 | 5.0 | 300 | 2.6583 | 8.3405 | 0.5667 | 8.2003 | 8.0543 |
| 2.9713 | 6.0 | 360 | 2.6516 | 8.8421 | 0.1667 | 8.7597 | 8.6754 |
| 2.9757 | 7.0 | 420 | 2.6369 | 8.04 | 0.3667 | 8.0018 | 7.8489 |
| 2.8321 | 8.0 | 480 | 2.6215 | 6.8739 | 0.3667 | 6.859 | 6.7917 |
| 2.794 | 9.0 | 540 | 2.6090 | 7.0738 | 0.4167 | 7.0232 | 6.9619 |
| 2.7695 | 10.0 | 600 | 2.6115 | 7.259 | 0.3667 | 7.1595 | 7.156 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-18jan-3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6115
- Rouge1: 7.259
- Rouge2: 0.3667
- Rougel: 7.1595
- Rougelsum: 7.156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.1947 | 1.0 | 60 | 3.1045 | 5.91 | 0.8583 | 5.8687 | 5.8123 |
| 3.8567 | 2.0 | 120 | 2.7744 | 8.0065 | 0.4524 | 8.0204 | 7.85 |
| 3.4346 | 3.0 | 180 | 2.7319 | 7.5954 | 0.4524 | 7.5204 | 7.4833 |
| 3.219 | 4.0 | 240 | 2.6736 | 8.5329 | 0.3333 | 8.487 | 8.312 |
| 3.0836 | 5.0 | 300 | 2.6583 | 8.3405 | 0.5667 | 8.2003 | 8.0543 |
| 2.9713 | 6.0 | 360 | 2.6516 | 8.8421 | 0.1667 | 8.7597 | 8.6754 |
| 2.9757 | 7.0 | 420 | 2.6369 | 8.04 | 0.3667 | 8.0018 | 7.8489 |
| 2.8321 | 8.0 | 480 | 2.6215 | 6.8739 | 0.3667 | 6.859 | 6.7917 |
| 2.794 | 9.0 | 540 | 2.6090 | 7.0738 | 0.4167 | 7.0232 | 6.9619 |
| 2.7695 | 10.0 | 600 | 2.6115 | 7.259 | 0.3667 | 7.1595 | 7.156 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
{"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-18jan-3", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,179 |
kroonen/llama2-Q4_0-GGML
|
kroonen
| null |
[
"license:mit",
"region:us"
] | 2023-07-22T23:53:01Z |
2023-07-30T12:21:19+00:00
| 0 | 2 |
---
license: mit
---
# Model description
LLAMA-2-Q4_0 GGML (7 and 13b) is a language model trained by Meta AI. This model is based on the original LLAMA-2, but with a couple of key changes. It has been converted to F32 before being quantized to 4 bits. These alterations make the model more efficient in terms of memory and computational requirements, without significantly compromising its language understanding and generation capabilities.
# Intended uses & limitations
## How to use
This model can be used with llama.cpp (or similar) for a variety of natural language understanding and generation tasks. These include, but are not limited to, text completion, text generation, conversation modeling, and semantic similarity estimation.
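As one concrete route, a minimal sketch using the `llama-cpp-python` bindings; the file name is a placeholder, so point `model_path` at the GGML file you downloaded (note that newer llama.cpp builds expect GGUF rather than GGML):

```python
from llama_cpp import Llama

# Placeholder path: use the actual quantized GGML file from this repo
llm = Llama(model_path="./llama2-13b-q4_0.ggml.bin", n_ctx=2048)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```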
## Limitations and bias
While this model is designed to understand and generate human-like text, it has a few limitations:
1. It might generate incorrect or nonsensical responses if the input prompt is ambiguous or lacks sufficient context.
2. It is based on the data it was trained on and therefore might reflect the biases present in those data.
3. Despite the conversion and quantization, this model might still require substantial computational resources for large-scale tasks.
# Training data
The LLAMA-2-Q4_0 GGML (7 and 13b) model was trained on the same data as the original LLAMA-2. For more details, please refer to the LLAMA-2 model card.
# Evaluations
The performance is similar to that of the original LLAMA-2, with a slight drop due to the quantization process. More specific evaluation results will be added as they become available.
| null |
Non_BioNLP
|
# Model description
LLAMA-2-Q4_0 GGML (7 and 13b) is a language model trained by Meta AI. This model is based on the original LLAMA-2, but with a couple of key changes. It has been converted to F32 before being quantized to 4 bits. These alterations make the model more efficient in terms of memory and computational requirements, without significantly compromising its language understanding and generation capabilities.
# Intended uses & limitations
## How to use
This model can be used with llama.cpp (or similar) for a variety of natural language understanding and generation tasks. These include, but are not limited to, text completion, text generation, conversation modeling, and semantic similarity estimation.
## Limitations and bias
While this model is designed to understand and generate human-like text, it has a few limitations:
1. It might generate incorrect or nonsensical responses if the input prompt is ambiguous or lacks sufficient context.
2. It is based on the data it was trained on and therefore might reflect the biases present in those data.
3. Despite the conversion and quantization, this model might still require substantial computational resources for large-scale tasks.
# Training data
The LLAMA-2-Q4_0 GGML (7 and 13b) model was trained on the same data as the original LLAMA-2. For more details, please refer to the LLAMA-2 model card.
# Evaluations
The performance is similar to that of the original LLAMA-2, with a slight drop due to the quantization process. More specific evaluation results will be added as they become available.
|
{"license": "mit"}
|
task
|
[
"SEMANTIC_SIMILARITY"
] | 42,181 |
blockblockblock/Faro-Yi-9B-200K-bpw2.5
|
blockblockblock
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"en",
"dataset:wenbopan/Fusang-v1",
"dataset:wenbopan/OpenOrca-zh-20k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 2024-04-01T06:25:40Z |
2024-04-01T06:27:00+00:00
| 5 | 0 |
---
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
license: mit
---

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.**
# Faro-Yi-9B
Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.
## How to Use
Faro-Yi-9B uses the ChatML template. This makes it easy to set up system prompts and multi-turn conversations.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-9B"  # or a local path to this checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
{"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
{"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```
## Performance
Faro-Yi-9B improves on Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among open-source models at around 9B parameters.
<details> <summary>Benchmark Results</summary>
### Fact-based Evaluation (Open LLM Leaderboard)
| **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** |
| -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- |
| **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 |
| **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 |
### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench))
| **Name** | **Average_zh** | **Average_en** | **Code Completion** |
|----------------|----------------|----------------|---------------------|
| **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 |
| **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 |
<details>
<summary>Score breakdown</summary>
| **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
|----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
| **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 |
| **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** |
</details>
<!--### Performance on Preference TODO-->
### Bilingual Ability (CMMLU & MMLU)
| **Name** | MMLU | **CMMLU** |
| -------------- | --------- | --------- |
| **Yi-9B-200K** | 65.73 | 71.97 |
| **Faro-Yi-9B** | **68.80** | **73.28** |
</details>
| null |
Non_BioNLP
|

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro seamlessly works in both English and Chinese.**
# Faro-Yi-9B
Faro-Yi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Faro-Yi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.
## How to Use
Faro-Yi-9B uses the ChatML template. This makes it easy to set up system prompts and multi-turn conversations.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-9B"  # or a local path to this checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
messages = [
{"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
{"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```
## Performance
Faro-Yi-9B improves on Yi-9B-200K in most dimensions, especially in long-range modeling and bilingual (English, Chinese) understanding. Faro is competitive among open-source models at around 9B parameters.
<details> <summary>Benchmark Results</summary>
### Fact-based Evaluation (Open LLM Leaderboard)
| **Metric** | **MMLU** | **GSM8K** | **HellaSwag** | **TruthfulQA** | **Arc** | **Winogrande** |
| -------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- |
| **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 |
| **Faro-Yi-9B** | **68.80** | **63.08** | **57.28** | **40.86** | **72.58** | 71.11 |
### Long-context Modeling ([LongBench](https://github.com/THUDM/LongBench))
| **Name** | **Average_zh** | **Average_en** | **Code Completion** |
|----------------|----------------|----------------|---------------------|
| **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 |
| **Faro-Yi-9B** | **41.092** | **40.9536** | 46.0 |
<details>
<summary>Score breakdown</summary>
| **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
|----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
| **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 |
| **Faro-Yi-9B** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** |
</details>
<!--### Performance on Preference TODO-->
### Bilingual Ability (CMMLU & MMLU)
| **Name** | MMLU | **CMMLU** |
| -------------- | --------- | --------- |
| **Yi-9B-200K** | 65.73 | 71.97 |
| **Faro-Yi-9B** | **68.80** | **73.28** |
</details>
|
{"datasets": ["wenbopan/Fusang-v1", "wenbopan/OpenOrca-zh-20k"], "language": ["zh", "en"], "license": "mit"}
|
task
|
[
"SUMMARIZATION"
] | 42,182 |
Neuron-LM/neuron-1.5
|
Neuron-LM
|
question-answering
|
[
"transformers",
"pytorch",
"jax",
"rust",
"safetensors",
"gpt_neo",
"text-generation",
"question-answering",
"dataset:wikitext",
"dataset:openwebtext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-14T21:21:51Z |
2025-01-14T23:35:32+00:00
| 27 | 1 |
---
datasets:
- wikitext
- openwebtext
library_name: transformers
license: apache-2.0
pipeline_tag: question-answering
---
# Neuron-1.5: A Language Model by Neuron-LM
**Neuron-1.5** is the second-generation model in the Neuron-LM series, designed to push the boundaries of natural language processing by combining enhanced performance with versatility. Leveraging a robust architecture and extensive training, Neuron-1.5 builds upon the strengths of its predecessor to address more complex and diverse tasks.
---
## Model Overview
- **Number of Parameters:** 1.3 billion
- **Vocabulary Size:** 50,257 tokens
- **Training Tokens:** Trained on 380 billion tokens of high-quality textual data, ensuring deeper contextual understanding and improved generalization across various domains.
- **Maximum Sequence Length:** 2,048 tokens, enabling it to process and generate coherent text in extended contexts.
- **Training Framework:** Developed using state-of-the-art libraries for optimized performance, including integration with scalable frameworks like PyTorch and TensorFlow.
---
## Key Features
### 1. Contextual Mastery
Neuron-1.5 generates human-like responses with unmatched fluency and coherence, making it ideal for applications requiring advanced contextual understanding, such as the following (a minimal usage sketch appears after the list):
- Chatbots
- Content creation
- Question-answering systems
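A minimal usage sketch for these applications (the `text-generation` task is assumed from the repo's GPT-Neo tags, and the prompt format is illustrative):

```python
from transformers import pipeline

# Repo tags indicate a GPT-Neo causal LM, so text-generation is assumed
generator = pipeline("text-generation", model="Neuron-LM/neuron-1.5")

result = generator(
    "Question: What is the capital of France?\nAnswer:",
    max_new_tokens=30,
)
print(result[0]["generated_text"])
```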
### 2. Enhanced Efficiency
Neuron-1.5 optimizes computational efficiency despite its larger parameter size, ensuring low latency and resource-friendly inference for a wide range of deployments.
### 3. Versatile Adaptability
Neuron-1.5 adapts seamlessly to diverse use cases, including but not limited to:
- **Text Classification**: Accurate categorization of textual data
- **Sentiment Analysis**: Understanding emotional tones
- **Language Translation**: High-quality translations across multiple languages
- **Summarization**: Generating concise summaries of lengthy texts
- **Creative Writing**: Crafting compelling narratives and ideas
- **Legal and Technical Document Analysis**: Processing complex and structured information with accuracy
### 4. Advanced Pretraining
Trained on a vast and diverse dataset spanning multiple domains, Neuron-1.5 excels in both specialized and general-purpose tasks. Its robust training ensures reliability in handling nuanced queries.
### 5. Fine-Tuning Ready
Neuron-1.5 is designed for fine-tuning, allowing users to customize the model for specific tasks with minimal computational overhead, unlocking its full potential for tailored applications.
### 6. Scalable Deployment Options
Neuron-1.5 supports scalable deployment options, including:
- Cloud-based inference for high-availability applications.
- Edge deployment optimized for resource-constrained environments.
- Integration with APIs for seamless embedding into existing workflows.
---
## Technical Specifications
- **Architecture:** Transformer-based model
- **Parameter Distribution:** Balanced across layers for optimal performance
- **Data Diversity:** Includes encyclopedic entries, literature, technical documentation, conversational data, and more
- **Model Size:** Designed to balance performance and accessibility, suitable for consumer-grade GPUs
- **Pretraining Hardware:** Trained using a distributed setup with high-performance GPUs and TPUs for faster convergence
- **Optimization Techniques:** Employs techniques like mixed-precision training and gradient checkpointing to enhance efficiency
---
## Use Cases
Neuron-1.5 can be applied in a variety of industries and scenarios:
- **Healthcare:** Summarizing medical documents and providing conversational support for patients.
- **Education:** Assisting with automated tutoring systems and generating educational content.
- **E-commerce:** Enhancing product descriptions, sentiment analysis for reviews, and personalized marketing.
- **Finance:** Analyzing financial documents and generating detailed reports.
- **Entertainment:** Generating scripts, lyrics, and creative content for media production.
---
## About Neuron-LM
Neuron-LM is committed to advancing the field of AI with efficient, adaptable, and high-performance language models. Neuron-1.5 embodies this vision, offering developers and researchers a powerful tool to innovate and solve real-world challenges.
Neuron-LM strives to empower the AI community by providing open and adaptable models, encouraging innovation and collaboration. Join us in shaping the future of AI-powered solutions.
| null |
Non_BioNLP
|
# Neuron-1.5: A Language Model by Neuron-LM
**Neuron-1.5** is the second-generation model in the Neuron-LM series, designed to push the boundaries of natural language processing by combining enhanced performance with versatility. Leveraging a robust architecture and extensive training, Neuron-1.5 builds upon the strengths of its predecessor to address more complex and diverse tasks.
---
## Model Overview
- **Number of Parameters:** 1.3 billion
- **Vocabulary Size:** 50,257 tokens
- **Training Tokens:** Trained on 380 billion tokens of high-quality textual data, ensuring deeper contextual understanding and improved generalization across various domains.
- **Maximum Sequence Length:** 2,048 tokens, enabling it to process and generate coherent text in extended contexts.
- **Training Framework:** Developed using state-of-the-art libraries for optimized performance, including integration with scalable frameworks like PyTorch and TensorFlow.
---
## Key Features
### 1. Contextual Mastery
Neuron-1.5 generates human-like responses with unmatched fluency and coherence, making it ideal for applications requiring advanced contextual understanding, such as:
- Chatbots
- Content creation
- Question-answering systems
### 2. Enhanced Efficiency
Neuron-1.5 optimizes computational efficiency despite its larger parameter size, ensuring low latency and resource-friendly inference for a wide range of deployments.
### 3. Versatile Adaptability
Neuron-1.5 adapts seamlessly to diverse use cases, including but not limited to:
- **Text Classification**: Accurate categorization of textual data
- **Sentiment Analysis**: Understanding emotional tones
- **Language Translation**: High-quality translations across multiple languages
- **Summarization**: Generating concise summaries of lengthy texts
- **Creative Writing**: Crafting compelling narratives and ideas
- **Legal and Technical Document Analysis**: Processing complex and structured information with accuracy
### 4. Advanced Pretraining
Trained on a vast and diverse dataset spanning multiple domains, Neuron-1.5 excels in both specialized and general-purpose tasks. Its robust training ensures reliability in handling nuanced queries.
### 5. Fine-Tuning Ready
Neuron-1.5 is designed for fine-tuning, allowing users to customize the model for specific tasks with minimal computational overhead, unlocking its full potential for tailored applications.
### 6. Scalable Deployment Options
Neuron-1.5 supports scalable deployment options, including:
- Cloud-based inference for high-availability applications.
- Edge deployment optimized for resource-constrained environments.
- Integration with APIs for seamless embedding into existing workflows.
---
## Technical Specifications
- **Architecture:** Transformer-based model
- **Parameter Distribution:** Balanced across layers for optimal performance
- **Data Diversity:** Includes encyclopedic entries, literature, technical documentation, conversational data, and more
- **Model Size:** Designed to balance performance and accessibility, suitable for consumer-grade GPUs
- **Pretraining Hardware:** Trained using a distributed setup with high-performance GPUs and TPUs for faster convergence
- **Optimization Techniques:** Employs techniques like mixed-precision training and gradient checkpointing to enhance efficiency
---
## Use Cases
Neuron-1.5 can be applied in a variety of industries and scenarios:
- **Healthcare:** Summarizing medical documents and providing conversational support for patients.
- **Education:** Assisting with automated tutoring systems and generating educational content.
- **E-commerce:** Enhancing product descriptions, sentiment analysis for reviews, and personalized marketing.
- **Finance:** Analyzing financial documents and generating detailed reports.
- **Entertainment:** Generating scripts, lyrics, and creative content for media production.
---
## About Neuron-LM
Neuron-LM is committed to advancing the field of AI with efficient, adaptable, and high-performance language models. Neuron-1.5 embodies this vision, offering developers and researchers a powerful tool to innovate and solve real-world challenges.
Neuron-LM strives to empower the AI community by providing open and adaptable models, encouraging innovation and collaboration. Join us in shaping the future of AI-powered solutions.
|
{"datasets": ["wikitext", "openwebtext"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "question-answering"}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION",
"SUMMARIZATION"
] | 42,183 |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-04T03:20:55Z |
2023-02-08T00:03:46+00:00
| 25 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- type: accuracy
value: 0.1267605633802817
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5690
- Accuracy: 0.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
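Pending fuller documentation, a minimal inference sketch (repo id taken from this card; WNLI is a sentence-pair task, so premise and hypothesis are passed together, assuming a recent `transformers` version that accepts `text`/`text_pair` dict inputs):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli",
)

# WNLI pairs a premise with a hypothesis
result = classifier({
    "text": "The trophy doesn't fit in the suitcase because it is too big.",
    "text_pair": "The trophy is too big.",
})
print(result)
```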
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3354 | 1.0 | 435 | 0.5690 | 0.1268 |
| 0.299 | 2.0 | 870 | 0.5693 | 0.1408 |
| 0.2905 | 3.0 | 1305 | 0.6161 | 0.1127 |
| 0.2827 | 4.0 | 1740 | 0.6297 | 0.0704 |
| 0.2757 | 5.0 | 2175 | 0.6336 | 0.0986 |
| 0.2705 | 6.0 | 2610 | 0.6493 | 0.0845 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5690
- Accuracy: 0.1268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3354 | 1.0 | 435 | 0.5690 | 0.1268 |
| 0.299 | 2.0 | 870 | 0.5693 | 0.1408 |
| 0.2905 | 3.0 | 1305 | 0.6161 | 0.1127 |
| 0.2827 | 4.0 | 1740 | 0.6297 | 0.0704 |
| 0.2757 | 5.0 | 2175 | 0.6336 | 0.0986 |
| 0.2705 | 6.0 | 2610 | 0.6493 | 0.0845 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI", "type": "glue", "args": "wnli"}, "metrics": [{"type": "accuracy", "value": 0.1267605633802817, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,184 |
SnypzZz/Llama2-13b-Language-translate
|
SnypzZz
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"mbart",
"text2text-generation",
"text-generation-inference",
"code",
"PyTorch",
"multilingual",
"ar",
"cs",
"de",
"en",
"es",
"et",
"fi",
"fr",
"gu",
"hi",
"it",
"ja",
"kk",
"ko",
"lt",
"lv",
"my",
"ne",
"nl",
"ro",
"ru",
"si",
"tr",
"vi",
"zh",
"af",
"az",
"bn",
"fa",
"he",
"hr",
"id",
"ka",
"km",
"mk",
"ml",
"mn",
"mr",
"pl",
"ps",
"pt",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"uk",
"ur",
"xh",
"gl",
"sl",
"arxiv:2008.00401",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-04T07:43:41Z |
2024-01-05T09:11:55+00:00
| 2,163 | 122 |
---
language:
- multilingual
- ar
- cs
- de
- en
- es
- et
- fi
- fr
- gu
- hi
- it
- ja
- kk
- ko
- lt
- lv
- my
- ne
- nl
- ro
- ru
- si
- tr
- vi
- zh
- af
- az
- bn
- fa
- he
- hr
- id
- ka
- km
- mk
- ml
- mn
- mr
- pl
- ps
- pt
- sv
- sw
- ta
- te
- th
- tl
- uk
- ur
- xh
- gl
- sl
library_name: transformers
tags:
- transformers
- text-generation-inference
- code
- PyTorch
---
# mBART-50 one to many multilingual machine translation GGML
This model is a fine-tuned checkpoint of [TheBloke-Llama-2-13B](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML). `mbart-large-50-one-to-many-mmt` is fine-tuned for multilingual machine translation. It was introduced in [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper.
The model can translate English to other 49 languages mentioned below.
To translate into a target language, the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the `forced_bos_token_id` parameter to the `generate` method.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_en = "The head of the United Nations says there is no military solution in Syria"
model = MBartForConditionalGeneration.from_pretrained("SnypzZz/Llama2-13b-Language-translate")
tokenizer = MBart50TokenizerFast.from_pretrained("SnypzZz/Llama2-13b-Language-translate", src_lang="en_XX")
model_inputs = tokenizer(article_en, return_tensors="pt")
# translate from English to Hindi
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है'
# translate from English to Chinese
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => '联合国首脑说,叙利亚没有军事解决办法'
```
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support and discussions on these models and AI in general, join us at:
[SnypzZz's Discord server](https://discord.gg/g9MnGrAAyT)
P.S. I am a real gaming fanatic and this is also my gaming server,
so if anyone wants to play VALORANT or any other games, feel free to ping me: @SNYPER#1942.
## Instagram
[SnypzZz's Instagram](https://www.instagram.com/1nonly.lel/?next=%2F)
## LinkedIn
[SnypzZz's LinkedIn profile](https://www.linkedin.com/in/damodar-hegde-6a367720a/)
| null |
Non_BioNLP
|
# mBART-50 one-to-many multilingual machine translation
This repository hosts `mbart-large-50-one-to-many-mmt`, a fine-tuned checkpoint of [mBART-large-50](https://huggingface.co/facebook/mbart-large-50) for multilingual machine translation; despite the repository name, the code below loads an mBART architecture, not Llama-2. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).
The model can translate English into the other 49 languages listed below.
To translate into a target language, the target language id is forced as the first generated token: pass the `forced_bos_token_id` parameter to the `generate` method.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_en = "The head of the United Nations says there is no military solution in Syria"
model = MBartForConditionalGeneration.from_pretrained("SnypzZz/Llama2-13b-Language-translate")
tokenizer = MBart50TokenizerFast.from_pretrained("SnypzZz/Llama2-13b-Language-translate", src_lang="en_XX")
model_inputs = tokenizer(article_en, return_tensors="pt")
# translate from English to Hindi
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => 'संयुक्त राष्ट्र के नेता कहते हैं कि सीरिया में कोई सैन्य समाधान नहीं है'
# translate from English to Chinese
generated_tokens = model.generate(
**model_inputs,
forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"]
)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => '联合国首脑说,叙利亚没有军事解决办法'
```
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for more fine-tuned versions.
## Languages covered
Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)
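These codes can also be listed programmatically from the tokenizer used above (a minimal sketch; `lang_code_to_id` is the same mapping the translation example relies on):
```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("SnypzZz/Llama2-13b-Language-translate", src_lang="en_XX")
# Print every language code the tokenizer supports (ar_AR, cs_CZ, de_DE, ...).
print(sorted(tokenizer.lang_code_to_id))
```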
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support and discussions on these models and AI in general, join us at:
[SnypzZz's Discord server](https://discord.gg/g9MnGrAAyT)
P.S. I am a real gaming fanatic and this is also my gaming server,
so if anyone wants to play VALORANT or any other games, feel free to ping me: @SNYPER#1942.
## Instagram
[SnypzZz's Instagram](https://www.instagram.com/1nonly.lel/?next=%2F)
## LinkedIn
[SnypzZz's LinkedIn profile](https://www.linkedin.com/in/damodar-hegde-6a367720a/)
|
{"language": ["multilingual", "ar", "cs", "de", "en", "es", "et", "fi", "fr", "gu", "hi", "it", "ja", "kk", "ko", "lt", "lv", "my", "ne", "nl", "ro", "ru", "si", "tr", "vi", "zh", "af", "az", "bn", "fa", "he", "hr", "id", "ka", "km", "mk", "ml", "mn", "mr", "pl", "ps", "pt", "sv", "sw", "ta", "te", "th", "tl", "uk", "ur", "xh", "gl", "sl"], "library_name": "transformers", "tags": ["transformers", "text-generation-inference", "code", "PyTorch"]}
|
task
|
[
"TRANSLATION"
] | 42,185 |
gaudi/opus-mt-fr-ha-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:57:16Z |
2024-10-19T04:24:04+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-ha --output_dir ./ctranslate2/opus-mt-fr-ha-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type** (a device-selection sketch follows the list):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
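One way to pick these values automatically at runtime (a minimal sketch; it assumes only that the `ctranslate2` package is installed and uses its public `get_cuda_device_count` helper):
```python
import ctranslate2

# Select the device, then the matching quantized compute type per the list above.
device = "cuda" if ctranslate2.get_cuda_device_count() > 0 else "cpu"
compute_type = "int8_float16" if device == "cuda" else "int8"
print(f"device={device}, compute_type={compute_type}")
```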
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-ha-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-ha-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-ha-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-ha --output_dir ./ctranslate2/opus-mt-fr-ha-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type** (a device-selection sketch follows the list):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
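One way to pick these values automatically at runtime (a minimal sketch; it assumes only that the `ctranslate2` package is installed and uses its public `get_cuda_device_count` helper):
```python
import ctranslate2

# Select the device, then the matching quantized compute type per the list above.
device = "cuda" if ctranslate2.get_cuda_device_count() > 0 else "cpu"
compute_type = "int8_float16" if device == "cuda" else "int8"
print(f"device={device}, compute_type={compute_type}")
```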
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-ha-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-ha-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-ha-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-ha) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 42,186 |
daviddrzik/SK_Morph_BLM
|
daviddrzik
|
fill-mask
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"slovak-language-model",
"sk",
"dataset:oscar-corpus/OSCAR-2109",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-09-04T06:38:14Z |
2024-11-19T12:41:07+00:00
| 13 | 0 |
---
datasets:
- oscar-corpus/OSCAR-2109
language:
- sk
library_name: transformers
license: mit
pipeline_tag: fill-mask
tags:
- slovak-language-model
---
# Slovak Morphological Baby Language Model (SK_Morph_BLM)
**SK_Morph_BLM** is a pretrained small language model for the Slovak language, based on the RoBERTa architecture. The model utilizes a custom morphological tokenizer (**SKMT**, more info [here](https://github.com/daviddrzik/Slovak_subword_tokenizers)) specifically designed for the Slovak language, which focuses on **preserving the integrity of root morphemes**. This tokenizer is not compatible with the standard `RobertaTokenizer` from the Hugging Face library due to its unique approach to tokenization. The model is case-insensitive, meaning it operates in lowercase. While the pretrained model can be used for masked language modeling, it is primarily intended for fine-tuning on downstream NLP tasks.
## How to Use the Model
To use the SK_Morph_BLM model, follow these steps:
```python
import torch
import sys
from transformers import AutoModelForMaskedLM
from huggingface_hub import snapshot_download
# Download the repository from Hugging Face and append the path to sys.path
repo_path = snapshot_download(repo_id="daviddrzik/SK_Morph_BLM")
sys.path.append(repo_path)
# Import the custom tokenizer from the downloaded repository
from SKMT_lib_v2.SKMT_BPE import SKMorfoTokenizer
# Initialize the tokenizer and model
tokenizer = SKMorfoTokenizer()
model = AutoModelForMaskedLM.from_pretrained("daviddrzik/SK_Morph_BLM")
# Function to fill in the masked token in a given text
def fill_mask(tokenized_text, tokenizer, model, top_k=5):
inputs = tokenizer.tokenize(tokenized_text.lower(), max_length=256, return_tensors='pt', return_subword=False)
mask_token_index = torch.where(inputs["input_ids"][0] == 4)[0]
with torch.no_grad():
predictions = model(**inputs)
topk_tokens = torch.topk(predictions.logits[0, mask_token_index], k=top_k, dim=-1).indices
fill_results = []
for idx, i in enumerate(mask_token_index):
for j, token_idx in enumerate(topk_tokens[idx]):
token_text = tokenizer.convert_ids_to_tokens(token_idx.item())
token_text = token_text.replace("Ġ", " ") # Replace special characters with a space
probability = torch.softmax(predictions.logits[0, i], dim=-1)[token_idx].item()
fill_results.append({
'score': probability,
'token': token_idx.item(),
'token_str': token_text,
'sequence': tokenized_text.replace("<mask>", token_text.strip())
})
fill_results.sort(key=lambda x: x['score'], reverse=True)
return fill_results
# Example usage of the function
text = "Včera večer sme <mask> nový film v kine, ktorý mal premiéru iba pred týždňom."
result = fill_mask(text.lower(), tokenizer, model, top_k=5)
print(result)
[{'score': 0.4014046788215637,
'token': 6626,
'token_str': ' videli',
'sequence': 'včera večer sme videli nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.15018892288208008,
'token': 874,
'token_str': ' mali',
'sequence': 'včera večer sme mali nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.057530131191015244,
'token': 21193,
'token_str': ' pozreli',
'sequence': 'včera večer sme pozreli nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.049020398408174515,
'token': 26468,
'token_str': ' sledovali',
'sequence': 'včera večer sme sledovali nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.04107135161757469,
'token': 9171,
'token_str': ' objavili',
'sequence': 'včera večer sme objavili nový film v kine, ktorý mal premiéru iba pred týždňom.'}]
```
## Training Data
The `SK_Morph_BLM` model was pretrained using a subset of the OSCAR 2019 corpus, specifically focusing on the Slovak language. The corpus underwent comprehensive preprocessing to ensure the quality and relevance of the data:
- **Language Filtering:** Non-Slovak text was removed to focus solely on the Slovak language.
- **Character Normalization:** Various types of spaces, quotes, dashes, and separators were standardized (e.g., replacing different types of spaces with a single space, or dashes with hyphens). Emoticons were replaced with spaces.
- **Symbol and Unwanted Text Removal:** Sentences containing mathematical symbols, pictograms, or characters from Asian and African languages were deleted. Duplicates of punctuation, special characters, and spaces were also removed.
- **URL and Text Normalization:** All web addresses were removed, and the text was converted to lowercase to simplify tokenization.
- **Content Cleanup:** Text that included irrelevant content from web crawling, such as keywords and HTML tags, was identified and removed.
Additionally, the preprocessing included further refinement steps to create the final dataset:
- **Parentheses Content Removal:** All content within parentheses was removed to reduce noise.
- **Selection of Text Segments:** Medium-length text paragraphs were selected to maintain consistency.
- **Similarity Filtering:** Paragraphs with at least 50% similarity to previous ones were removed to minimize redundancy.
- **Random Sampling:** Finally, 20% of the remaining paragraphs were randomly selected.
After preprocessing, the training corpus consisted of:
- **455 MB of text**
- **895,125 paragraphs**
- **64.6 million words**
- **1.13 million unique words**
- **119 unique characters**
## Pretraining
The `SK_Morph_BLM` model was trained with the following key parameters:
- **Architecture:** Based on RoBERTa, with 6 hidden layers and 12 attention heads.
- **Hidden size:** 576
- **Vocabulary size:** 50,264 tokens
- **Sequence length:** 256 tokens
- **Dropout:** 0.1
- **Number of parameters:** 58 million
- **Optimizer:** AdamW, learning rate 1×10^(-4), weight decay 0.01
- **Training:** 30 epochs, divided into 3 phases:
- **Phase 1:** 10 epochs on CPU (4x AMD EPYC 7542), batch size 64, 50 hours per epoch, 139,870 steps total.
- **Phase 2:** 5 epochs on GPU (1x Nvidia A100 40GB), batch size 64, 100 minutes per epoch, 69,935 steps total.
- **Phase 3:** 15 epochs on GPU (2x Nvidia A100 40GB), batch size 128, 60 minutes per epoch, 104,910 steps total.
The model was trained using the Hugging Face library, but without using the `Trainer` class—native PyTorch was used instead.
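For illustration, the architecture above maps onto a `RobertaConfig` roughly as follows (a hedged sketch: `max_position_embeddings` and the remaining defaults are assumptions, since the pretraining script itself is not published):
```python
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=50264,                 # SKMT vocabulary size
    hidden_size=576,
    num_hidden_layers=6,
    num_attention_heads=12,           # 576 / 12 = 48 dimensions per head
    max_position_embeddings=256 + 2,  # sequence length + special positions (assumed)
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly matches the reported 58M
```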
## Fine-Tuned Versions of the SK_Morph_BLM Model
Here are the fine-tuned versions of the `SK_Morph_BLM` model based on the folders provided:
- [`SK_Morph_BLM-ner`](https://huggingface.co/daviddrzik/SK_Morph_BLM-ner): Fine-tuned for Named Entity Recognition (NER) tasks.
- [`SK_Morph_BLM-pos`](https://huggingface.co/daviddrzik/SK_Morph_BLM-pos): Fine-tuned for Part-of-Speech (POS) tagging.
- [`SK_Morph_BLM-qa`](https://huggingface.co/daviddrzik/SK_Morph_BLM-qa): Fine-tuned for Question Answering tasks.
- [`SK_Morph_BLM-sentiment-csfd`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-csfd): Fine-tuned for sentiment analysis on the CSFD (movie review) dataset.
- [`SK_Morph_BLM-sentiment-multidomain`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-multidomain): Fine-tuned for sentiment analysis across multiple domains.
- [`SK_Morph_BLM-sentiment-reviews`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-reviews): Fine-tuned for sentiment analysis on general review datasets.
- [`SK_Morph_BLM-topic-news`](https://huggingface.co/daviddrzik/SK_Morph_BLM-topic-news): Fine-tuned for topic classification in news articles.
## Citation
If you find our model or paper useful, please consider citing our work:
### Article:
Držík, D., & Forgac, F. (2024). Slovak morphological tokenizer using the Byte-Pair Encoding algorithm. PeerJ Computer Science, 10, e2465. https://doi.org/10.7717/peerj-cs.2465
### BibTeX Entry:
```bib
@article{drzik2024slovak,
title={Slovak morphological tokenizer using the Byte-Pair Encoding algorithm},
author={Držík, Dávid and Forgac, František},
journal={PeerJ Computer Science},
volume={10},
pages={e2465},
year={2024},
month={11},
issn={2376-5992},
doi={10.7717/peerj-cs.2465}
}
```
| null |
Non_BioNLP
|
# Slovak Morphological Baby Language Model (SK_Morph_BLM)
**SK_Morph_BLM** is a pretrained small language model for the Slovak language, based on the RoBERTa architecture. The model utilizes a custom morphological tokenizer (**SKMT**, more info [here](https://github.com/daviddrzik/Slovak_subword_tokenizers)) specifically designed for the Slovak language, which focuses on **preserving the integrity of root morphemes**. This tokenizer is not compatible with the standard `RobertaTokenizer` from the Hugging Face library due to its unique approach to tokenization. The model is case-insensitive, meaning it operates in lowercase. While the pretrained model can be used for masked language modeling, it is primarily intended for fine-tuning on downstream NLP tasks.
## How to Use the Model
To use the SK_Morph_BLM model, follow these steps:
```python
import torch
import sys
from transformers import AutoModelForMaskedLM
from huggingface_hub import snapshot_download
# Download the repository from Hugging Face and append the path to sys.path
repo_path = snapshot_download(repo_id="daviddrzik/SK_Morph_BLM")
sys.path.append(repo_path)
# Import the custom tokenizer from the downloaded repository
from SKMT_lib_v2.SKMT_BPE import SKMorfoTokenizer
# Initialize the tokenizer and model
tokenizer = SKMorfoTokenizer()
model = AutoModelForMaskedLM.from_pretrained("daviddrzik/SK_Morph_BLM")
# Function to fill in the masked token in a given text
def fill_mask(tokenized_text, tokenizer, model, top_k=5):
inputs = tokenizer.tokenize(tokenized_text.lower(), max_length=256, return_tensors='pt', return_subword=False)
mask_token_index = torch.where(inputs["input_ids"][0] == 4)[0]
with torch.no_grad():
predictions = model(**inputs)
topk_tokens = torch.topk(predictions.logits[0, mask_token_index], k=top_k, dim=-1).indices
fill_results = []
for idx, i in enumerate(mask_token_index):
for j, token_idx in enumerate(topk_tokens[idx]):
token_text = tokenizer.convert_ids_to_tokens(token_idx.item())
token_text = token_text.replace("Ġ", " ") # Replace special characters with a space
probability = torch.softmax(predictions.logits[0, i], dim=-1)[token_idx].item()
fill_results.append({
'score': probability,
'token': token_idx.item(),
'token_str': token_text,
'sequence': tokenized_text.replace("<mask>", token_text.strip())
})
fill_results.sort(key=lambda x: x['score'], reverse=True)
return fill_results
# Example usage of the function
text = "Včera večer sme <mask> nový film v kine, ktorý mal premiéru iba pred týždňom."
result = fill_mask(text.lower(), tokenizer, model, top_k=5)
print(result)
[{'score': 0.4014046788215637,
'token': 6626,
'token_str': ' videli',
'sequence': 'včera večer sme videli nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.15018892288208008,
'token': 874,
'token_str': ' mali',
'sequence': 'včera večer sme mali nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.057530131191015244,
'token': 21193,
'token_str': ' pozreli',
'sequence': 'včera večer sme pozreli nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.049020398408174515,
'token': 26468,
'token_str': ' sledovali',
'sequence': 'včera večer sme sledovali nový film v kine, ktorý mal premiéru iba pred týždňom.'},
{'score': 0.04107135161757469,
'token': 9171,
'token_str': ' objavili',
'sequence': 'včera večer sme objavili nový film v kine, ktorý mal premiéru iba pred týždňom.'}]
```
## Training Data
The `SK_Morph_BLM` model was pretrained using a subset of the OSCAR 2019 corpus, specifically focusing on the Slovak language. The corpus underwent comprehensive preprocessing to ensure the quality and relevance of the data:
- **Language Filtering:** Non-Slovak text was removed to focus solely on the Slovak language.
- **Character Normalization:** Various types of spaces, quotes, dashes, and separators were standardized (e.g., replacing different types of spaces with a single space, or dashes with hyphens). Emoticons were replaced with spaces.
- **Symbol and Unwanted Text Removal:** Sentences containing mathematical symbols, pictograms, or characters from Asian and African languages were deleted. Duplicates of punctuation, special characters, and spaces were also removed.
- **URL and Text Normalization:** All web addresses were removed, and the text was converted to lowercase to simplify tokenization.
- **Content Cleanup:** Text that included irrelevant content from web crawling, such as keywords and HTML tags, was identified and removed.
Additionally, the preprocessing included further refinement steps to create the final dataset:
- **Parentheses Content Removal:** All content within parentheses was removed to reduce noise.
- **Selection of Text Segments:** Medium-length text paragraphs were selected to maintain consistency.
- **Similarity Filtering:** Paragraphs with at least 50% similarity to previous ones were removed to minimize redundancy.
- **Random Sampling:** Finally, 20% of the remaining paragraphs were randomly selected.
After preprocessing, the training corpus consisted of:
- **455 MB of text**
- **895,125 paragraphs**
- **64.6 million words**
- **1.13 million unique words**
- **119 unique characters**
## Pretraining
The `SK_Morph_BLM` model was trained with the following key parameters:
- **Architecture:** Based on RoBERTa, with 6 hidden layers and 12 attention heads.
- **Hidden size:** 576
- **Vocabulary size:** 50,264 tokens
- **Sequence length:** 256 tokens
- **Dropout:** 0.1
- **Number of parameters:** 58 million
- **Optimizer:** AdamW, learning rate 1×10^(-4), weight decay 0.01
- **Training:** 30 epochs, divided into 3 phases:
- **Phase 1:** 10 epochs on CPU (4x AMD EPYC 7542), batch size 64, 50 hours per epoch, 139,870 steps total.
- **Phase 2:** 5 epochs on GPU (1x Nvidia A100 40GB), batch size 64, 100 minutes per epoch, 69,935 steps total.
- **Phase 3:** 15 epochs on GPU (2x Nvidia A100 40GB), batch size 128, 60 minutes per epoch, 104,910 steps total.
The model was trained using the Hugging Face library, but without using the `Trainer` class—native PyTorch was used instead.
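For illustration, the architecture above maps onto a `RobertaConfig` roughly as follows (a hedged sketch: `max_position_embeddings` and the remaining defaults are assumptions, since the pretraining script itself is not published):
```python
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=50264,                 # SKMT vocabulary size
    hidden_size=576,
    num_hidden_layers=6,
    num_attention_heads=12,           # 576 / 12 = 48 dimensions per head
    max_position_embeddings=256 + 2,  # sequence length + special positions (assumed)
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly matches the reported 58M
```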
## Fine-Tuned Versions of the SK_Morph_BLM Model
Here are the fine-tuned versions of the `SK_Morph_BLM` model based on the folders provided:
- [`SK_Morph_BLM-ner`](https://huggingface.co/daviddrzik/SK_Morph_BLM-ner): Fine-tuned for Named Entity Recognition (NER) tasks.
- [`SK_Morph_BLM-pos`](https://huggingface.co/daviddrzik/SK_Morph_BLM-pos): Fine-tuned for Part-of-Speech (POS) tagging.
- [`SK_Morph_BLM-qa`](https://huggingface.co/daviddrzik/SK_Morph_BLM-qa): Fine-tuned for Question Answering tasks.
- [`SK_Morph_BLM-sentiment-csfd`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-csfd): Fine-tuned for sentiment analysis on the CSFD (movie review) dataset.
- [`SK_Morph_BLM-sentiment-multidomain`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-multidomain): Fine-tuned for sentiment analysis across multiple domains.
- [`SK_Morph_BLM-sentiment-reviews`](https://huggingface.co/daviddrzik/SK_Morph_BLM-sentiment-reviews): Fine-tuned for sentiment analysis on general review datasets.
- [`SK_Morph_BLM-topic-news`](https://huggingface.co/daviddrzik/SK_Morph_BLM-topic-news): Fine-tuned for topic classification in news articles.
## Citation
If you find our model or paper useful, please consider citing our work:
### Article:
Držík, D., & Forgac, F. (2024). Slovak morphological tokenizer using the Byte-Pair Encoding algorithm. PeerJ Computer Science, 10, e2465. https://doi.org/10.7717/peerj-cs.2465
### BibTeX Entry:
```bib
@article{drzik2024slovak,
title={Slovak morphological tokenizer using the Byte-Pair Encoding algorithm},
author={Držík, Dávid and Forgac, František},
journal={PeerJ Computer Science},
volume={10},
pages={e2465},
year={2024},
month={11},
issn={2376-5992},
doi={10.7717/peerj-cs.2465}
}
```
|
{"datasets": ["oscar-corpus/OSCAR-2109"], "language": ["sk"], "library_name": "transformers", "license": "mit", "pipeline_tag": "fill-mask", "tags": ["slovak-language-model"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING"
] | 42,187 |
iiBLACKii/Gujarati_VDB_Fine_Tune
|
iiBLACKii
|
automatic-speech-recognition
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-to-text",
"transcription",
"Gujarati",
"fine-tuned",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 2024-11-25T13:27:05Z |
2024-11-26T07:37:35+00:00
| 13 | 0 |
---
library_name: transformers
tags:
- speech-to-text
- transcription
- Gujarati
- whisper
- fine-tuned
---
# Whisper Small - Fine-tuned for Gujarati Speech-to-Text
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) for Gujarati transcription and translation tasks. It is capable of converting Gujarati speech into text, and since it is based on Whisper, it supports multilingual audio inputs. This fine-tuned model was specifically trained to improve performance on Gujarati speech data.
## Model Details
### Model Description
This model was fine-tuned on Gujarati speech data to improve transcription accuracy for audio recorded in Gujarati. It has been trained to handle diverse speech inputs, including variations in accents, backgrounds, and speech styles.
- **Developed by:** [BLACK]
- **Shared by:** [None]
- **Model type:** Speech-to-Text (Fine-tuned Whisper Model)
- **Language(s):** Gujarati
- **License:** Apache-2.0
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
processor = AutoProcessor.from_pretrained("iiBLACKii/Gujarati_VDB_Fine_Tune")
model = AutoModelForSpeechSeq2Seq.from_pretrained("iiBLACKii/Gujarati_VDB_Fine_Tune")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def preprocess_audio(file_path, sampling_rate=16000):
audio_array, sr = librosa.load(file_path, sr=None)
if sr != sampling_rate:
audio_array = librosa.resample(audio_array, orig_sr=sr, target_sr=sampling_rate)
return audio_array
def transcribe_and_translate_audio(audio_path):
audio_array = preprocess_audio(audio_path)
input_features = processor(audio_array, return_tensors="pt", sampling_rate=16000).input_features
input_features = input_features.to(device)
with torch.no_grad():
predicted_ids = model.generate(input_features, max_length=400, num_beams=5)
transcription_or_translation = processor.batch_decode(predicted_ids, skip_special_tokens=True)
return transcription_or_translation[0]
if __name__ == "__main__":
audio_file_path = "" # .wav file path
print("Transcribing and Translating audio...")
result = transcribe_and_translate_audio(audio_file_path)
print(f"Result: {result}")
```
### Using Base Model (OpenAI)
```python
import torch
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration, AutoConfig
repo_name = "iiBLACKii/Gujarati_VDB_Fine_Tune"
processor = WhisperProcessor.from_pretrained(repo_name)
config = AutoConfig.from_pretrained(repo_name)
model = WhisperForConditionalGeneration.from_pretrained(repo_name, config=config)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def preprocess_audio(file_path, sampling_rate=16000):
audio_array, sr = librosa.load(file_path, sr=None)
if sr != sampling_rate:
audio_array = librosa.resample(audio_array, orig_sr=sr, target_sr=sampling_rate)
return audio_array
def transcribe_audio(audio_path):
audio_array = preprocess_audio(audio_path)
input_features = processor.feature_extractor(
audio_array, sampling_rate=16000, return_tensors="pt"
).input_features
input_features = input_features.to(device)
with torch.no_grad():
predicted_ids = model.generate(
input_features,
max_new_tokens=400,
num_beams=5,
)
transcription = processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)
return transcription[0]
if __name__ == "__main__":
audio_file_path = "" #.wav file path
print("Transcribing audio...")
transcription = transcribe_audio(audio_file_path)
print(f"Transcription: {transcription}")
```
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Whisper Small - Fine-tuned for Gujarati Speech-to-Text
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) for Gujarati transcription and translation tasks. It is capable of converting Gujarati speech into text, and since it is based on Whisper, it supports multilingual audio inputs. This fine-tuned model was specifically trained to improve performance on Gujarati speech data.
## Model Details
### Model Description
This model was fine-tuned on Gujarati speech data to improve transcription accuracy for audio recorded in Gujarati. It has been trained to handle diverse speech inputs, including variations in accents, backgrounds, and speech styles.
- **Developed by:** [BLACK]
- **Shared by:** [None]
- **Model type:** Speech-to-Text (Fine-tuned Whisper Model)
- **Language(s):** Gujarati
- **License:** Apache-2.0
- **Finetuned from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```python
import torch
import librosa
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq
processor = AutoProcessor.from_pretrained("iiBLACKii/Gujarati_VDB_Fine_Tune")
model = AutoModelForSpeechSeq2Seq.from_pretrained("iiBLACKii/Gujarati_VDB_Fine_Tune")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def preprocess_audio(file_path, sampling_rate=16000):
audio_array, sr = librosa.load(file_path, sr=None)
if sr != sampling_rate:
audio_array = librosa.resample(audio_array, orig_sr=sr, target_sr=sampling_rate)
return audio_array
def transcribe_and_translate_audio(audio_path):
audio_array = preprocess_audio(audio_path)
input_features = processor(audio_array, return_tensors="pt", sampling_rate=16000).input_features
input_features = input_features.to(device)
with torch.no_grad():
predicted_ids = model.generate(input_features, max_length=400, num_beams=5)
transcription_or_translation = processor.batch_decode(predicted_ids, skip_special_tokens=True)
return transcription_or_translation[0]
if __name__ == "__main__":
audio_file_path = "" # .wav file path
print("Transcribing and Translating audio...")
result = transcribe_and_translate_audio(audio_file_path)
print(f"Result: {result}")
```
### Using Base Model (OpenAI)
```python
import torch
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration, AutoConfig
repo_name = "iiBLACKii/Gujarati_VDB_Fine_Tune"
processor = WhisperProcessor.from_pretrained(repo_name)
config = AutoConfig.from_pretrained(repo_name)
model = WhisperForConditionalGeneration.from_pretrained(repo_name, config=config)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
def preprocess_audio(file_path, sampling_rate=16000):
audio_array, sr = librosa.load(file_path, sr=None)
if sr != sampling_rate:
audio_array = librosa.resample(audio_array, orig_sr=sr, target_sr=sampling_rate)
return audio_array
def transcribe_audio(audio_path):
audio_array = preprocess_audio(audio_path)
input_features = processor.feature_extractor(
audio_array, sampling_rate=16000, return_tensors="pt"
).input_features
input_features = input_features.to(device)
with torch.no_grad():
predicted_ids = model.generate(
input_features,
max_new_tokens=400,
num_beams=5,
)
transcription = processor.tokenizer.batch_decode(predicted_ids, skip_special_tokens=True)
return transcription[0]
if __name__ == "__main__":
audio_file_path = "" #.wav file path
print("Transcribing audio...")
transcription = transcribe_audio(audio_file_path)
print(f"Transcription: {transcription}")
```
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": ["speech-to-text", "transcription", "Gujarati", "whisper", "fine-tuned"]}
|
task
|
[
"TRANSLATION"
] | 42,188 |
sieu-n/autotrain-t5baseparaphrase-42430108692
|
sieu-n
|
summarization
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:krenerd/autotrain-data-t5baseparaphrase",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-03-21T03:30:31Z |
2023-03-21T03:37:38+00:00
| 33 | 0 |
---
datasets:
- krenerd/autotrain-data-t5baseparaphrase
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 2.6793230772092427
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 42430108692
- CO2 Emissions (in grams): 2.6793
## Validation Metrics
- Loss: 0.072
- Rouge1: 63.306
- Rouge2: 53.109
- RougeL: 62.478
- RougeLsum: 62.252
- Gen Len: 202.325
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/krenerd/autotrain-t5baseparaphrase-42430108692
```
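Alternatively, a minimal `transformers` sketch (assuming this repository's checkpoint id; note the cURL example above points at the original `krenerd` namespace):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sieu-n/autotrain-t5baseparaphrase-42430108692")
print(summarizer("I love AutoTrain")[0]["summary_text"])
```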
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 42430108692
- CO2 Emissions (in grams): 2.6793
## Validation Metrics
- Loss: 0.072
- Rouge1: 63.306
- Rouge2: 53.109
- RougeL: 62.478
- RougeLsum: 62.252
- Gen Len: 202.325
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/krenerd/autotrain-t5baseparaphrase-42430108692
```
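Alternatively, a minimal `transformers` sketch (assuming this repository's checkpoint id; note the cURL example above points at the original `krenerd` namespace):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sieu-n/autotrain-t5baseparaphrase-42430108692")
print(summarizer("I love AutoTrain")[0]["summary_text"])
```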
|
{"datasets": ["krenerd/autotrain-data-t5baseparaphrase"], "language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 2.6793230772092427}}
|
task
|
[
"SUMMARIZATION"
] | 42,189 |
mapsoriano/roberta-tagalog-base-philippine-elections-2016-2022-hate-speech
|
mapsoriano
|
text-classification
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"tagalog",
"filipino",
"twitter",
"tl",
"en",
"dataset:hate_speech_filipino",
"dataset:mapsoriano/2016_2022_hate_speech_filipino",
"base_model:jcblaise/roberta-tagalog-base",
"base_model:finetune:jcblaise/roberta-tagalog-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-24T02:00:31Z |
2024-07-13T07:16:11+00:00
| 164 | 1 |
---
base_model: jcblaise/roberta-tagalog-base
datasets:
- hate_speech_filipino
- mapsoriano/2016_2022_hate_speech_filipino
language:
- tl
- en
license: cc-by-sa-4.0
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_trainer
- tagalog
- filipino
- twitter
model-index:
- name: roberta-tagalog-base-philippine-elections-2016-2022-hate-speech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-tagalog-base-philippine-elections-2016-2022-hate-speech
This model is a fine-tuned version of [jcblaise/roberta-tagalog-base](https://huggingface.co/jcblaise/roberta-tagalog-base) for the task of Text Classification, classifying hate and non-hate tweets.
The model was fine-tuned on a combined dataset [mapsoriano/2016_2022_hate_speech_filipino](https://huggingface.co/datasets/mapsoriano/2016_2022_hate_speech_filipino) consisting of
the [hate_speech_filipino](https://huggingface.co/datasets/hate_speech_filipino) dataset and a newly crawled 2022 Philippine Presidential Elections-related Tweets Hate Speech Dataset.
It achieves the following results on the evaluation (validation) set:
- Loss: 0.3574
- Accuracy: 0.8743
It achieves the following results on the test set:
- Accuracy: 0.8783
- Precision: 0.8563
- Recall: 0.9077
- F1: 0.8813
<!-- ## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed -->
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
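For reproducibility, the hyperparameters above map onto `TrainingArguments` roughly as follows (a minimal sketch; the original training script is not published, so everything beyond the listed values is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-tagalog-base-philippine-elections-2016-2022-hate-speech",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",      # the default, listed here for clarity
    num_train_epochs=2,
    evaluation_strategy="epoch",     # assumed: validation loss is reported once per epoch
)
```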
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3423 | 1.0 | 1361 | 0.3167 | 0.8693 |
| 0.2194 | 2.0 | 2722 | 0.3574 | 0.8743 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
### Citation Information
**Research Title:** Application of BERT in Detecting Online Hate
**Published:** 2023
**Authors:**
- Castro, D.
- Dizon, L. J.
- Sarip, A. J.
- Soriano, M. A.
Feel free to connect via [LinkedIn](https://www.linkedin.com/in/map-soriano/) for further information on this model or on the study in which it was used.
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-tagalog-base-philippine-elections-2016-2022-hate-speech
This model is a fine-tuned version of [jcblaise/roberta-tagalog-base](https://huggingface.co/jcblaise/roberta-tagalog-base) for the task of Text Classification, classifying hate and non-hate tweets.
The model was fine-tuned on a combined dataset [mapsoriano/2016_2022_hate_speech_filipino](https://huggingface.co/datasets/mapsoriano/2016_2022_hate_speech_filipino) consisting of
the [hate_speech_filipino](https://huggingface.co/datasets/hate_speech_filipino) dataset and a newly crawled 2022 Philippine Presidential Elections-related Tweets Hate Speech Dataset.
It achieves the following results on the evaluation (validation) set:
- Loss: 0.3574
- Accuracy: 0.8743
It achieves the following results on the test set:
- Accuracy: 0.8783
- Precision: 0.8563
- Recall: 0.9077
- F1: 0.8813
Feel free to connect via [LinkedIn](https://www.linkedin.com/in/map-soriano/) for further information on this model or on the study in which it was used.
<!-- ## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed -->
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3423 | 1.0 | 1361 | 0.3167 | 0.8693 |
| 0.2194 | 2.0 | 2722 | 0.3574 | 0.8743 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
### Citation Information
**Research Title:** Application of BERT in Detecting Online Hate
**Published:** 2023
**Authors:**
- Castro, D.
- Dizon, L. J.
- Sarip, A. J.
- Soriano, M. A.
Feel free to connect via [LinkedIn](https://www.linkedin.com/in/map-soriano/) for further information on this model or on the study in which it was used.
|
{"base_model": "jcblaise/roberta-tagalog-base", "datasets": ["hate_speech_filipino", "mapsoriano/2016_2022_hate_speech_filipino"], "language": ["tl", "en"], "license": "cc-by-sa-4.0", "metrics": ["accuracy", "precision", "recall", "f1"], "tags": ["generated_from_trainer", "tagalog", "filipino", "twitter"], "model-index": [{"name": "roberta-tagalog-base-philippine-elections-2016-2022-hate-speech", "results": []}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,190 |
lijingxin/distilbert-base-uncased-distilled-clinc
|
lijingxin
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-14T10:33:00Z |
2022-03-14T10:42:34+00:00
| 114 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9470967741935484
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Accuracy: 0.9471
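As a usage reference (not part of the auto-generated card), a minimal intent-classification sketch with this checkpoint:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "lijingxin/distilbert-base-uncased-distilled-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("please set a timer for ten minutes", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # predicted CLINC intent label
```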
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3365 | 1.0 | 318 | 1.6602 | 0.7361 |
| 1.2799 | 2.0 | 636 | 0.8378 | 0.8548 |
| 0.6739 | 3.0 | 954 | 0.4872 | 0.9132 |
| 0.4143 | 4.0 | 1272 | 0.3640 | 0.9352 |
| 0.3051 | 5.0 | 1590 | 0.3168 | 0.9406 |
| 0.2585 | 6.0 | 1908 | 0.2970 | 0.9442 |
| 0.235 | 7.0 | 2226 | 0.2876 | 0.9458 |
| 0.2236 | 8.0 | 2544 | 0.2824 | 0.9458 |
| 0.2168 | 9.0 | 2862 | 0.2794 | 0.9468 |
| 0.2138 | 10.0 | 3180 | 0.2782 | 0.9471 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Accuracy: 0.9471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3365 | 1.0 | 318 | 1.6602 | 0.7361 |
| 1.2799 | 2.0 | 636 | 0.8378 | 0.8548 |
| 0.6739 | 3.0 | 954 | 0.4872 | 0.9132 |
| 0.4143 | 4.0 | 1272 | 0.3640 | 0.9352 |
| 0.3051 | 5.0 | 1590 | 0.3168 | 0.9406 |
| 0.2585 | 6.0 | 1908 | 0.2970 | 0.9442 |
| 0.235 | 7.0 | 2226 | 0.2876 | 0.9458 |
| 0.2236 | 8.0 | 2544 | 0.2824 | 0.9458 |
| 0.2168 | 9.0 | 2862 | 0.2794 | 0.9468 |
| 0.2138 | 10.0 | 3180 | 0.2782 | 0.9471 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9470967741935484, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,191 |
brianknowsai/Brian-Llama-3.2-3B
|
brianknowsai
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Web3",
"Domain-Specific",
"NLP",
"Intent Recognition",
"Solidity",
"en",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-22T22:09:08Z |
2024-12-23T13:07:33+00:00
| 1,622 | 1 |
---
base_model:
- meta-llama/Llama-3.2-3B
language:
- en
library_name: transformers
license: llama3.2
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- Web3
- Domain-Specific
- NLP
- Intent Recognition
- Solidity
---
# Model Card for Brian-3B
<img src="brian_llama2_logo.png" alt="Brian Logo" width="600"/>
## Model Details
### Model Description
The **Brian-3B** model is a domain-specific language model tailored for Web3 applications. Built upon Meta’s Llama-3.2-3B, it is optimized for tasks involving natural language understanding and intent recognition in the blockchain ecosystem.
This includes tasks such as transaction intent parsing, Solidity code generation, and question answering on Web3-related topics.
- **Developed by:** The Brian Team
- **Funded by:** The Brian Team
- **Shared by:** The Brian Team
- **Model type:** Transformer-based autoregressive language model
- **Language(s):** English
- **License:** Llama 3.2 Community License
- **Finetuned from:** meta-llama/Llama-3.2-3B
**Please note:** this is only the first in a series of training phases; further phases are planned before the model can be used in production (estimated Q1 2025) to power our Intent Recognition Engine.
The Brian team is calling on all partners interested in the space: developers, projects, and investors who might be involved in future phases of the model training.
Join our [TG Dev chat](https://t.me/+NJjmAm2Y9p85Mzc0) if you have any questions or want to contribute to the model training.
### Model Sources
- **Repository:** [Hugging Face Repository](https://huggingface.co/brianknowsai/Brian-Llama-3.2-3B)
- **Demo:** This model will be integrated soon to power https://www.brianknows.org/
- **Paper:** Coming soon
## Uses
### Downstream Use
The model is specifically designed to be fine-tuned for downstream tasks such as:
- **Transaction intent recognition**: Parsing natural language into JSON for transaction data.
- **Solidity code generation**: Creating smart contracts based on user prompts.
- **Web3 question answering**: Answering protocol-specific queries or extracting blockchain-related data.
In the coming months, our team will release these task-specific models.
Anyone in the web3 space can fine-tune the model for other downstream tasks or improve its knowledge of specific ecosystems (e.g., Solana or Farcaster).
### Out-of-Scope Use
- Tasks outside the Web3 domain.
- Generating harmful, unethical, or misleading content.
## Bias, Risks, and Limitations
### Recommendations
While the model shows excellent performance in Web3-related domains, users should validate outputs for critical tasks like smart contract generation or
transaction execution to avoid errors. Fine-tuning is recommended for domain-specific applications.
## How to Get Started with the Model
To load and use the Brian-3B model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("brianknowsai/Brian-Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("brianknowsai/Brian-Llama-3.2-3B")
# Move the model to the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
input_text = "A web3 bridge is "
# Tokenize the input text
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
# Generate output (this is typical for causal language models)
with torch.no_grad():
    outputs = model.generate(input_ids, max_length=80, num_return_sequences=1)
# Decode the generated tokens to text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Print the result
print(f"Input: {input_text}")
print(f"Generated Brian text: {generated_text}")
| null |
Non_BioNLP
|
# Model Card for Brian-3B
<img src="brian_llama2_logo.png" alt="Brian Logo" width="600"/>
## Model Details
### Model Description
The **Brian-3B** model is a domain-specific language model tailored for Web3 applications. Built upon Meta’s Llama-3.2-3B, it is optimized for tasks involving natural language understanding and intent recognition in the blockchain ecosystem.
This includes tasks such as transaction intent parsing, Solidity code generation, and question answering on Web3-related topics.
- **Developed by:** The Brian Team
- **Funded by:** The Brian Team
- **Shared by:** The Brian Team
- **Model type:** Transformer-based autoregressive language model
- **Language(s):** English
- **License:** Llama 3.2 Community License
- **Finetuned from:** meta-llama/Llama-3.2-3B
**Please note:** this is only the first in a series of training phases; further phases are planned before the model can be used in production (estimated Q1 2025) to power our Intent Recognition Engine.
The Brian team is calling on all partners interested in the space: developers, projects, and investors who might be involved in future phases of the model training.
Join our [TG Dev chat](https://t.me/+NJjmAm2Y9p85Mzc0) if you have any questions or want to contribute to the model training.
### Model Sources
- **Repository:** [Hugging Face Repository](https://huggingface.co/brianknowsai/Brian-Llama-3.2-3B)
- **Demo:** This model will be integrated soon to power https://www.brianknows.org/
- **Paper:** Coming soon
## Uses
### Downstream Use
The model is specifically designed to be fine-tuned for downstream tasks such as:
- **Transaction intent recognition**: Parsing natural language into JSON for transaction data.
- **Solidity code generation**: Creating smart contracts based on user prompts.
- **Web3 question answering**: Answering protocol-specific queries or extracting blockchain-related data.
In the coming months, our team will release these task-specific models.
Anyone in the web3 space can fine-tune the model for other downstream tasks or improve its knowledge of specific ecosystems (e.g., Solana or Farcaster).
### Out-of-Scope Use
- Tasks outside the Web3 domain.
- Generating harmful, unethical, or misleading content.
## Bias, Risks, and Limitations
### Recommendations
While the model shows excellent performance in Web3-related domains, users should validate outputs for critical tasks like smart contract generation or
transaction execution to avoid errors. Fine-tuning is recommended for domain-specific applications.
## How to Get Started with the Model
To load and use the Brian-3B model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("brianknowsai/Brian-Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("brianknowsai/Brian-Llama-3.2-3B")
# Move the model to the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
input_text = "A web3 bridge is "
# Tokenize the input text
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
# Generate output (this is typical for causal language models)
with torch.no_grad():
    outputs = model.generate(input_ids, max_length=80, num_return_sequences=1)
# Decode the generated tokens to text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Print the result
print(f"Input: {input_text}")
print(f"Generated Brian text: {generated_text}")
|
{"base_model": ["meta-llama/Llama-3.2-3B"], "language": ["en"], "library_name": "transformers", "license": "llama3.2", "metrics": ["perplexity"], "pipeline_tag": "text-generation", "tags": ["Web3", "Domain-Specific", "NLP", "Intent Recognition", "Solidity"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,192 |
Almheiri/Llama-3.2-1B-Instruct-QLORA_INT4_EO8
|
Almheiri
|
text-generation
|
[
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | 2024-11-25T18:34:15Z |
2024-11-25T19:33:28+00:00
| 13 | 0 |
---
base_model:
- meta-llama/Llama-3.2-1B
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
\ for use, reproduction, distribution and modification of the Llama Materials set\
\ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\
\ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\n“Licensee” or “you” means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\
\ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\
\ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\
\ below or by using or distributing any portion or element of the Llama Materials,\
\ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\
\ copy, create derivative works of, and make modifications to the Llama Materials.\
\ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\
\ Materials (or any derivative works thereof), or a product or service (including\
\ another AI model) that contains any of them, you shall (A) provide a copy of this\
\ Agreement with any such Llama Materials; and (B) prominently display “Built with\
\ Llama” on a related website, user interface, blogpost, about page, or product\
\ documentation. If you use the Llama Materials or any outputs or results of the\
\ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\
\ which is distributed or made available, you shall also include “Llama” at the\
\ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\
\ derivative works thereof, from a Licensee as part of an integrated end user product,\
\ then Section 2 of this Agreement will not apply to you. \niii. You must retain\
\ in all copies of the Llama Materials that you distribute the following attribution\
\ notice within a “Notice” text file distributed as a part of such copies: “Llama\
\ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\
\ version release date, the monthly active users of the products or services made\
\ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\
\ monthly active users in the preceding calendar month, you must request a license\
\ from Meta, which Meta may grant to you in its sole discretion, and you are not\
\ authorized to exercise any of the rights under this Agreement unless or until\
\ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\
\ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\
\ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\
\ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\
\ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\
\ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\
\ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\
\ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\
\ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\
\ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\
\ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\
\ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\
a. No trademark licenses are granted under this Agreement, and in connection with\
\ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\
\ by or associated with the other or any of its affiliates, except as required\
\ for reasonable and customary use in describing and redistributing the Llama Materials\
\ or as set forth in this Section 5(a). Meta hereby grants you a license to use\
\ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\
\ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\
\ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\
\ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\
\ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\
\ respect to any derivative works and modifications of the Llama Materials that\
\ are made by you, as between you and Meta, you are and will be the owner of such\
\ derivative works and modifications.\nc. If you institute litigation or other proceedings\
\ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\
\ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\
\ of any of the foregoing, constitutes infringement of intellectual property or\
\ other rights owned or licensable by you, then any licenses granted to you under\
\ this Agreement shall terminate as of the date such litigation or claim is filed\
\ or instituted. You will indemnify and hold harmless Meta from and against any\
\ claim by any third party arising out of or related to your use or distribution\
\ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\
\ commence upon your acceptance of this Agreement or access to the Llama Materials\
\ and will continue in full force and effect until terminated in accordance with\
\ the terms and conditions herein. Meta may terminate this Agreement if you are\
\ in breach of any term or condition of this Agreement. Upon termination of this\
\ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\
\ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\
\ Jurisdiction. This Agreement will be governed and construed under the laws of\
\ the State of California without regard to choice of law principles, and the UN\
\ Convention on Contracts for the International Sale of Goods does not apply to\
\ this Agreement. The courts of California shall have exclusive jurisdiction of\
\ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\
\ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 3.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\
\ information about individuals, including information about individuals’ identity,\
\ health, or demographic information, unless you have obtained the right to do so\
\ in accordance with applicable law\n 5. Engage in or facilitate any action or\
\ generate any content that infringes, misappropriates, or otherwise violates any\
\ third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 6. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n 7. Engage in any action, or\
\ facilitate any action, to intentionally circumvent or remove usage restrictions\
\ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\
\ in, promote, incite, facilitate, or assist in the planning or development of activities\
\ that present a risk of death or bodily harm to individuals, including use of Llama\
\ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\
\ applications, espionage, use for materials or activities that are subject to the\
\ International Traffic Arms Regulations (ITAR) maintained by the United States\
\ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\
\ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\
\ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\
\ substances\n 11. Operation of critical infrastructure, transportation technologies,\
\ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\
\ and eating disorders\n 13. Any content intended to incite or promote violence,\
\ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\
\ or mislead others, including use of Llama 3.2 related to the following:\n 14.\
\ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\
\ 15. Generating, promoting, or furthering defamatory content, including the\
\ creation of defamatory statements, images, or other content\n 16. Generating,\
\ promoting, or further distributing spam\n 17. Impersonating another individual\
\ without consent, authorization, or legal right\n 18. Representing that the\
\ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\
\ false online engagement, including fake reviews and other means of fake online\
\ engagement \n4. Fail to appropriately disclose to end users any known dangers\
\ of your AI system 5. Interact with third party tools, models, or software designed\
\ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\
\ that the outputs of such tools, models, or software are associated with Meta or\
\ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\
\ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\
\ are not being granted to you if you are an individual domiciled in, or a company\
\ with a principal place of business in, the European Union. This restriction does\
\ not apply to end users of a product or service that incorporates any such multimodal\
\ models.\n\nPlease report any violation of this Policy, software “bug,” or other\
\ problems that could lead to a violation of this Policy through one of the following\
\ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\
\ 3.2: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Oct 24, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
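To make the logit-distillation step concrete, here is a minimal sketch of a token-level distillation objective (illustrative only; the temperature and loss weighting below are assumptions, not Meta's published recipe):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence to the teacher's
    softened token-level distribution."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab), labels.view(-1))
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients for the temperature
    return alpha * ce + (1 - alpha) * kd
```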
**Data Freshness:** The pretraining data has a cutoff of December 2023.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with [PyTorch’s ExecuTorch](https://github.com/pytorch/executorch) inference framework and the Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized using a 4-bit groupwise scheme (group size of 32) for weights and 8-bit per-token dynamic quantization for activations.
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
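To illustrate the weight scheme concretely, a minimal reference sketch of symmetric 4-bit groupwise quantization with group size 32 (illustrative only; production deployments use optimized ExecuTorch kernels, and the function names here are hypothetical):

```python
import torch

def quantize_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric 4-bit groupwise quantization of a 2-D weight matrix."""
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One scale per group, so the largest magnitude maps into the int4 range.
    scales = (groups.abs().amax(dim=-1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return (q.to(torch.float32) * scales).reshape(q.shape[0], -1)

w = torch.randn(64, 128)
q, s = quantize_4bit_groupwise(w)
print((w - dequantize(q, s)).abs().max())  # quantization error bounded by scale/2
```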
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full-precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to the QLoRA of Dettmers et al. (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
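A minimal structural sketch of the frozen-backbone-plus-BF16-LoRA setup described above (illustrative only; the rank, scaling, and class name are assumptions, not Meta's implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a (conceptually quantized) linear layer with a BF16 LoRA adapter."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the backbone stays frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False, dtype=torch.bfloat16)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False, dtype=torch.bfloat16)
        nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        delta = self.lora_b(self.lora_a(x.to(torch.bfloat16)))
        return y + delta.to(y.dtype) * self.scaling
```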
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
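The core idea behind the rotations can be shown in a few lines: multiplying weights and activations by the same orthogonal matrix leaves a linear layer's output unchanged while redistributing outliers before quantization (a minimal sketch; real SpinQuant learns the rotations and fuses them into adjacent layers):

```python
import torch

d = 8
w = torch.randn(16, d)                        # a weight matrix
x = torch.randn(4, d)                         # activations entering the layer
rot, _ = torch.linalg.qr(torch.randn(d, d))   # a random orthogonal matrix

y = x @ w.T
y_rot = (x @ rot) @ (w @ rot).T               # (xR)(wR)^T = x R R^T w^T = x w^T
print(torch.allclose(y, y_rot, atol=1e-5))    # True: the function is preserved
```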
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the Arm CPU as the backend, on an Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64.
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is the inverse of TTFT (aka 1/TTFT) in tokens/second. Higher is better*
- *Model size \- how big the model is, measured by the size of its PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy to managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
| null |
Non_BioNLP
|
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model Developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count | Knowledge cutoff |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Llama 3.2 (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 128k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
| Llama 3.2 Quantized (text only) | A new mix of publicly available online data. | 1B (1.23B) | Multilingual Text | Multilingual Text and code | 8k | Yes | Yes | Up to 9T tokens | December 2023 |
| | | 3B (3.21B) | Multilingual Text | Multilingual Text and code | | | | | |
**Supported Languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 Model Family:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Oct 24, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Feedback:** Instructions on how to provide feedback or comments on the model can be found in the Llama Models [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases:** Llama 3.2 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat and agentic applications like knowledge retrieval and summarization, mobile AI powered writing assistants and query and prompt rewriting. Pretrained models can be adapted for a variety of additional natural language generation tasks. Similarly, quantized models can be adapted for a variety of on-device use-cases with limited compute resources.
**Out of Scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.2 Community License. Use in languages beyond those explicitly referenced as supported in this model card.
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU cluster, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Training utilized a cumulative total of **916k** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model, and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
**Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **240** tons CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| | Training Time (GPU hours) | Logit Generation Time (GPU Hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | ----- | :---: | :---: | :---: |
| Llama 3.2 1B | 370k | \- | 700 | 107 | 0 |
| Llama 3.2 3B | 460k | \- | 700 | 133 | 0 |
| Llama 3.2 1B SpinQuant | 1.7 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 3B SpinQuant | 2.4 | 0 | 700 | *Negligible*\*\* | 0 |
| Llama 3.2 1B QLora | 1.3k | 0 | 700 | 0.381 | 0 |
| Llama 3.2 3B QLora | 1.6k | 0 | 700 | 0.461 | 0 |
| Total | 833k | 86k | | 240 | 0 |
\*\* The location-based CO2e emissions of Llama 3.2 1B SpinQuant and Llama 3.2 3B SpinQuant are less than 0.001 metric tonnes each. This is due to the minimal training GPU hours that are required.
The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
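As a rough, back-of-the-envelope illustration of that methodology, the location-based figures can be approximated from GPU-hours and per-device power; the grid carbon intensity below is an assumed illustrative value, not a number taken from this card.

```python
def location_based_emissions_tons(gpu_hours: float,
                                  device_power_w: float = 700.0,
                                  carbon_intensity_kg_per_kwh: float = 0.4) -> float:
    """Rough CO2eq estimate: energy (kWh) times an assumed grid carbon intensity.

    carbon_intensity_kg_per_kwh is illustrative; the linked methodology also
    accounts for datacenter power usage effectiveness (PUE).
    """
    energy_kwh = gpu_hours * device_power_w / 1000.0           # W -> kW, times hours
    return energy_kwh * carbon_intensity_kg_per_kwh / 1000.0   # kg -> metric tons

# Llama 3.2 1B: 370k H100 GPU hours at 700 W peak power
print(round(location_based_emissions_tons(370_000), 1))  # ~103.6, in the ballpark of the reported 107
```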
## Training Data
**Overview:** Llama 3.2 was pretrained on up to 9 trillion tokens of data from publicly available sources. For the 1B and 3B Llama 3.2 models, we incorporated logits from the Llama 3.1 8B and 70B models into the pretraining stage of the model development, where outputs (logits) from these larger models were used as token-level targets. Knowledge distillation was used after pruning to recover performance. In post-training we used a similar recipe as Llama 3.1 and produced final chat models by doing several rounds of alignment on top of the pre-trained model. Each round involved Supervised Fine-Tuning (SFT), Rejection Sampling (RS), and Direct Preference Optimization (DPO).
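The card does not publish the exact distillation objective, but using "outputs (logits) from these larger models ... as token-level targets" is commonly implemented as a temperature-scaled KL divergence between teacher and student token distributions. A minimal sketch of that generic recipe (not Meta's actual training code):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Per-token KL(teacher || student) over the vocabulary.

    Shapes: (batch, seq_len, vocab_size). Temperature softens both
    distributions; the T**2 factor keeps gradient scale comparable
    across temperatures.
    """
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature**2

# Toy example: batch=2, seq_len=4, vocab=32
student = torch.randn(2, 4, 32, requires_grad=True)
teacher = torch.randn(2, 4, 32)
distillation_loss(student, teacher).backward()
```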
**Data Freshness:** The pretraining data has a cutoff of December 2023\.
## Quantization
### Quantization Scheme
We designed the current quantization scheme with PyTorch's [ExecuTorch](https://github.com/pytorch/executorch) inference framework and the Arm CPU backend in mind, taking into account metrics including model quality, prefill/decoding speed, and memory footprint. Our quantization scheme involves three parts:
- All linear layers in all transformer blocks are quantized to a 4-bit groupwise scheme (with a group size of 32) for weights and 8-bit per-token dynamic quantization for activations (see the sketch after this list).
- The classification layer is quantized to 8-bit per-channel for weights and 8-bit per-token dynamic quantization for activations.
- Similar to the classification layer, 8-bit per-channel quantization is used for the embedding layer.
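For intuition, here is a minimal sketch of the symmetric 4-bit groupwise weight quantization the first bullet describes (group size 32). It illustrates the general technique only, not ExecuTorch's actual kernels or packing format:

```python
import torch

def quantize_4bit_groupwise(w: torch.Tensor, group_size: int = 32):
    """Symmetric int4 quantization with one scale per group of `group_size` weights.

    w: (out_features, in_features), in_features divisible by group_size.
    Returns int8 storage holding values in [-8, 7] and per-group scales.
    """
    out_f, in_f = w.shape
    groups = w.reshape(out_f, in_f // group_size, group_size)
    scales = (groups.abs().amax(dim=-1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor, shape) -> torch.Tensor:
    return (q.float() * scales).reshape(shape)

w = torch.randn(8, 64)
q, s = quantize_4bit_groupwise(w)
w_hat = dequantize(q, s, w.shape)
print((w - w_hat).abs().max())  # small reconstruction error
```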
### Quantization-Aware Training and LoRA
The quantization-aware training (QAT) with low-rank adaptation (LoRA) models went through only post-training stages, using the same data as the full precision models. To initialize QAT, we utilize BF16 Llama 3.2 model checkpoints obtained after supervised fine-tuning (SFT) and perform an additional full round of SFT training with QAT. We then freeze the backbone of the QAT model and perform another round of SFT with LoRA adaptors applied to all layers within the transformer block. Meanwhile, the LoRA adaptors' weights and activations are maintained in BF16. Because our approach is similar to QLoRA of Dettmers et al., (2023) (i.e., quantization followed by LoRA adapters), we refer to this method as QLoRA. Finally, we fine-tune the resulting model (both backbone and LoRA adaptors) using direct preference optimization (DPO).
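The QAT stage works by simulating quantization in the forward pass while letting gradients flow through the rounding as if it were the identity (a straight-through estimator). A minimal sketch of that mechanism, independent of the actual training stack used here:

```python
import torch

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Quantize-dequantize in the forward pass; identity gradient in the
    backward pass (straight-through estimator). Symmetric per-tensor for brevity."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Value equals w_q, but the gradient of the expression w.r.t. w is 1.
    return w + (w_q - w).detach()

# During QAT a layer computes with fake-quantized weights:
w = torch.randn(16, 16, requires_grad=True)
x = torch.randn(4, 16)
y = x @ fake_quantize(w).t()
y.sum().backward()  # gradients reach w despite the rounding
```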
### SpinQuant
[SpinQuant](https://arxiv.org/abs/2405.16406) was applied, together with generative post-training quantization (GPTQ). For the SpinQuant rotation matrix fine-tuning, we optimized for 100 iterations, using 800 samples with sequence-length 2048 from the WikiText 2 dataset. For GPTQ, we used 128 samples from the same dataset with the same sequence-length.
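SpinQuant's core idea is that a rotation R applied to the weights, together with the counter-rotation R^T applied to the incoming activations, leaves a linear layer's function unchanged while reshaping weight and activation distributions to be friendlier to quantization. A toy demonstration of that invariance with a random orthogonal matrix (the paper learns its rotations; this sketch does not):

```python
import torch

torch.manual_seed(0)
d = 64
W = torch.randn(128, d)        # a linear layer computing y = W x
x = torch.randn(d)

# Random orthogonal matrix via QR decomposition
R, _ = torch.linalg.qr(torch.randn(d, d))

y_ref = W @ x
y_rot = (W @ R) @ (R.t() @ x)  # rotate weights, counter-rotate activations
print(torch.allclose(y_ref, y_rot, atol=1e-4))  # True: the function is unchanged
```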
## Benchmarks \- English Text
In this section, we report the results for Llama 3.2 models on standard automatic benchmarks. For all these evaluations, we used our internal evaluations library.
### Base Pretrained Models
| Category | Benchmark | \# Shots | Metric | Llama 3.2 1B | Llama 3.2 3B | Llama 3.1 8B |
| ----- | ----- | :---: | :---: | :---: | :---: | :---: |
| General | MMLU | 5 | macro\_avg/acc\_char | 32.2 | 58 | 66.7 |
| | AGIEval English | 3-5 | average/acc\_char | 23.3 | 39.2 | 47.8 |
| | ARC-Challenge | 25 | acc\_char | 32.8 | 69.1 | 79.7 |
| Reading comprehension | SQuAD | 1 | em | 49.2 | 67.7 | 77 |
| | QuAC (F1) | 1 | f1 | 37.9 | 42.9 | 44.9 |
| | DROP (F1) | 3 | f1 | 28.0 | 45.2 | 59.5 |
| Long Context | Needle in Haystack | 0 | em | 96.8 | 1 | 1 |
### Instruction Tuned Models
| Capability | | Benchmark | \# Shots | Metric | Llama 3.2 1B bf16 | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B bf16 | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | ----- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | | MMLU | 5 | macro\_avg/acc | 49.3 | 43.3 | 47.3 | 49.0 | 63.4 | 60.5 | 62 | 62.4 | 69.4 |
| Re-writing | | Open-rewrite eval | 0 | micro\_avg/rougeL | 41.6 | 39.2 | 40.9 | 41.2 | 40.1 | 40.3 | 40.8 | 40.7 | 40.9 |
| Summarization | | TLDR9+ (test) | 1 | rougeL | 16.8 | 14.9 | 16.7 | 16.8 | 19.0 | 19.1 | 19.2 | 19.1 | 17.2 |
| Instruction following | | IFEval | 0 | Avg(Prompt/Instruction acc Loose/Strict) | 59.5 | 51.5 | 58.4 | 55.6 | 77.4 | 73.9 | 73.5 | 75.9 | 80.4 |
| Math | | GSM8K (CoT) | 8 | em\_maj1@1 | 44.4 | 33.1 | 40.6 | 46.5 | 77.7 | 72.9 | 75.7 | 77.9 | 84.5 |
| | | MATH (CoT) | 0 | final\_em | 30.6 | 20.5 | 25.3 | 31.0 | 48.0 | 44.2 | 45.3 | 49.2 | 51.9 |
| Reasoning | | ARC-C | 0 | acc | 59.4 | 54.3 | 57 | 60.7 | 78.6 | 75.6 | 77.6 | 77.6 | 83.4 |
| | | GPQA | 0 | acc | 27.2 | 25.9 | 26.3 | 25.9 | 32.8 | 32.8 | 31.7 | 33.9 | 32.8 |
| | | Hellaswag | 0 | acc | 41.2 | 38.1 | 41.3 | 41.5 | 69.8 | 66.3 | 68 | 66.3 | 78.7 |
| Tool Use | | BFCL V2 | 0 | acc | 25.7 | 14.3 | 15.9 | 23.7 | 67.0 | 53.4 | 60.1 | 63.5 | 67.1 |
| | | Nexus | 0 | macro\_avg/acc | 13.5 | 5.2 | 9.6 | 12.5 | 34.3 | 32.4 | 31.5 | 30.1 | 38.5 |
| Long Context | | InfiniteBench/En.QA | 0 | longbook\_qa/f1 | 20.3 | N/A | N/A | N/A | 19.8 | N/A | N/A | N/A | 27.3 |
| | | InfiniteBench/En.MC | 0 | longbook\_choice/acc | 38.0 | N/A | N/A | N/A | 63.3 | N/A | N/A | N/A | 72.2 |
| | | NIH/Multi-needle | 0 | recall | 75.0 | N/A | N/A | N/A | 84.7 | N/A | N/A | N/A | 98.8 |
| Multilingual | | MGSM (CoT) | 0 | em | 24.5 | 13.7 | 18.2 | 24.4 | 58.2 | 48.9 | 54.3 | 56.8 | 68.9 |
\*\*for comparison purposes only. Model not released.
### Multilingual Benchmarks
| Category | Benchmark | Language | Llama 3.2 1B | Llama 3.2 1B Vanilla PTQ\*\* | Llama 3.2 1B Spin Quant | Llama 3.2 1B QLoRA | Llama 3.2 3B | Llama 3.2 3B Vanilla PTQ\*\* | Llama 3.2 3B Spin Quant | Llama 3.2 3B QLoRA | Llama 3.1 8B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| General | MMLU (5-shot, macro_avg/acc) | Portuguese | 39.8 | 34.9 | 38.9 | 40.2 | 54.5 | 50.9 | 53.3 | 53.4 | 62.1 |
| | | Spanish | 41.5 | 36.0 | 39.8 | 41.8 | 55.1 | 51.9 | 53.6 | 53.6 | 62.5 |
| | | Italian | 39.8 | 34.9 | 38.1 | 40.6 | 53.8 | 49.9 | 52.1 | 51.7 | 61.6 |
| | | German | 39.2 | 34.9 | 37.5 | 39.6 | 53.3 | 50.0 | 52.2 | 51.3 | 60.6 |
| | | French | 40.5 | 34.8 | 39.2 | 40.8 | 54.6 | 51.2 | 53.3 | 53.3 | 62.3 |
| | | Hindi | 33.5 | 30.0 | 32.1 | 34.0 | 43.3 | 40.4 | 42.0 | 42.1 | 50.9 |
| | | Thai | 34.7 | 31.2 | 32.4 | 34.9 | 44.5 | 41.3 | 44.0 | 42.2 | 50.3 |
\*\*for comparison purposes only. Model not released.
## Inference time
In the table below, we compare the performance metrics of different quantization methods (SpinQuant and QAT \+ LoRA) with the BF16 baseline. The evaluation was done using the [ExecuTorch](https://github.com/pytorch/executorch) framework as the inference engine, with the ARM CPU backend, on an Android OnePlus 12 device.
| Category | Decode (tokens/sec) | Time-to-first-token (sec) | Prefill (tokens/sec) | Model size (PTE file size in MB) | Memory size (RSS in MB) |
| :---- | ----- | ----- | ----- | ----- | ----- |
| 1B BF16 (baseline) | 19.2 | 1.0 | 60.3 | 2358 | 3,185 |
| 1B SpinQuant | 50.2 (2.6x) | 0.3 (-76.9%) | 260.5 (4.3x) | 1083 (-54.1%) | 1,921 (-39.7%) |
| 1B QLoRA | 45.8 (2.4x) | 0.3 (-76.0%) | 252.0 (4.2x) | 1127 (-52.2%) | 2,255 (-29.2%) |
| 3B BF16 (baseline) | 7.6 | 3.0 | 21.2 | 6129 | 7,419 |
| 3B SpinQuant | 19.7 (2.6x) | 0.7 (-76.4%) | 89.7 (4.2x) | 2435 (-60.3%) | 3,726 (-49.8%) |
| 3B QLoRA | 18.5 (2.4x) | 0.7 (-76.1%) | 88.8 (4.2x) | 2529 (-58.7%) | 4,060 (-45.3%) |
(\*) The performance measurement is done using an adb binary-based approach.
(\*\*) It is measured on an Android OnePlus 12 device.
(\*\*\*) Time-to-first-token (TTFT) is measured with prompt length=64
*Footnote:*
- *Decode (tokens/second) is for how quickly it keeps generating. Higher is better.*
- *Time-to-first-token (TTFT for shorthand) is for how fast it generates the first token for a given prompt. Lower is better.*
- *Prefill is prompt-processing throughput in tokens/second, roughly the prompt length divided by TTFT (see the sanity-check sketch after this list). Higher is better*
- *Model size \- how big the model is, measured by the size of its PTE file, a binary file format for ExecuTorch*
- *RSS size \- Memory usage in resident set size (RSS)*
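As a sanity check on how these columns relate (under the stated prompt length of 64 tokens), prefill throughput is roughly the prompt length divided by TTFT, and the parenthesized factors are ratios against the BF16 baseline; small gaps versus the table come from the TTFT values being rounded:

```python
# Derived metrics for the 1B rows above; prompt length = 64 per the footnote.
prompt_len = 64
rows = {
    "1B BF16":      {"decode_tps": 19.2, "ttft_s": 1.0},
    "1B SpinQuant": {"decode_tps": 50.2, "ttft_s": 0.3},
}
base = rows["1B BF16"]
for name, r in rows.items():
    prefill_tps = prompt_len / r["ttft_s"]          # ~64 and ~213 tok/s
    speedup = r["decode_tps"] / base["decode_tps"]  # 1.0x and ~2.6x
    print(f"{name}: prefill ~ {prefill_tps:.0f} tok/s, decode speedup {speedup:.1f}x")
```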
## Responsibility & Safety
As part of our Responsible release approach, we followed a three-pronged strategy for managing trust & safety risks:
1. Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama
2. Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm
3. Provide protections for the community to help prevent the misuse of our models
### Responsible Deployment
**Approach:** Llama is a foundational technology designed to be used in a variety of use cases. Examples on how Meta’s Llama models have been responsibly deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models, enabling the world to benefit from the technology's power, by aligning our model safety for generic use cases and addressing a standard set of harms. Developers are then in the driver’s seat to tailor safety for their use cases, defining their own policies and deploying the models with the necessary safeguards in their Llama systems. Llama 3.2 was developed following the best practices outlined in our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/).
#### Llama 3.2 Instruct
**Objective:** Our main objectives for conducting safety fine-tuning are to provide the research community with a valuable resource for studying the robustness of safety fine-tuning, as well as to offer developers a readily available, safe, and powerful model for various applications to reduce the developer workload to deploy safe AI systems. We implemented the same set of safety mitigations as in Llama 3, and you can learn more about these in the Llama 3 [paper](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/).
**Fine-Tuning Data:** We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals and Tone:** Building on the work we started with Llama 3, we put a great emphasis on model refusals to benign prompts as well as refusal tone. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
#### Llama 3.2 Systems
**Safety as a System:** Large language models, including Llama 3.2, **are not designed to be deployed in isolation** but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building agentic systems. Safeguards are key to achieve the right helpfulness-safety alignment as well as mitigating safety and security risks inherent to the system and any integration of the model or system with external tools. As part of our responsible release approach, we provide the community with [safeguards](https://llama.meta.com/trust-and-safety/) that developers should deploy with Llama models or other LLMs, including Llama Guard, Prompt Guard and Code Shield. All our [reference implementations](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
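Conceptually, system-level safety wraps every model call in input and output checks. In the sketch below, `is_safe` is a hypothetical stand-in classifier, not a real API; in practice a safeguard model such as Llama Guard would play that role:

```python
def is_safe(text: str) -> bool:
    """Hypothetical placeholder for a safeguard classifier (e.g., Llama Guard)."""
    banned_phrases = ("how to build a weapon",)  # illustrative only
    return not any(p in text.lower() for p in banned_phrases)

def guarded_generate(generate, prompt: str) -> str:
    """Run input and output moderation around any text-in/text-out `generate` callable."""
    if not is_safe(prompt):
        return "I can't help with that request."
    response = generate(prompt)
    if not is_safe(response):
        return "The generated response was withheld by a safety check."
    return response

# Usage with a trivial generation function:
print(guarded_generate(lambda p: f"Echo: {p}", "Tell me a joke."))
```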
### New Capabilities and Use Cases
**Technological Advancement:** Llama releases usually introduce new capabilities that require specific considerations in addition to the best practices that generally apply across all Generative AI use cases. For prior release capabilities also supported by Llama 3.2, see [Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md), as the same considerations apply here as well.
**Constrained Environments:** Llama 3.2 1B and 3B models are expected to be deployed in highly constrained environments, such as mobile devices. LLM Systems using smaller models will have a different alignment profile and safety/helpfulness tradeoff than more complex, larger systems. Developers should ensure the safety of their system meets the requirements of their use case. We recommend using lighter system safeguards for such use cases, like Llama Guard 3-1B or its mobile-optimized version.
### Evaluations
**Scaled Evaluations:** We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Purple Llama safeguards to filter input prompts and output responses. It is important to evaluate applications in context, and we recommend building a dedicated evaluation dataset for your use case.
**Red Teaming:** We conducted recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we used the learnings to improve our benchmarks and safety tuning datasets. We partnered early with subject-matter experts in critical risk areas to understand the nature of these real-world harms and how such models may lead to unintended harm for society. Based on these conversations, we derived a set of adversarial goals for the red team to attempt to achieve, such as extracting harmful information or reprogramming the model to act in a potentially harmful capacity. The red team consisted of experts in cybersecurity, adversarial machine learning, responsible AI, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
In addition to our safety work above, we took extra care on measuring and/or mitigating the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive Weapons):** Llama 3.2 1B and 3B models are smaller and less capable derivatives of Llama 3.1. For Llama 3.1 70B and 405B, to assess risks related to proliferation of chemical and biological weapons, we performed uplift testing designed to assess whether use of Llama 3.1 models could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons and have determined that such testing also applies to the smaller 1B and 3B models.
**2\. Child Safety:** Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors including the additional languages Llama 3 is trained on. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
**3\. Cyber Attacks:** For Llama 3.1 405B, our cyber attack uplift study investigated whether LLMs can enhance human capabilities in hacking tasks, both in terms of skill level and speed.
Our attack automation study focused on evaluating the capabilities of LLMs when used as autonomous agents in cyber offensive operations, specifically in the context of ransomware attacks. This evaluation was distinct from previous studies that considered LLMs as interactive assistants. The primary objective was to assess whether these models could effectively function as independent agents in executing complex cyber-attacks without human intervention. Because Llama 3.2’s 1B and 3B models are smaller and less capable models than Llama 3.1 405B, we broadly believe that the testing conducted for the 405B model also applies to Llama 3.2 models.
### Community
**Industry Partnerships:** Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
**Grants:** We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
**Reporting:** Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
**Values:** The core values of Llama 3.2 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3.2 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
**Testing:** Llama 3.2 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3.2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3.2 models, developers should perform safety testing and tuning tailored to their specific applications of the model. Please refer to available resources including our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide), [Trust and Safety](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more about responsible development.
|
{"base_model": ["meta-llama/Llama-3.2-1B"], "language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\n“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\n“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. \nb. Redistribution and Use. \ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. \niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n#### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law\n 5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n 7. 
Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta \n2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:\n 8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled substances\n 11. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:\n 14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n 15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 16. Generating, promoting, or further distributing spam\n 17. Impersonating another individual without consent, authorization, or legal right\n 18. Representing that the use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement \n4. Fail to appropriately disclose to end users any known dangers of your AI system 5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. 
This restriction does not apply to end users of a product or service that incorporates any such multimodal models.\n\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]", "extra_gated_fields": {"First Name": "text", "Last Name": "text", "Date of birth": "date_picker", "Country": "country", "Affiliation": "text", "Job title": {"type": "select", "options": ["Student", "Research Graduate", "AI researcher", "AI developer/engineer", "Reporter", "Other"]}, "geo": "ip_location", "By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy": "checkbox"}, "extra_gated_description": "The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).", "extra_gated_button_content": "Submit"}
|
task
|
[
"SUMMARIZATION"
] | 42,193 |
RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf
|
RichardErkhov
| null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | 2024-09-14T18:33:46Z |
2024-09-15T01:32:44+00:00
| 48 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hyperion-3.0-Mistral-7B-DPO - GGUF
- Model creator: https://huggingface.co/Locutusque/
- Original model: https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hyperion-3.0-Mistral-7B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q2_K.gguf) | Q2_K | 2.53GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K.gguf) | Q3_K | 3.28GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K.gguf) | Q4_K | 4.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K.gguf) | Q5_K | 4.78GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q6_K.gguf) | Q6_K | 5.53GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
library_name: transformers
license: apache-2.0
language:
- en
widget:
- example_title: Reasoning
text: >
<|im_start|>system
You are Hyperion, an AI assistant developed by Sebastian Gabarain with a
focus in reasoning, coding, science, and other general-purpose subjects. You
are an AI assistant that follows instruction extremely well. Help as much as
you can.
<|im_end|>
<|im_start|>user
In this task, you are given a description of an object and its surroundings.
Your
task is to describe the motion of the object mentioned in the description
using
differential equations. The answer should be a mathematical expression in t.
Description:
A ball is thrown up at 30 ft/sec from 3 feet off the ground. Solve this
step-by-step,
explaining each step in detail.
<|im_end|>
<|im_start|>assistant
output:
text: >
Hyperion here. The task requires you to describe the motion of a ball
thrown up using differential equations. Here's the detailed explanation of
how to solve it:
1. Identify the given information:
- Initial velocity (v0): 30 ft/sec
- Initial height (y0): 3 feet off the ground
2. Determine the acceleration due to gravity (g) at sea level, which is
approximately -32.2 ft/s^2. Since we're dealing with a short distance and
not very high altitudes, we can consider this value of g as constant.
3. Use Newton's second law of motion to find the differential equation
that describes the motion:
- The net force acting on the ball is the sum of gravity (mg) and air resistance (which we will neglect for simplicity). So, Fnet = mg.
- Since acceleration equals force divided by mass, a = Fnet/m. In this case, m cancels out, so a = g.
- The motion equation becomes: dv/dt = g = -32.2 ft/s^2 (since we're moving upwards). This is the first differential equation.
4. Since we want to describe the ball's position (y) as well, we need
another differential equation. We can use kinematics equations:
- The velocity equation is v = v0 + at. Substitute the value of a from step 3:
dv/dt = -32.2 ft/s^2. Rewriting it as an integral:
∫dv = -32.2 ft/s^2 * dt
- Integrating with respect to time, we get v(t) = -16.1 t + C (constant of integration).
5. Now we have two equations, one for velocity and another for position:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + C (equation 2)
6. We need to find the constant C using the initial conditions given in
the problem. We know that initially, at time t=0, the ball's velocity is
v0:
- Equation 2 becomes v0 = -16.1 * 0 + C, which simplifies to C = v0 = 30 ft/sec (initial velocity).
7. Now we have both equations with the constants filled in:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
8. Finally, we need to find the ball's position (y) as a function of time:
- Use the second kinematics equation y = y0 + v0*t + ½*a*t^2 (since acceleration is constant, it can be moved outside the integral).
- Substitute the values for v0 and a:
y(t) = 3 ft + (30 ft/sec) * t + (-½ * 32.2 ft/s^2) * t^2
That's it! The differential equation describing the motion of the ball is:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
- y(t) = 3 ft + (30 ft/sec) * t - (16.1 t^2) ft (equation 3)
datasets:
- Locutusque/hyperion-dpo-v1.0
---
# Hyperion-3.0-Mistral-7B-DPO
## Model Details
- **Model Name**: Locutusque/Hyperion-3.0-Mistral-7B-DPO
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Publisher**: Locutusque
- **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning
- **Language**: Multi-domain, English language
- **License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-3.0-Mistral-7B-DPO` is an advanced language model fine-tuned with a dataset of 20,000 meticulously curated high-quality preference pairs using Direct Preference Optimization (DPO). The examples were generated by GPT-4 to ensure exceptional quality and relevance. This model is designed to provide superior performance across a wide range of complex tasks, including question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
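The card does not spell out the DPO objective itself; for reference, the standard DPO loss (Rafailov et al., 2023) scores each preference pair by comparing policy and frozen-reference log-probabilities of the chosen versus rejected response. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of a full response under the
    policy or the frozen reference model; beta sets the implicit KL penalty.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy batch of 4 preference pairs
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
```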
## Intended Use
This model is intended for researchers, developers, and organizations seeking a highly capable and reliable language model for tackling challenging problems across various domains. Potential use cases include:
- Intelligent tutoring systems and educational applications in science, medicine, mathematics, and computer science
- Advanced conversational AI for technical support, customer service, and domain-specific chatbots
- Code generation and analysis tools for software development and programming assistance
- Medical text analysis and information retrieval for healthcare professionals and researchers
- Mathematical problem-solving and logical reasoning applications for academia and industry
## Training Data
The `Locutusque/Hyperion-3.0-Mistral-7B-DPO` model was fine-tuned on a carefully curated dataset of 20,000 preference pairs, of which 4,000 examples were used for fine-tuning. These examples were generated by GPT-4 to ensure the highest quality and relevance across various domains, including programming, medical texts, mathematical problems, and reasoning tasks. The training data was further optimized using Direct Preference Optimization (DPO) to align the model's outputs with human preferences and improve overall performance.
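The exact schema of that dataset is not shown here; DPO preference data is typically stored as (prompt, chosen, rejected) triples. A hypothetical record in that shape (field names are illustrative, not the actual schema of Locutusque/hyperion-dpo-v1.0):

```python
# Hypothetical preference-pair record in the usual DPO shape.
example_pair = {
    "prompt": "Explain why the derivative of sin(x) is cos(x).",
    "chosen": "Using the limit definition, (sin(x+h) - sin(x))/h expands to ...",
    "rejected": "Because the textbook says so.",
}
```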
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF
## Evaluation Results
mmlu flan cot 5-shot
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|-------------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_formal_logic | 0|get-answer| 0|exact_match|0.2143|± |0.1138|
| - mmlu_flan_cot_fewshot_high_school_european_history | 0|get-answer| 0|exact_match|0.6667|± |0.1143|
| - mmlu_flan_cot_fewshot_high_school_us_history | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_high_school_world_history | 0|get-answer| 0|exact_match|0.5385|± |0.0997|
| - mmlu_flan_cot_fewshot_international_law | 0|get-answer| 0|exact_match|0.9231|± |0.0769|
| - mmlu_flan_cot_fewshot_jurisprudence | 0|get-answer| 0|exact_match|0.5455|± |0.1575|
| - mmlu_flan_cot_fewshot_logical_fallacies | 0|get-answer| 0|exact_match|0.7778|± |0.1008|
| - mmlu_flan_cot_fewshot_moral_disputes | 0|get-answer| 0|exact_match|0.5526|± |0.0817|
| - mmlu_flan_cot_fewshot_moral_scenarios | 0|get-answer| 0|exact_match|0.4000|± |0.0492|
| - mmlu_flan_cot_fewshot_philosophy | 0|get-answer| 0|exact_match|0.7647|± |0.0738|
| - mmlu_flan_cot_fewshot_prehistory | 0|get-answer| 0|exact_match|0.6571|± |0.0814|
| - mmlu_flan_cot_fewshot_professional_law | 0|get-answer| 0|exact_match|0.3294|± |0.0362|
| - mmlu_flan_cot_fewshot_world_religions | 0|get-answer| 0|exact_match|0.8947|± |0.0723|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_business_ethics | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_clinical_knowledge | 0|get-answer| 0|exact_match|0.5862|± |0.0931|
| - mmlu_flan_cot_fewshot_college_medicine | 0|get-answer| 0|exact_match|0.6364|± |0.1050|
| - mmlu_flan_cot_fewshot_global_facts | 0|get-answer| 0|exact_match|0.6000|± |0.1633|
| - mmlu_flan_cot_fewshot_human_aging | 0|get-answer| 0|exact_match|0.6087|± |0.1041|
| - mmlu_flan_cot_fewshot_management | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_marketing | 0|get-answer| 0|exact_match|0.8000|± |0.0816|
| - mmlu_flan_cot_fewshot_medical_genetics | 0|get-answer| 0|exact_match|1.0000|± |0.0000|
| - mmlu_flan_cot_fewshot_miscellaneous | 0|get-answer| 0|exact_match|0.8023|± |0.0432|
| - mmlu_flan_cot_fewshot_nutrition | 0|get-answer| 0|exact_match|0.6667|± |0.0833|
| - mmlu_flan_cot_fewshot_professional_accounting | 0|get-answer| 0|exact_match|0.4839|± |0.0912|
| - mmlu_flan_cot_fewshot_professional_medicine | 0|get-answer| 0|exact_match|0.5806|± |0.0901|
| - mmlu_flan_cot_fewshot_virology | 0|get-answer| 0|exact_match|0.3889|± |0.1182|
| - mmlu_flan_cot_fewshot_social_sciences |N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_econometrics | 0|get-answer| 0|exact_match|0.4167|± |0.1486|
| - mmlu_flan_cot_fewshot_high_school_geography | 0|get-answer| 0|exact_match|0.9091|± |0.0627|
| - mmlu_flan_cot_fewshot_high_school_government_and_politics| 0|get-answer| 0|exact_match|0.8095|± |0.0878|
| - mmlu_flan_cot_fewshot_high_school_macroeconomics | 0|get-answer| 0|exact_match|0.6512|± |0.0735|
| - mmlu_flan_cot_fewshot_high_school_microeconomics | 0|get-answer| 0|exact_match|0.5769|± |0.0988|
| - mmlu_flan_cot_fewshot_high_school_psychology | 0|get-answer| 0|exact_match|0.9000|± |0.0391|
| - mmlu_flan_cot_fewshot_human_sexuality | 0|get-answer| 0|exact_match|0.6667|± |0.1421|
| - mmlu_flan_cot_fewshot_professional_psychology | 0|get-answer| 0|exact_match|0.6522|± |0.0578|
| - mmlu_flan_cot_fewshot_public_relations | 0|get-answer| 0|exact_match|0.5833|± |0.1486|
| - mmlu_flan_cot_fewshot_security_studies | 0|get-answer| 0|exact_match|0.4074|± |0.0964|
| - mmlu_flan_cot_fewshot_sociology | 0|get-answer| 0|exact_match|0.8182|± |0.0842|
| - mmlu_flan_cot_fewshot_us_foreign_policy | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
| - mmlu_flan_cot_fewshot_abstract_algebra | 0|get-answer| 0|exact_match|0.0909|± |0.0909|
| - mmlu_flan_cot_fewshot_anatomy | 0|get-answer| 0|exact_match|0.4286|± |0.1373|
| - mmlu_flan_cot_fewshot_astronomy | 0|get-answer| 0|exact_match|0.5625|± |0.1281|
| - mmlu_flan_cot_fewshot_college_biology | 0|get-answer| 0|exact_match|0.5000|± |0.1291|
| - mmlu_flan_cot_fewshot_college_chemistry | 0|get-answer| 0|exact_match|0.5000|± |0.1890|
| - mmlu_flan_cot_fewshot_college_computer_science | 0|get-answer| 0|exact_match|0.2727|± |0.1408|
| - mmlu_flan_cot_fewshot_college_mathematics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_college_physics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_computer_security | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_conceptual_physics | 0|get-answer| 0|exact_match|0.6538|± |0.0951|
| - mmlu_flan_cot_fewshot_electrical_engineering | 0|get-answer| 0|exact_match|0.7500|± |0.1118|
| - mmlu_flan_cot_fewshot_elementary_mathematics | 0|get-answer| 0|exact_match|0.7317|± |0.0701|
| - mmlu_flan_cot_fewshot_high_school_biology | 0|get-answer| 0|exact_match|0.5938|± |0.0882|
| - mmlu_flan_cot_fewshot_high_school_chemistry | 0|get-answer| 0|exact_match|0.3636|± |0.1050|
| - mmlu_flan_cot_fewshot_high_school_computer_science | 0|get-answer| 0|exact_match|0.5556|± |0.1757|
| - mmlu_flan_cot_fewshot_high_school_mathematics | 0|get-answer| 0|exact_match|0.3103|± |0.0874|
| - mmlu_flan_cot_fewshot_high_school_physics | 0|get-answer| 0|exact_match|0.2353|± |0.1060|
| - mmlu_flan_cot_fewshot_high_school_statistics | 0|get-answer| 0|exact_match|0.3043|± |0.0981|
| - mmlu_flan_cot_fewshot_machine_learning | 0|get-answer| 0|exact_match|0.4545|± |0.1575|
| Groups |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_social_sciences|N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Locutusque/Hyperion-3.0-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# For a text generation task
input_text = "<|im_start|>user\nExplain the implications of quantum entanglement in layman's terms.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(input_text, return_tensors="pt")  # provides input_ids and attention_mask

# Generate a response (these are the recommended sampling settings)
outputs = model.generate(**inputs, max_length=200, do_sample=True, top_p=0.7, top_k=6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
While the training data has been carefully curated and optimized, there may still be some inconsistencies or biases present due to the inherent complexity and diversity of the source dataset. Users should be aware of potential limitations and carefully evaluate the model's outputs for their specific use case.
Additionally, this model is highly compliant and will attempt to respond to most requests. For enterprise-level deployment, it is strongly recommended to further fine-tune the model using DPO to align its behavior with specific requirements and constraints.
## Licensing Information
This model is released under the Apache-2.0 license.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Hyperion-3.0-Mistral-7B-DPO - GGUF
- Model creator: https://huggingface.co/Locutusque/
- Original model: https://huggingface.co/Locutusque/Hyperion-3.0-Mistral-7B-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Hyperion-3.0-Mistral-7B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q2_K.gguf) | Q2_K | 2.53GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K.gguf) | Q3_K | 3.28GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Hyperion-3.0-Mistral-7B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K.gguf) | Q4_K | 4.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K.gguf) | Q5_K | 4.78GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q6_K.gguf) | Q6_K | 5.53GB |
| [Hyperion-3.0-Mistral-7B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf/blob/main/Hyperion-3.0-Mistral-7B-DPO.Q8_0.gguf) | Q8_0 | 7.17GB |
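Any GGUF-aware runtime can load these files. As a hedged sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages; the chosen file, context size, and generation parameters below are illustrative, not recommendations from the quantizer):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M as an example)
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Locutusque_-_Hyperion-3.0-Mistral-7B-DPO-gguf",
    filename="Hyperion-3.0-Mistral-7B-DPO.Q4_K_M.gguf",
)

# n_ctx sets the context window; n_gpu_layers=-1 offloads all layers to GPU if available
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# The original model uses ChatML-style tags (see the widget example below)
prompt = "<|im_start|>user\nWhat is Direct Preference Optimization?<|im_end|>\n<|im_start|>assistant\n"
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```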
Original model description:
---
library_name: transformers
license: apache-2.0
language:
- en
widget:
- example_title: Reasoning
text: >
<|im_start|>system
You are Hyperion, an AI assistant developed by Sebastian Gabarain with a
focus in reasoning, coding, science, and other general-purpose subjects. You
are an AI assistant that follows instruction extremely well. Help as much as
you can.
<|im_end|>
<|im_start|>user
In this task, you are given a description of an object and its surroundings.
Your
task is to describe the motion of the object mentioned in the description
using
differential equations. The answer should be a mathematical expression in t.
Description:
A ball is thrown up at 30 ft/sec from 3 feet off the ground. Solve this
step-by-step,
explaining each step in detail.
<|im_end|>
<|im_start|>assistant
output:
text: >
Hyperion here. The task requires you to describe the motion of a ball
thrown up using differential equations. Here's the detailed explanation of
how to solve it:
1. Identify the given information:
- Initial velocity (v0): 30 ft/sec
- Initial height (y0): 3 feet off the ground
2. Determine the acceleration due to gravity (g) at sea level, which is
approximately -32.2 ft/s^2. Since we're dealing with a short distance and
not very high altitudes, we can consider this value of g as constant.
3. Use Newton's second law of motion to find the differential equation
that describes the motion:
- The net force acting on the ball is the sum of gravity (mg) and air resistance (which we will neglect for simplicity). So, Fnet = mg.
- Since acceleration equals force divided by mass, a = Fnet/m. In this case, m cancels out, so a = g.
- The motion equation becomes: dv/dt = g = -32.2 ft/s^2 (since we're moving upwards). This is the first differential equation.
4. Since we want to describe the ball's position (y) as well, we need
another differential equation. We can use kinematics equations:
- The velocity equation is v = v0 + at. Substitute the value of a from step 3:
dv/dt = -32.2 ft/s^2. Rewriting it as an integral:
∫dv = -32.2 ft/s^2 * dt
- Integrating with respect to time, we get v(t) = -16.1 t + C (constant of integration).
5. Now we have two equations, one for velocity and another for position:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + C (equation 2)
6. We need to find the constant C using the initial conditions given in
the problem. We know that initially, at time t=0, the ball's velocity is
v0:
- Equation 2 becomes v0 = -16.1 * 0 + C, which simplifies to C = v0 = 30 ft/sec (initial velocity).
7. Now we have both equations with the constants filled in:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
8. Finally, we need to find the ball's position (y) as a function of time:
- Use the second kinematics equation y = y0 + v0*t + ½*a*t^2 (since acceleration is constant, it can be moved outside the integral).
- Substitute the values for v0 and a:
y(t) = 3 ft + (30 ft/sec) * t + (-½ * 32.2 ft/s^2) * t^2
That's it! The differential equation describing the motion of the ball is:
- dv/dt = -32.2 ft/s^2 (equation 1)
- v(t) = -16.1 t + 30 ft/sec (equation 2)
- y(t) = 3 ft + (30 ft/sec) * t - (16.1 t^2) ft (equation 3)
datasets:
- Locutusque/hyperion-dpo-v1.0
---
# Hyperion-3.0-Mistral-7B-DPO
## Model Details
- **Model Name**: Locutusque/Hyperion-3.0-Mistral-7B-DPO
- **Base Model**: mistralai/Mistral-7B-v0.1
- **Publisher**: Locutusque
- **Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning
- **Language**: English (multi-domain)
- **License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-3.0-Mistral-7B-DPO` is an advanced language model fine-tuned with a dataset of 20,000 meticulously curated high-quality preference pairs using Direct Preference Optimization (DPO). The examples were generated by GPT-4 to ensure exceptional quality and relevance. This model is designed to provide superior performance across a wide range of complex tasks, including question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers, developers, and organizations seeking a highly capable and reliable language model for tackling challenging problems across various domains. Potential use cases include:
- Intelligent tutoring systems and educational applications in science, medicine, mathematics, and computer science
- Advanced conversational AI for technical support, customer service, and domain-specific chatbots
- Code generation and analysis tools for software development and programming assistance
- Medical text analysis and information retrieval for healthcare professionals and researchers
- Mathematical problem-solving and logical reasoning applications for academia and industry
## Training Data
The `Locutusque/Hyperion-3.0-Mistral-7B-DPO` model was fine-tuned on a carefully curated dataset of 20,000 preference pairs, of which 4,000 examples were used for fine-tuning. These examples were generated by GPT-4 to ensure the highest quality and relevance across various domains, including programming, medical texts, mathematical problems, and reasoning tasks. The training data was further optimized using Direct Preference Optimization (DPO) to align the model's outputs with human preferences and improve overall performance.
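For readers who want to reproduce a comparable preference-tuning step, the sketch below shows the general shape of a DPO run. It assumes the `trl` library's `DPOTrainer` (argument names vary between `trl` versions), an assumed `prompt`/`chosen`/`rejected` schema for the dataset, and hypothetical hyperparameters; it illustrates the technique rather than the author's actual training script.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference dataset listed in the card metadata; "prompt"/"chosen"/"rejected"
# columns are the standard DPO schema and assumed here
dataset = load_dataset("Locutusque/hyperion-dpo-v1.0", split="train")

args = DPOConfig(
    output_dir="hyperion-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,  # hypothetical; the card does not state DPO hyperparameters
    beta=0.1,            # strength of the KL penalty toward the reference model
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl versions take tokenizer= instead
)
trainer.train()
```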
## Quants
ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-exl2
GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF
## Evaluation Results
mmlu flan cot 5-shot
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|-------------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_formal_logic | 0|get-answer| 0|exact_match|0.2143|± |0.1138|
| - mmlu_flan_cot_fewshot_high_school_european_history | 0|get-answer| 0|exact_match|0.6667|± |0.1143|
| - mmlu_flan_cot_fewshot_high_school_us_history | 0|get-answer| 0|exact_match|0.7727|± |0.0914|
| - mmlu_flan_cot_fewshot_high_school_world_history | 0|get-answer| 0|exact_match|0.5385|± |0.0997|
| - mmlu_flan_cot_fewshot_international_law | 0|get-answer| 0|exact_match|0.9231|± |0.0769|
| - mmlu_flan_cot_fewshot_jurisprudence | 0|get-answer| 0|exact_match|0.5455|± |0.1575|
| - mmlu_flan_cot_fewshot_logical_fallacies | 0|get-answer| 0|exact_match|0.7778|± |0.1008|
| - mmlu_flan_cot_fewshot_moral_disputes | 0|get-answer| 0|exact_match|0.5526|± |0.0817|
| - mmlu_flan_cot_fewshot_moral_scenarios | 0|get-answer| 0|exact_match|0.4000|± |0.0492|
| - mmlu_flan_cot_fewshot_philosophy | 0|get-answer| 0|exact_match|0.7647|± |0.0738|
| - mmlu_flan_cot_fewshot_prehistory | 0|get-answer| 0|exact_match|0.6571|± |0.0814|
| - mmlu_flan_cot_fewshot_professional_law | 0|get-answer| 0|exact_match|0.3294|± |0.0362|
| - mmlu_flan_cot_fewshot_world_religions | 0|get-answer| 0|exact_match|0.8947|± |0.0723|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_business_ethics | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_clinical_knowledge | 0|get-answer| 0|exact_match|0.5862|± |0.0931|
| - mmlu_flan_cot_fewshot_college_medicine | 0|get-answer| 0|exact_match|0.6364|± |0.1050|
| - mmlu_flan_cot_fewshot_global_facts | 0|get-answer| 0|exact_match|0.6000|± |0.1633|
| - mmlu_flan_cot_fewshot_human_aging | 0|get-answer| 0|exact_match|0.6087|± |0.1041|
| - mmlu_flan_cot_fewshot_management | 0|get-answer| 0|exact_match|0.9091|± |0.0909|
| - mmlu_flan_cot_fewshot_marketing | 0|get-answer| 0|exact_match|0.8000|± |0.0816|
| - mmlu_flan_cot_fewshot_medical_genetics | 0|get-answer| 0|exact_match|1.0000|± |0.0000|
| - mmlu_flan_cot_fewshot_miscellaneous | 0|get-answer| 0|exact_match|0.8023|± |0.0432|
| - mmlu_flan_cot_fewshot_nutrition | 0|get-answer| 0|exact_match|0.6667|± |0.0833|
| - mmlu_flan_cot_fewshot_professional_accounting | 0|get-answer| 0|exact_match|0.4839|± |0.0912|
| - mmlu_flan_cot_fewshot_professional_medicine | 0|get-answer| 0|exact_match|0.5806|± |0.0901|
| - mmlu_flan_cot_fewshot_virology | 0|get-answer| 0|exact_match|0.3889|± |0.1182|
| - mmlu_flan_cot_fewshot_social_sciences |N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_econometrics | 0|get-answer| 0|exact_match|0.4167|± |0.1486|
| - mmlu_flan_cot_fewshot_high_school_geography | 0|get-answer| 0|exact_match|0.9091|± |0.0627|
| - mmlu_flan_cot_fewshot_high_school_government_and_politics| 0|get-answer| 0|exact_match|0.8095|± |0.0878|
| - mmlu_flan_cot_fewshot_high_school_macroeconomics | 0|get-answer| 0|exact_match|0.6512|± |0.0735|
| - mmlu_flan_cot_fewshot_high_school_microeconomics | 0|get-answer| 0|exact_match|0.5769|± |0.0988|
| - mmlu_flan_cot_fewshot_high_school_psychology | 0|get-answer| 0|exact_match|0.9000|± |0.0391|
| - mmlu_flan_cot_fewshot_human_sexuality | 0|get-answer| 0|exact_match|0.6667|± |0.1421|
| - mmlu_flan_cot_fewshot_professional_psychology | 0|get-answer| 0|exact_match|0.6522|± |0.0578|
| - mmlu_flan_cot_fewshot_public_relations | 0|get-answer| 0|exact_match|0.5833|± |0.1486|
| - mmlu_flan_cot_fewshot_security_studies | 0|get-answer| 0|exact_match|0.4074|± |0.0964|
| - mmlu_flan_cot_fewshot_sociology | 0|get-answer| 0|exact_match|0.8182|± |0.0842|
| - mmlu_flan_cot_fewshot_us_foreign_policy | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
| - mmlu_flan_cot_fewshot_abstract_algebra | 0|get-answer| 0|exact_match|0.0909|± |0.0909|
| - mmlu_flan_cot_fewshot_anatomy | 0|get-answer| 0|exact_match|0.4286|± |0.1373|
| - mmlu_flan_cot_fewshot_astronomy | 0|get-answer| 0|exact_match|0.5625|± |0.1281|
| - mmlu_flan_cot_fewshot_college_biology | 0|get-answer| 0|exact_match|0.5000|± |0.1291|
| - mmlu_flan_cot_fewshot_college_chemistry | 0|get-answer| 0|exact_match|0.5000|± |0.1890|
| - mmlu_flan_cot_fewshot_college_computer_science | 0|get-answer| 0|exact_match|0.2727|± |0.1408|
| - mmlu_flan_cot_fewshot_college_mathematics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_college_physics | 0|get-answer| 0|exact_match|0.3636|± |0.1521|
| - mmlu_flan_cot_fewshot_computer_security | 0|get-answer| 0|exact_match|0.7273|± |0.1408|
| - mmlu_flan_cot_fewshot_conceptual_physics | 0|get-answer| 0|exact_match|0.6538|± |0.0951|
| - mmlu_flan_cot_fewshot_electrical_engineering | 0|get-answer| 0|exact_match|0.7500|± |0.1118|
| - mmlu_flan_cot_fewshot_elementary_mathematics | 0|get-answer| 0|exact_match|0.7317|± |0.0701|
| - mmlu_flan_cot_fewshot_high_school_biology | 0|get-answer| 0|exact_match|0.5938|± |0.0882|
| - mmlu_flan_cot_fewshot_high_school_chemistry | 0|get-answer| 0|exact_match|0.3636|± |0.1050|
| - mmlu_flan_cot_fewshot_high_school_computer_science | 0|get-answer| 0|exact_match|0.5556|± |0.1757|
| - mmlu_flan_cot_fewshot_high_school_mathematics | 0|get-answer| 0|exact_match|0.3103|± |0.0874|
| - mmlu_flan_cot_fewshot_high_school_physics | 0|get-answer| 0|exact_match|0.2353|± |0.1060|
| - mmlu_flan_cot_fewshot_high_school_statistics | 0|get-answer| 0|exact_match|0.3043|± |0.0981|
| - mmlu_flan_cot_fewshot_machine_learning | 0|get-answer| 0|exact_match|0.4545|± |0.1575|
| Groups |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|mmlu_flan_cot_fewshot |N/A |get-answer| 0|exact_match|0.5833|± |0.0118|
| - mmlu_flan_cot_fewshot_humanities |N/A |get-answer| 0|exact_match|0.5039|± |0.0205|
| - mmlu_flan_cot_fewshot_other |N/A |get-answer| 0|exact_match|0.6833|± |0.0244|
| - mmlu_flan_cot_fewshot_social_sciences|N/A |get-answer| 0|exact_match|0.7003|± |0.0239|
| - mmlu_flan_cot_fewshot_stem |N/A |get-answer| 0|exact_match|0.4866|± |0.0262|
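These tables are in the output format of EleutherAI's lm-evaluation-harness. A hedged way to reproduce a run of the same task group (assuming the harness's `lm_eval` CLI; flag names can differ between harness versions):
```bash
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=Locutusque/Hyperion-3.0-Mistral-7B-DPO \
    --tasks mmlu_flan_cot_fewshot \
    --batch_size 8
```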
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-3.0-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nExplain the implications of quantum entanglement in layman's terms.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.7, top_k=6) # These are the recommended sampling settings.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
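Since the model follows ChatML-style tags, the prompt can also be built from the tokenizer's chat template instead of hand-writing the tags (a sketch, assuming the repository ships a ChatML chat template):
```python
messages = [
    {"role": "user", "content": "Explain the implications of quantum entanglement in layman's terms."},
]
# Renders the same <|im_start|>/<|im_end|> structure as the manual prompt above
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.7, top_k=6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```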
## Known Limitations
While the training data has been carefully curated and optimized, there may still be some inconsistencies or biases present due to the inherent complexity and diversity of the source dataset. Users should be aware of potential limitations and carefully evaluate the model's outputs for their specific use case.
Additionally, this model is highly compliant and will attempt to respond to most requests. For enterprise-level deployment, it is strongly recommended to further fine-tune the model using DPO to align its behavior with specific requirements and constraints.
## Licensing Information
This model is released under the Apache-2.0 license.
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,194 |
seongil-dn/bge-m3-kor-retrieval-bs16-checkpoint-1981
|
seongil-dn
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:482743",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-04T23:31:15Z |
2024-12-04T23:32:40+00:00
| 5 | 0 |
---
base_model: BAAI/bge-m3
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:482743
- loss:CachedGISTEmbedLoss
widget:
- source_sentence: 언제 청년층 생산가능인구 수가 9,149,000명으로 드러났어
sentences:
- '제2장 청년층 노동시장 동향 및 청년내일채움공제의 성과 개괄
□ 본 장에서는 청년내일채움공제 사업의 도입 배경으로 작용한 청년층 노동시장의 현황을 살펴보고, 세부 성과에 앞서 청년내일채움공제가 현 청년고용
실태에 미친 개괄적 성과를 평가하고자 한다
제1절 청년층 노동시장 동향
1. 청년층 인구구조 변화 추이
□ 고용률과 실업률, 취업자 수 등의 고용지표는 경제 및 노동시장의 상황 뿐 만 아니라, 인구구조 변화에 의해 영향을 받는 만큼, 청년층의
노동시장 동향을 파악하기 위해서는 청년층 인구구조의 변화양상을 이해할 필요가 있음 □ 청년층 인구구조 변화 추이
○ 전체 생산가능인구가 여전히 증가추세에 있는 것에 비해, 청년층(15∼29세) 생산가능인구는 1990년을 기점으로 감소추세에 있음. 2018년
청년층 생산가능인구는 전년대비 133천명 감소한 9,149천명임 - 15∼19세, 20∼24세 연령대의 생산가능인구는 감소추세에 있으나, 25∼29세
생산가능인구(3,432천명, 전년대비 96천명 증가)는 증가추세에 있으며 향후 2∼3년간 증가추세가 지속될 것으로 전망됨
○ 청년층 경제활동인구는 2000년대 지속적으로 감소하다가 2013∼2016년 기간 증가추세를 보였으나, 2017년 이후 감소하고 있음. 2018년
청년층 경제활동인구는 전년대비 21천명 감소한 4,312천명을 기록하였음'
- 2. 인구구조변화의 현황<br>UN 인구국 자료를 분석해보면 2010년 인구와 2050년의 중간 수준 예측치를 기준으로 인구 1,000만 명
이상의 국가이면서 2050년까지 생산가능인구(15세~64세)가 감소하는 국가는 198개국중에서 19개국이다. 이 중에서 우리나라는 생산 가능인구(15세~64세)
감소율이 세계 6위 수준을 기록할 것으로 분석되었고, 이에 따라 생산가능 인구가 2010년 대비 2050년에는 27%가 줄어들 것으로 예상된다.
이에 반하여 노인인구는 236%가 증가할 것으로 예상된다. 우리나라는 생산가능인구 감소율과 노인인구(65세 이상) 증가율이 매우 높은 나라
중 하나이며, 동시에 유소년(0~14세) 인구의 감소도 22% 정도 될 것으로 예상하고 있다. <br>UN의 인구국(局) 자료인 [표 1]을
보면, 첫째 현재 우리나라를 포함하여 선진국에서조차도 현재 인구감소를 겪어본 국가가 거의 없어 이것이 가져다 줄 영향에 대한 경험적 지식이
거의 전무하다는 점이다. <br>둘째, 우리나라의 경우에는 무엇보다도 인구감소기로의 전환이 급격하게 이루어지고 있다. 따라서 우리 스스로 이의
긍정적ㆍ부정적 영향을 연역적으로 추론해볼 필요가 있다. 이를 위해서는 우리 사회의 전체 변화를 거시적으로 바라보고 정책 수단에 따라 가져다
줄 다양한 변화를 살펴볼 수 있는 시뮬레이션(simulation) 도구 및 정책 개발 그리고 집행 체제가 필요하다고 할 수 있다.
- '7. 해외사례
플랫폼 노동관계에 특화된 산재보험제도를 별도로 규정하고 있는 포괄적 입법례는 아직 보이지 않는다. 다만, 플랫폼 노동에 관한 미국의 앨라배마
주법(SB363, 2018.3.13. 제정)에서는 플랫폼 노동자들에게 일정한 수준의 복지를 허용하기 위한 법적 기초를 만들고 있다(부표 1
참조). SB363은 플랫폼 노동자들의 이동성을 고려한 ‘이동 가능 복지제도’(portable benefit plans)를 플랫폼 사업주들이
만들면, 이 복지제도를 통해 다수의 플랫폼과 계약을 맺는 플랫폼 노동자들이 복수 사업주 혹은 사업주의 변경에도 불구하고 복지제도를 유지하면서
그로부터 복지수혜를 얻을 수 있도록 하고 있다. 이때의 복지제도는 건강보험, 재해보험, 퇴직수당제도 등으로 구성될 수 있고, 플랫폼 노동자가
복지계좌를 개설하면 여기에 플랫폼 사업주가 복지기여금을 납부하는 방식을 취하고 있다. 국가에 의한 사회보험제도가 아니고, 플랫폼 사업주에 의한
자발적 복지제도의 기초를 구성한 것이기는 하지만, 지방정부가 이를 관리하도록 함으로써 공정성과 객관성을 담보한다. 한국의 산재보험제도에 대입시켜
비교할 수 있는 입법례가 아니기는 하지만, 플랫폼 사업주로 하여금 플랫폼 노동자에 대한 일정한 복지제도 형성을 촉구하고 있다는 점, 해당 복지제도에
대하여 플랫폼이 일정한 기여금을 납부하도록 하고 있다는 점 등에서 하나의 제도적 모델로 시사가능하다. 한편, 미국의 플로리다, 인디아나, 아이오와,
켄터키, 테네시, 텍사스, 유타주 등에서는 플랫폼노동자(경우에 따라서는 가사노동자에 국한)의 의미를 규정하면서, 주법상 재해보험제도 및(또는)
실업급여제도의 가입자가 될 수 있도록 하고 있다.'
- source_sentence: 도매 전력 매매를 주로 한 전력거래체계를 만들었을 때 어느 부문이 안정성 영향을 받을 수 있어
sentences:
- '(4) 외환시장
원/달러 환율 2016년 말 대비 하락
원/달러 환율은 연초 미 보호무역주의에 대한 우려, 수출 호조, 외국인 증권투자자금 유입 등으로 3월 말까지 상당 폭 하락하였다. 이후 북한
관련 리스크 전개 상황, 미 연준의 연내 금리인상 기대 변화 등에 따라 등락하는 모습을 보였다. 10월 들어 북한 리스크 완화, 미 달러화
약세 지속, 국내경제의 견실한 성장세 등으로 하락세로 전환하였으며, 2017년 말에는 연중 최저치인 1,070.5원을 기록하였다. 2017년
중 미 달러화에 대한 원화의 가치는 2016년 말 대비 12.8% 상승하였다. 한편 원/엔 환율(100엔당)은 미 연준 금리인상 기대변화,
북한 리스크 등에 영향을 받으며 등락을 지속하다가 10월 들어 원화가 상대적으로 강세를 보이면서 하락하였다. 2017년 말 원화의 엔화에 대한
가치는 100엔당 949.2원으로 2016년 말 대비 9.1% 상승하였다. 원/달러 환율의 일중 및 전일대비 변동폭은 연평균 각각 5.5원,
4.4원을 기록하여 2016년(각각 7.5원 및 6.0원)에 비해 축소되었다. 외환스왑레이트 하락, 통화스왑금리 상승
외환스왑레이트(3개월물)는 외화자금수요 확대 등으로 2016년 말 대비 0.61% 포인트 하락한 -0.63%를 기록하였다. 통화스왑금리(3년물)는
경제지표 호조 등에 따른 국고채 금리 상승 등으로 2016년 말 대비 0.37% 포인트 상승한 1.57%를 기록하였다.'
- '4. 국내 전력거래 형태 전망
앞서 살펴보았듯이, 구조개편을 단행한 외국의 전력거래 형태는 대체로 도매 전력거래 위주, 소매 전력거래 위주, 계약위주 등 세 가지로 구분되어
있다. 국내 전력산업의 현재 여건을 고려할 경우 가장 가능성이 높은 전력거래 형태는 도매 전력거래 위주의 거래형태일 것이다. 만약 도매 전력거래
위주의 전력거래체계를 구축하는 경우 가격과 공급의 안정성에 상당한 영향을 미칠 것으로 보인다. 하지만 규제계약(Vesting Contract)을
포함하여 장단기 계약물량을 확대함으로써 현물시장의 가격 및 공급 변동성을 완화할 수 있는 대안이 존재한다. 만약 전력시장에서의 가격과 공급
안정성 측면을 염두에 둔다면 소매 전력거래 형태 또는 장단기 계약 위주의 전력거래 형태를 고려해 볼 수 있다. 소매 전력거래 형태에서는 주로
기존의 수직통합체제 또는 이미 발전과 판매가 송배전 부문으로부터 분리된 경우라면 발전과 판매의 통합과 이로 인한 규모의 확대에 따라 유효경쟁의
약화 가능성이 문제가 될 수 있다. 이렇게 통합된 기업의 소매시장에서의 시장지배력 문제가 존재하기 때문에 순수 판매회사 또는 신규기업과의 실질적인
경쟁이 가능한 방향으로 제도적 장치를 마련할 필요가 있다.'
- 'Ⅲ. 우리나라 전력시장에서의 유연성 자원
1. 우리나라 전력시장의 현황
우리나라의 전력시장은 전력산업의 효율성 향상을 위하여 2001년 전력산업구조개편을 통해 독점사업자였던 한국전력공사로부터 발전부문을 분리시켜
발전자회사로 재편시키고, 발전부문에 대한 경쟁체제를 도입하면서 출범하였다. 우리나라 전력시장은 소비자가 직접 거래에 참여하는 소매시장이 아닌
발전회사와 판매회사가 참여하는 도매시장으로, 다수의 발전회사가 존재하는 공급측과는 달리 수요측은 단일 구매자인 한국전력공사만 존재하는 구매독점
형태로 이루어져 있다. 또한 설비용량이 20MW를 초과하는 상업용 발전기는 전력시장을 통해서만 거래가 가능한 의무적 시장이다.'
- source_sentence: 해외 파생상품시장에서 거래되는 경우는 어떤 파생상품으로 분류할 수 있어
sentences:
- '3) 연금 세제
▩ 근퇴법 상 연금수령을 활성화하기 위한 강제조항이 부재한 상황에서 세제체계(소득세법)에 따라 수령방법이 결정되는 구조임. ○ 제도적으로 일시금
수령을 방지하거나 연금수령을 강제하는 규정이 부재한 상황에서 수령 방법은 주로 세금부담에 따라 결정될 수밖에 없음. ○ 일시금 수령을 경감시키기
위해 2016년 소득세법을 개정하였으나 개정된 소득세법 조차도 일시금과 연금수령 간 세금부담 차이가 크지 않아 일시금 수령을 방지하지 못하는
한계가 존재함. ▩ 연금과 일시금 수령 간 세제차이가 크지 않은 주요 원인은 일시금 수령시 제공하는 공제혜택이 크기 때문임. ○ 국내 퇴직연금시장에서
일시금 수령이 만연한 것은 연금수령 시 부담하는 세율이 높아서가 아니라 일시금 수령 시 부담하는 세율이 낮기 때문임. ○ 일시금 수령의 세금부담이
높지 않은 이유는 일시금 수령 시에도 다수의 공제혜택(근속연수공제+환산급여공제)이 제공되고 혜택의 규모도 크기 때문임. ○ 즉 연금수령 시에는
일시금 수령 시 부담하는 세금의 70%만을 부담하도록 규정하고(소득세법 제129조(원천징수세율)제1항제5의336)) 있으나, 일시금 수령 시
세금부담이 작기 때문에 연금수령 시 추가 제공되는 세제혜택의 크기가 미미하게 됨. ○ 특히 연금수령 기간에 따른 세금차이가 없어 연금으로 수령하더라도
단기간 연금(예, 10년)을 수령하는 유인을 제공함.'
- □ 장외파생 권역별 대표 상품을 살펴보면 금리관련상품 중에는 금리스왑(Interest Rate Swap:IRS)이, 통화관련 상품 중에서는
통화선도계약이, 신용관련거래에서는 신용파산스왑(Credit Default Swap:CDS)이 가장 높은 비중을 차지<br>○ 금리스왑은 금리관련
장외파생상품시장의 91.2%를 차지하고 있으며, 신용관련상품의 경우 신용파산스왑과 총수익스왑(Total Return Swap:TRS)만이 거래<br>○
통화선도는 대부분 NDF(Non-Deliverable Forward)의 형태로 거래<br>○ 장외파생상품 거래는 일대일 계약을 기반으로 하고
거래당사자의 높은 신용도를 요구하기 때문에 신용도가 낮은 개인 또는 금융기관은 참가하기가 어려운 실정<br>○ 특히, 신용관련거래는 다른 금융기관에
비해 상대적으로 높은 신용도를 가지고 있는 은행과 증권사를 통해서만 거래가 이루어지고 있는 것으로 파악
- '제5조 (파생상품) ① 이 법에서 ˝파생상품˝이란 다음 각 호의 어느 하나에 해당하는 계약상의 권리를 말한다.
1. 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된 금전등을 장래의 특정 시점에 인도할 것을
약정하는 계약
2. 당사자 어느 한쪽의 의사표시에 의하여 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된 금전등을
수수하는 거래를 성립시킬 수 있는 권리를 부여하는 것을 약정하는 계약
3. 장래의 일정기간 동안 미리 정한 가격으로 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된
금전등을 교환할 것을 약정하는 계약
② 이 법에서 ˝장내파생상품˝이란 파생상품으로서 파생상품시장에서 거래되는 것 또는 해외 파생상품시장(파생상품시장과 유사한 시장으로서 해외에
있는 시장과 대통령령으로 정하는 해외 파생상품거래가 이루어지는 시장을 말한다)에서 거래되는 것을 말한다. ③ 이 법에서 ˝장외파생상품˝이란
파생상품으로서 장내파생상품이 아닌 것을 말한다. ④ 제1항 각 호의 어느 하나에 해당하는 계약 중 매매계약이 아닌 계약의 체결은 이 법을 적용함에
있어서 매매계약의 체결로 본다.'
- source_sentence: 어디서 자금세탁 전제범죄에서 비영리단체 관련 자금세탁범죄를 찾아내는 방식을 선택했어
sentences:
- 'IV. 비영리단체 분야의 범죄유형 및 사례
1. 국내사례와 범죄유형
□ 한국은 아직 비영리단체에 대한 명확한 개념 구분이 정립되어 있지 않은 실정으로 이에 따라 그동안 한국에서는 비영리단체 관련범죄에 대한 인식이
제대로 이루어지지 않음. -비영리단체 관련 범죄에 있어서도 비영리단체에 대한 명확한 개념 구분 없이 판결문이 작성되어 비영리단체 관련 자금세탁범죄를
조사하는데 있어서 큰 장애로 작용함. -국내 판례에서 비영리단체와 관련하여 자금세탁이라는 용어를 직접 사용한 경우는 없으며 이에 따라 부득이
자금세탁 전제범죄를 통하여 비영리단체 관련 자금세탁범죄를 조사하는 방법을 택함.'
- 'VI. 비영리단체를 통한 테러자금조달방지를 위한 제도개선방안
□ FATF는 2001년 10월 테러자금조달에 대한 특별권고 8항(Special Recommendationson Terrorist Financing)을
통하여 비영리단체에 대한 관리감독을 강화하고 관련 법규를 정비할 것을 권고하였음. -우리나라에서 비영리단체를 통한 자금세탁을 이용해서 테러자금이
조달될 가능성은 매우 낮은 것으로 평가되나 자금세탁의 우회로를 차단하고, 또한 예방적인 조치를 취해 국제적으로 자금세탁을 통한 불법적인 테러자금조달
방지 노력에 적극 부응하기 위해서는 비영리단체에 대한 자금세탁 방지제도 도입에 적극 참여해야 할 것임.'
- '(2) 전북의 풍력산업 약점
전북의 풍력산업은 내부환경관점에서 다음과 같은 약점을 보유하고 있다. 첫째, 두산(창원), 유니슨(사천), 효성(창원), 한진(양산) 등 풍력터빈시스템업체는
모두 경남권에 위치하고 있으며, 현재 경남 이외 지역에서는 MW급 이상 풍력용 터빈시스템업체가 존재하지 않는다. 다만 최근 터빈의 대형화에
따라, 터빈의 무게 및 부피 등에 따른 운송에 한계가 존재하고 상당기간 고정식 형태의 해상풍력발전단지 구축이 중심이 될 것으로 판단됨에 따라,
부유식 해상풍력단지개발이 시작되기 전까지는 GW급 해상풍력단지 개발계획에 따라 서해안권으로 시스템업체의 이전가능성은 일부 존재한다. 둘째,
해상풍력 관련 연구/생산인력 부족 역시 장기적 해상풍력기술의 지속성 관점에서 시급한 문제이다. 현재 전북은 해상풍력터빈과 관련된 기술은 없으나,
휴먼컴퍼지트와 재료연구원 풍력핵심기술연구센터를 통해 해상풍력 블레이드 및 재료 등에 대한 기술이 축적되어 있다. 이외에 하부구조물 및 타워
관련 기술 역시 에드벡트가 석션버켓(suction bucket)방식의 하부구조물 관련 기술을 보유하고 있고 현재 군산공장을 리모델링 중인 CS윈드는
세계 최고수준의 타워제작기술을 확보하고 있다. 따라서 전북은 블레이드, 재료, 타워 및 하부구조물 등과 관련된 기술을 확보하고 있다.'
- source_sentence: 전남지역의 석유와 화학제품은 왜 수출이 늘어나는 경향을 보였어
sentences:
- '수출 증가세 지속
1/4분기 중 수출은 전년동기대비 증가흐름을 지속하였다. 품목별로 보면 석유제품, 석유화학, 철강, 선박, 반도체, 자동차 등 대다수 품목에서
증가하였다. 석유제품은 글로벌 경기회복에 따른 에너지 수요 증가와 국제유가 급등으로 수출단가가 높은 상승세를 지속하면서 증가하였다. 석유화학도
중국, 아세안을 중심으로 합성수지, 고무 등의 수출이 큰 폭 증가한 데다 고유가로 인한 수출가격도 동반 상승하면서 증가세를 이어갔다. 철강은
건설, 조선 등 글로벌 전방산업의 수요 증대, 원자재가격 상승 및 중국 감산 등에 따른 수출단가 상승 등에 힘입어 증가세를 이어갔다. 선박은
1/4분기 중 인도물량이 확대됨에 따라 증가하였다. 반도체는 자동차 등 전방산업의 견조한 수요가 이어지는 가운데 전년동기대비로 높은 단가가
지속되면서 증가하였다. 자동차는 차량용 반도체 수급차질이 지속되었음에도 불구하고 글로벌 경기회복 흐름에 따라 수요가 늘어나면서 전년동기대비
소폭 증가하였다. 모니터링 결과 향후 수출은 증가세가 지속될 것으로 전망되었다. 석유화학 및 석유정제는 수출단가 상승과 전방산업의 수요확대
기조가 이어지면서 증가할 전망이다. 철강은 주요국 경기회복과 중국, 인도 등의 인프라 투자 확대 등으로 양호한 흐름을 이어갈 전망이다. 반도체는
글로벌 스마트폰 수요 회복, 디지털 전환 기조 등으로 견조한 증가세를 지속할 것으로 보인다. 자동차는 차량용 반도체 공급차질이 점차 완화되고
미국, 신흥시장을 중심으로 수요회복이 본격화됨에 따라 소폭 증가할 전망이다. 선박은 친환경 선박수요 지속, 글로별 교역 신장 등에도 불구하고
2021년 2/4분기 집중되었던 인도물량의 기저효과로 인해 감소할 것으로 보인다.'
- '(3) 금융기관 여수신
은행 수신 큰 폭 확대
은행 수신은 2019년에 비해 증가폭이 크게 확대되었다. 수시입출식예금은 불확실성 증대에 따른 가계 및 기업의 예비자금 확보 등의 영향으로
증가 규모가 전년대비 3배가량 확대되었다. 반면 정기예금은 예금금리 하락, 예대율 및 LCR 규제 완화에 따른 은행의 정기예금 유치 유인 축소
등에 기인하여 감소로 전환하였다. 자산운용사 수신은 증가폭이 축소되었다. MMF는 꾸준한 증가세를 유지하였으나 주식형 및 채권형 펀드는 개인투자자의
주식 직접투자증가, 신용증권에 대한 시장 경계감 확대 등으로 감소로 전환하였다. 또한 기타펀드는 2019년 중 일부 사모펀드 손실 및 환매중단
사태, 사모펀드 일반투자자 요건 강화 등으로 증가 규모가 절반 수준으로 축소되었다. 한편 신용협동기구 등 비은행예금취급기관 수신은 대체로 2019년과
비슷한 증가 규모를 나타내었다.'
- '(2) 전남지역
2013년중 전남지역 수출은 전년대비 1.2% 감소로 전환하였다. 품목별로는 석유(+9.3% → +3.8%) 및 화학제품(+1.2% → +7.1%)이
중국 등 해외수요확대로 증가세를 지속하였으나 철강금속(+1.8% → -8.6%)은 글로벌 공급과잉 및 중국의 저가 철강수출 확대로, 선박(+7.6%
→ -49.2%)은 수주물량이 급격히 줄어들면서 감소로 전환하였다. 전남지역 수입은 원유, 화학제품, 철강금속 등의 수입이 줄면서 전년대비
7.4% 감소로 전환하였다.'
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
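The module stack above means an embedding is the CLS token's hidden state followed by L2 normalization. As a sanity check, a hedged equivalent with plain `transformers` (assuming `torch` is installed; the sample sentence is illustrative):
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "seongil-dn/bge-m3-kor-retrieval-bs16-checkpoint-1981"
tok = AutoTokenizer.from_pretrained(repo)
enc = AutoModel.from_pretrained(repo)

batch = tok(["예시 문장입니다."], padding=True, truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    out = enc(**batch)

# CLS pooling (first token), then L2 normalization, matching modules (1) and (2) above
emb = F.normalize(out.last_hidden_state[:, 0], p=2, dim=1)
print(emb.shape)  # torch.Size([1, 1024])
```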
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-bs16-checkpoint-1981")
# Run inference
sentences = [
'전남지역의 석유와 화학제품은 왜 수출이 늘어나는 경향을 보였어',
'(2) 전남지역\n2013년중 전남지역 수출은 전년대비 1.2% 감소로 전환하였다. 품목별로는 석유(+9.3% → +3.8%) 및 화학제품(+1.2% → +7.1%)이 중국 등 해외수요확대로 증가세를 지속하였으나 철강금속(+1.8% → -8.6%)은 글로벌 공급과잉 및 중국의 저가 철강수출 확대로, 선박(+7.6% → -49.2%)은 수주물량이 급격히 줄어들면서 감소로 전환하였다. 전남지역 수입은 원유, 화학제품, 철강금속 등의 수입이 줄면서 전년대비 7.4% 감소로 전환하였다.',
'수출 증가세 지속\n1/4분기 중 수출은 전년동기대비 증가흐름을 지속하였다. 품목별로 보면 석유제품, 석유화학, 철강, 선박, 반도체, 자동차 등 대다수 품목에서 증가하였다. 석유제품은 글로벌 경기회복에 따른 에너지 수요 증가와 국제유가 급등으로 수출단가가 높은 상승세를 지속하면서 증가하였다. 석유화학도 중국, 아세안을 중심으로 합성수지, 고무 등의 수출이 큰 폭 증가한 데다 고유가로 인한 수출가격도 동반 상승하면서 증가세를 이어갔다. 철강은 건설, 조선 등 글로벌 전방산업의 수요 증대, 원자재가격 상승 및 중국 감산 등에 따른 수출단가 상승 등에 힘입어 증가세를 이어갔다. 선박은 1/4분기 중 인도물량이 확대됨에 따라 증가하였다. 반도체는 자동차 등 전방산업의 견조한 수요가 이어지는 가운데 전년동기대비로 높은 단가가 지속되면서 증가하였다. 자동차는 차량용 반도체 수급차질이 지속되었음에도 불구하고 글로벌 경기회복 흐름에 따라 수요가 늘어나면서 전년동기대비 소폭 증가하였다. 모니터링 결과 향후 수출은 증가세가 지속될 것으로 전망되었다. 석유화학 및 석유정제는 수출단가 상승과 전방산업의 수요확대 기조가 이어지면서 증가할 전망이다. 철강은 주요국 경기회복과 중국, 인도 등의 인프라 투자 확대 등으로 양호한 흐름을 이어갈 전망이다. 반도체는 글로벌 스마트폰 수요 회복, 디지털 전환 기조 등으로 견조한 증가세를 지속할 것으로 보인다. 자동차는 차량용 반도체 공급차질이 점차 완화되고 미국, 신흥시장을 중심으로 수요회복이 본격화됨에 따라 소폭 증가할 전망이다. 선박은 친환경 선박수요 지속, 글로별 교역 신장 등에도 불구하고 2021년 2/4분기 집중되었던 인도물량의 기저효과로 인해 감소할 것으로 보인다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
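For the retrieval setting this checkpoint targets, the same embeddings can rank a corpus against a query. A minimal sketch with `sentence_transformers.util.semantic_search`, reusing the `model` loaded above (the two-document corpus is illustrative):
```python
from sentence_transformers import util

corpus = [
    "전남지역 수출은 전년대비 1.2% 감소로 전환하였다.",
    "은행 수신은 2019년에 비해 증가폭이 크게 확대되었다.",
]
query = "전남지역 수출 동향"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Returns, per query, the top-k corpus entries sorted by cosine similarity
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))
```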
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `gradient_accumulation_steps`: 4
- `learning_rate`: 3e-05
- `warmup_ratio`: 0.05
- `fp16`: True
- `batch_sampler`: no_duplicates
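The tags also name `CachedGISTEmbedLoss` over 482,743 training pairs. Combined with the hyperparameters above, a run would be shaped roughly like this sketch (assuming sentence-transformers v3's trainer API; the guide model and the tiny inline dataset are assumptions, not the author's setup):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CachedGISTEmbedLoss
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("BAAI/bge-m3")
guide = SentenceTransformer("BAAI/bge-m3")  # guide model choice is an assumption

# Columns follow the (anchor, positive, negative) pattern of the widget examples
train_dataset = Dataset.from_dict({
    "anchor": ["언제 청년층 생산가능인구 수가 9,149,000명으로 드러났어"],
    "positive": ["2018년 청년층 생산가능인구는 전년대비 133천명 감소한 9,149천명임"],
    "negative": ["노인인구는 236%가 증가할 것으로 예상된다."],
})

loss = CachedGISTEmbedLoss(model, guide=guide, mini_batch_size=16)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-kor-retrieval",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    learning_rate=3e-5,
    warmup_ratio=0.05,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```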
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0011 | 1 | 3.7042 |
| 0.0021 | 2 | 4.4098 |
| 0.0032 | 3 | 4.5599 |
| 0.0042 | 4 | 4.5564 |
| 0.0053 | 5 | 5.3164 |
| 0.0064 | 6 | 4.9723 |
| 0.0074 | 7 | 5.2419 |
| 0.0085 | 8 | 3.6708 |
| 0.0095 | 9 | 3.4174 |
| 0.0106 | 10 | 3.7081 |
| 0.0117 | 11 | 3.5893 |
| 0.0127 | 12 | 2.8265 |
| 0.0138 | 13 | 1.8535 |
| 0.0149 | 14 | 2.2631 |
| 0.0159 | 15 | 1.6212 |
| 0.0170 | 16 | 1.3256 |
| 0.0180 | 17 | 3.1196 |
| 0.0191 | 18 | 2.6933 |
| 0.0202 | 19 | 2.7525 |
| 0.0212 | 20 | 1.8354 |
| 0.0223 | 21 | 1.5399 |
| 0.0233 | 22 | 1.2657 |
| 0.0244 | 23 | 1.5086 |
| 0.0255 | 24 | 1.4753 |
| 0.0265 | 25 | 1.4019 |
| 0.0276 | 26 | 1.0282 |
| 0.0286 | 27 | 1.1981 |
| 0.0297 | 28 | 1.1639 |
| 0.0308 | 29 | 1.064 |
| 0.0318 | 30 | 1.1106 |
| 0.0329 | 31 | 0.8862 |
| 0.0339 | 32 | 0.9067 |
| 0.0350 | 33 | 1.0234 |
| 0.0361 | 34 | 1.0057 |
| 0.0371 | 35 | 0.7404 |
| 0.0382 | 36 | 0.5796 |
| 0.0392 | 37 | 0.6 |
| 0.0403 | 38 | 0.6473 |
| 0.0414 | 39 | 0.7274 |
| 0.0424 | 40 | 0.5312 |
| 0.0435 | 41 | 0.6884 |
| 0.0446 | 42 | 0.4993 |
| 0.0456 | 43 | 0.5445 |
| 0.0467 | 44 | 0.2793 |
| 0.0477 | 45 | 0.4398 |
| 0.0488 | 46 | 0.4882 |
| 0.0499 | 47 | 0.3142 |
| 0.0509 | 48 | 0.253 |
| 0.0520 | 49 | 0.1723 |
| 0.0530 | 50 | 0.4482 |
| 0.0541 | 51 | 0.3704 |
| 0.0552 | 52 | 0.3844 |
| 0.0562 | 53 | 0.3141 |
| 0.0573 | 54 | 0.2717 |
| 0.0583 | 55 | 0.0936 |
| 0.0594 | 56 | 0.0795 |
| 0.0605 | 57 | 0.0754 |
| 0.0615 | 58 | 0.0839 |
| 0.0626 | 59 | 0.0739 |
| 0.0636 | 60 | 0.0622 |
| 0.0647 | 61 | 0.0541 |
| 0.0658 | 62 | 0.4835 |
| 0.0668 | 63 | 0.4849 |
| 0.0679 | 64 | 0.5093 |
| 0.0689 | 65 | 0.4725 |
| 0.0700 | 66 | 0.4658 |
| 0.0711 | 67 | 0.4257 |
| 0.0721 | 68 | 0.4656 |
| 0.0732 | 69 | 0.5188 |
| 0.0743 | 70 | 0.465 |
| 0.0753 | 71 | 0.5166 |
| 0.0764 | 72 | 0.4152 |
| 0.0774 | 73 | 0.4874 |
| 0.0785 | 74 | 0.435 |
| 0.0796 | 75 | 0.4698 |
| 0.0806 | 76 | 0.4075 |
| 0.0817 | 77 | 0.2881 |
| 0.0827 | 78 | 0.3375 |
| 0.0838 | 79 | 0.3183 |
| 0.0849 | 80 | 0.3046 |
| 0.0859 | 81 | 0.5192 |
| 0.0870 | 82 | 0.4832 |
| 0.0880 | 83 | 0.4467 |
| 0.0891 | 84 | 0.3109 |
| 0.0902 | 85 | 0.4108 |
| 0.0912 | 86 | 0.3034 |
| 0.0923 | 87 | 0.2636 |
| 0.0933 | 88 | 0.2169 |
| 0.0944 | 89 | 0.2991 |
| 0.0955 | 90 | 0.2901 |
| 0.0965 | 91 | 0.335 |
| 0.0976 | 92 | 0.3621 |
| 0.0986 | 93 | 0.2661 |
| 0.0997 | 94 | 0.3448 |
| 0.1008 | 95 | 0.1964 |
| 0.1018 | 96 | 0.2323 |
| 0.1029 | 97 | 0.2856 |
| 0.1040 | 98 | 0.2986 |
| 0.1050 | 99 | 0.2628 |
| 0.1061 | 100 | 0.2865 |
| 0.1071 | 101 | 0.2288 |
| 0.1082 | 102 | 0.208 |
| 0.1093 | 103 | 0.2074 |
| 0.1103 | 104 | 0.1906 |
| 0.1114 | 105 | 0.1639 |
| 0.1124 | 106 | 0.1597 |
| 0.1135 | 107 | 0.1896 |
| 0.1146 | 108 | 0.1387 |
| 0.1156 | 109 | 0.1281 |
| 0.1167 | 110 | 0.2742 |
| 0.1177 | 111 | 0.1787 |
| 0.1188 | 112 | 0.1449 |
| 0.1199 | 113 | 0.1114 |
| 0.1209 | 114 | 0.1889 |
| 0.1220 | 115 | 0.1044 |
| 0.1230 | 116 | 0.2556 |
| 0.1241 | 117 | 0.2081 |
| 0.1252 | 118 | 0.2649 |
| 0.1262 | 119 | 0.3898 |
| 0.1273 | 120 | 0.6489 |
| 0.1283 | 121 | 0.6267 |
| 0.1294 | 122 | 0.6013 |
| 0.1305 | 123 | 0.5391 |
| 0.1315 | 124 | 0.5176 |
| 0.1326 | 125 | 0.4483 |
| 0.1337 | 126 | 0.4734 |
| 0.1347 | 127 | 0.6635 |
| 0.1358 | 128 | 0.3238 |
| 0.1368 | 129 | 0.1651 |
| 0.1379 | 130 | 0.4351 |
| 0.1390 | 131 | 0.2721 |
| 0.1400 | 132 | 0.2922 |
| 0.1411 | 133 | 0.3631 |
| 0.1421 | 134 | 0.4333 |
| 0.1432 | 135 | 0.2805 |
| 0.1443 | 136 | 0.0546 |
| 0.1453 | 137 | 0.0316 |
| 0.1464 | 138 | 0.0278 |
| 0.1474 | 139 | 0.0151 |
| 0.1485 | 140 | 0.0177 |
| 0.1496 | 141 | 0.0247 |
| 0.1506 | 142 | 0.0168 |
| 0.1517 | 143 | 0.0278 |
| 0.1527 | 144 | 0.0422 |
| 0.1538 | 145 | 0.0363 |
| 0.1549 | 146 | 0.0484 |
| 0.1559 | 147 | 0.0326 |
| 0.1570 | 148 | 0.009 |
| 0.1580 | 149 | 0.0216 |
| 0.1591 | 150 | 0.005 |
| 0.1602 | 151 | 0.0514 |
| 0.1612 | 152 | 0.0131 |
| 0.1623 | 153 | 0.0145 |
| 0.1634 | 154 | 0.0246 |
| 0.1644 | 155 | 0.0111 |
| 0.1655 | 156 | 0.0184 |
| 0.1665 | 157 | 0.0168 |
| 0.1676 | 158 | 0.0055 |
| 0.1687 | 159 | 0.0091 |
| 0.1697 | 160 | 0.0363 |
| 0.1708 | 161 | 0.0039 |
| 0.1718 | 162 | 0.0119 |
| 0.1729 | 163 | 0.0284 |
| 0.1740 | 164 | 0.0055 |
| 0.1750 | 165 | 0.0193 |
| 0.1761 | 166 | 0.0138 |
| 0.1771 | 167 | 0.0099 |
| 0.1782 | 168 | 0.026 |
| 0.1793 | 169 | 0.025 |
| 0.1803 | 170 | 0.0318 |
| 0.1814 | 171 | 0.0088 |
| 0.1824 | 172 | 0.0137 |
| 0.1835 | 173 | 0.0158 |
| 0.1846 | 174 | 0.0271 |
| 0.1856 | 175 | 0.0181 |
| 0.1867 | 176 | 0.026 |
| 0.1877 | 177 | 0.0207 |
| 0.1888 | 178 | 0.009 |
| 0.1899 | 179 | 0.0117 |
| 0.1909 | 180 | 0.0265 |
| 0.1920 | 181 | 0.0151 |
| 0.1931 | 182 | 0.0254 |
| 0.1941 | 183 | 0.0101 |
| 0.1952 | 184 | 0.0096 |
| 0.1962 | 185 | 0.0225 |
| 0.1973 | 186 | 0.0122 |
| 0.1984 | 187 | 0.0184 |
| 0.1994 | 188 | 0.0326 |
| 0.2005 | 189 | 0.0163 |
| 0.2015 | 190 | 0.0257 |
| 0.2026 | 191 | 0.0126 |
| 0.2037 | 192 | 0.0121 |
| 0.2047 | 193 | 0.0251 |
| 0.2058 | 194 | 0.0145 |
| 0.2068 | 195 | 0.0244 |
| 0.2079 | 196 | 0.0196 |
| 0.2090 | 197 | 0.0121 |
| 0.2100 | 198 | 0.0145 |
| 0.2111 | 199 | 0.0084 |
| 0.2121 | 200 | 0.013 |
| 0.2132 | 201 | 0.0123 |
| 0.2143 | 202 | 0.009 |
| 0.2153 | 203 | 0.0248 |
| 0.2164 | 204 | 0.0236 |
| 0.2174 | 205 | 0.0195 |
| 0.2185 | 206 | 0.0206 |
| 0.2196 | 207 | 0.0201 |
| 0.2206 | 208 | 0.0185 |
| 0.2217 | 209 | 0.0206 |
| 0.2228 | 210 | 0.0233 |
| 0.2238 | 211 | 0.0429 |
| 0.2249 | 212 | 0.0161 |
| 0.2259 | 213 | 0.0334 |
| 0.2270 | 214 | 0.0128 |
| 0.2281 | 215 | 0.0273 |
| 0.2291 | 216 | 0.0228 |
| 0.2302 | 217 | 0.0199 |
| 0.2312 | 218 | 0.0154 |
| 0.2323 | 219 | 0.0051 |
| 0.2334 | 220 | 0.018 |
| 0.2344 | 221 | 0.0194 |
| 0.2355 | 222 | 0.0095 |
| 0.2365 | 223 | 0.0058 |
| 0.2376 | 224 | 0.0285 |
| 0.2387 | 225 | 0.0107 |
| 0.2397 | 226 | 0.0196 |
| 0.2408 | 227 | 0.0311 |
| 0.2418 | 228 | 0.0198 |
| 0.2429 | 229 | 0.0126 |
| 0.2440 | 230 | 0.0168 |
| 0.2450 | 231 | 0.0069 |
| 0.2461 | 232 | 0.0112 |
| 0.2471 | 233 | 0.0133 |
| 0.2482 | 234 | 0.0234 |
| 0.2493 | 235 | 0.0174 |
| 0.2503 | 236 | 0.0133 |
| 0.2514 | 237 | 0.0068 |
| 0.2525 | 238 | 0.0213 |
| 0.2535 | 239 | 0.0197 |
| 0.2546 | 240 | 0.011 |
| 0.2556 | 241 | 0.0226 |
| 0.2567 | 242 | 0.0305 |
| 0.2578 | 243 | 0.0198 |
| 0.2588 | 244 | 0.0318 |
| 0.2599 | 245 | 0.024 |
| 0.2609 | 246 | 0.0349 |
| 0.2620 | 247 | 0.1405 |
| 0.2631 | 248 | 0.1075 |
| 0.2641 | 249 | 0.1303 |
| 0.2652 | 250 | 0.1108 |
| 0.2662 | 251 | 0.0913 |
| 0.2673 | 252 | 0.081 |
| 0.2684 | 253 | 0.0516 |
| 0.2694 | 254 | 0.082 |
| 0.2705 | 255 | 0.0558 |
| 0.2715 | 256 | 0.05 |
| 0.2726 | 257 | 0.0829 |
| 0.2737 | 258 | 0.1127 |
| 0.2747 | 259 | 0.0559 |
| 0.2758 | 260 | 0.1117 |
| 0.2768 | 261 | 0.06 |
| 0.2779 | 262 | 0.0525 |
| 0.2790 | 263 | 0.0488 |
| 0.2800 | 264 | 0.0403 |
| 0.2811 | 265 | 0.0978 |
| 0.2822 | 266 | 0.0404 |
| 0.2832 | 267 | 0.0481 |
| 0.2843 | 268 | 0.0357 |
| 0.2853 | 269 | 0.0327 |
| 0.2864 | 270 | 0.0615 |
| 0.2875 | 271 | 0.0662 |
| 0.2885 | 272 | 0.0546 |
| 0.2896 | 273 | 0.0523 |
| 0.2906 | 274 | 0.0436 |
| 0.2917 | 275 | 0.0509 |
| 0.2928 | 276 | 0.0279 |
| 0.2938 | 277 | 0.0405 |
| 0.2949 | 278 | 0.0608 |
| 0.2959 | 279 | 0.0223 |
| 0.2970 | 280 | 0.0103 |
| 0.2981 | 281 | 0.0432 |
| 0.2991 | 282 | 0.0491 |
| 0.3002 | 283 | 0.0237 |
| 0.3012 | 284 | 0.0458 |
| 0.3023 | 285 | 0.0362 |
| 0.3034 | 286 | 0.0235 |
| 0.3044 | 287 | 0.025 |
| 0.3055 | 288 | 0.0354 |
| 0.3065 | 289 | 0.0164 |
| 0.3076 | 290 | 0.0323 |
| 0.3087 | 291 | 0.0334 |
| 0.3097 | 292 | 0.019 |
| 0.3108 | 293 | 0.0246 |
| 0.3119 | 294 | 0.0243 |
| 0.3129 | 295 | 0.0373 |
| 0.3140 | 296 | 0.0247 |
| 0.3150 | 297 | 0.017 |
| 0.3161 | 298 | 0.0158 |
| 0.3172 | 299 | 0.0447 |
| 0.3182 | 300 | 0.036 |
| 0.3193 | 301 | 0.0467 |
| 0.3203 | 302 | 0.0498 |
| 0.3214 | 303 | 0.0371 |
| 0.3225 | 304 | 0.0367 |
| 0.3235 | 305 | 0.0696 |
| 0.3246 | 306 | 0.0432 |
| 0.3256 | 307 | 0.0472 |
| 0.3267 | 308 | 0.0361 |
| 0.3278 | 309 | 0.0282 |
| 0.3288 | 310 | 0.0427 |
| 0.3299 | 311 | 0.0264 |
| 0.3309 | 312 | 0.0857 |
| 0.3320 | 313 | 0.0697 |
| 0.3331 | 314 | 0.09 |
| 0.3341 | 315 | 0.0509 |
| 0.3352 | 316 | 0.0438 |
| 0.3363 | 317 | 0.0451 |
| 0.3373 | 318 | 0.0337 |
| 0.3384 | 319 | 0.032 |
| 0.3394 | 320 | 0.0299 |
| 0.3405 | 321 | 0.0262 |
| 0.3416 | 322 | 0.0394 |
| 0.3426 | 323 | 0.0358 |
| 0.3437 | 324 | 0.0296 |
| 0.3447 | 325 | 0.029 |
| 0.3458 | 326 | 0.0235 |
| 0.3469 | 327 | 0.0541 |
| 0.3479 | 328 | 0.0502 |
| 0.3490 | 329 | 0.0566 |
| 0.3500 | 330 | 0.059 |
| 0.3511 | 331 | 0.0526 |
| 0.3522 | 332 | 0.0142 |
| 0.3532 | 333 | 0.0502 |
| 0.3543 | 334 | 0.0188 |
| 0.3553 | 335 | 0.0348 |
| 0.3564 | 336 | 0.0369 |
| 0.3575 | 337 | 0.0171 |
| 0.3585 | 338 | 0.0251 |
| 0.3596 | 339 | 0.0594 |
| 0.3606 | 340 | 0.0661 |
| 0.3617 | 341 | 0.0671 |
| 0.3628 | 342 | 0.0492 |
| 0.3638 | 343 | 0.0712 |
| 0.3649 | 344 | 0.0678 |
| 0.3660 | 345 | 0.0722 |
| 0.3670 | 346 | 0.0464 |
| 0.3681 | 347 | 0.0373 |
| 0.3691 | 348 | 0.0879 |
| 0.3702 | 349 | 0.0712 |
| 0.3713 | 350 | 0.0527 |
| 0.3723 | 351 | 0.0927 |
| 0.3734 | 352 | 0.0562 |
| 0.3744 | 353 | 0.0676 |
| 0.3755 | 354 | 0.0603 |
| 0.3766 | 355 | 0.0529 |
| 0.3776 | 356 | 0.1075 |
| 0.3787 | 357 | 0.0553 |
| 0.3797 | 358 | 0.048 |
| 0.3808 | 359 | 0.0347 |
| 0.3819 | 360 | 0.0132 |
| 0.3829 | 361 | 0.0364 |
| 0.3840 | 362 | 0.0521 |
| 0.3850 | 363 | 0.0636 |
| 0.3861 | 364 | 0.0467 |
| 0.3872 | 365 | 0.0391 |
| 0.3882 | 366 | 0.0151 |
| 0.3893 | 367 | 0.017 |
| 0.3903 | 368 | 0.0415 |
| 0.3914 | 369 | 0.0307 |
| 0.3925 | 370 | 0.077 |
| 0.3935 | 371 | 0.0317 |
| 0.3946 | 372 | 0.0395 |
| 0.3957 | 373 | 0.0475 |
| 0.3967 | 374 | 0.0451 |
| 0.3978 | 375 | 0.0224 |
| 0.3988 | 376 | 0.0427 |
| 0.3999 | 377 | 0.0337 |
| 0.4010 | 378 | 0.0198 |
| 0.4020 | 379 | 0.0716 |
| 0.4031 | 380 | 0.0342 |
| 0.4041 | 381 | 0.0718 |
| 0.4052 | 382 | 0.0783 |
| 0.4063 | 383 | 0.0702 |
| 0.4073 | 384 | 0.0365 |
| 0.4084 | 385 | 0.0575 |
| 0.4094 | 386 | 0.0278 |
| 0.4105 | 387 | 0.0531 |
| 0.4116 | 388 | 0.0521 |
| 0.4126 | 389 | 0.0817 |
| 0.4137 | 390 | 0.0484 |
| 0.4147 | 391 | 0.0642 |
| 0.4158 | 392 | 0.0374 |
| 0.4169 | 393 | 0.0504 |
| 0.4179 | 394 | 0.0353 |
| 0.4190 | 395 | 0.0556 |
| 0.4200 | 396 | 0.0354 |
| 0.4211 | 397 | 0.0609 |
| 0.4222 | 398 | 0.056 |
| 0.4232 | 399 | 0.042 |
| 0.4243 | 400 | 0.0266 |
| 0.4254 | 401 | 0.0461 |
| 0.4264 | 402 | 0.0674 |
| 0.4275 | 403 | 0.0293 |
| 0.4285 | 404 | 0.0489 |
| 0.4296 | 405 | 0.0546 |
| 0.4307 | 406 | 0.0649 |
| 0.4317 | 407 | 0.039 |
| 0.4328 | 408 | 0.0358 |
| 0.4338 | 409 | 0.0515 |
| 0.4349 | 410 | 0.026 |
| 0.4360 | 411 | 0.0476 |
| 0.4370 | 412 | 0.0736 |
| 0.4381 | 413 | 0.0479 |
| 0.4391 | 414 | 0.0742 |
| 0.4402 | 415 | 0.0435 |
| 0.4413 | 416 | 0.0585 |
| 0.4423 | 417 | 0.051 |
| 0.4434 | 418 | 0.0374 |
| 0.4444 | 419 | 0.0271 |
| 0.4455 | 420 | 0.0397 |
| 0.4466 | 421 | 0.0555 |
| 0.4476 | 422 | 0.0406 |
| 0.4487 | 423 | 0.0282 |
| 0.4497 | 424 | 0.0225 |
| 0.4508 | 425 | 0.0303 |
| 0.4519 | 426 | 0.0763 |
| 0.4529 | 427 | 0.0438 |
| 0.4540 | 428 | 0.0521 |
| 0.4551 | 429 | 0.0415 |
| 0.4561 | 430 | 0.0796 |
| 0.4572 | 431 | 0.0703 |
| 0.4582 | 432 | 0.0754 |
| 0.4593 | 433 | 0.131 |
| 0.4604 | 434 | 0.0805 |
| 0.4614 | 435 | 0.0816 |
| 0.4625 | 436 | 0.096 |
| 0.4635 | 437 | 0.119 |
| 0.4646 | 438 | 0.0648 |
| 0.4657 | 439 | 0.0961 |
| 0.4667 | 440 | 0.0612 |
| 0.4678 | 441 | 0.036 |
| 0.4688 | 442 | 0.2117 |
| 0.4699 | 443 | 0.1767 |
| 0.4710 | 444 | 0.2005 |
| 0.4720 | 445 | 0.1606 |
| 0.4731 | 446 | 0.1282 |
| 0.4741 | 447 | 0.1721 |
| 0.4752 | 448 | 0.1293 |
| 0.4763 | 449 | 0.1211 |
| 0.4773 | 450 | 0.1445 |
| 0.4784 | 451 | 0.1381 |
| 0.4794 | 452 | 0.1315 |
| 0.4805 | 453 | 0.0651 |
| 0.4816 | 454 | 0.0783 |
| 0.4826 | 455 | 0.1153 |
| 0.4837 | 456 | 0.1458 |
| 0.4848 | 457 | 0.0817 |
| 0.4858 | 458 | 0.1302 |
| 0.4869 | 459 | 0.1129 |
| 0.4879 | 460 | 0.0853 |
| 0.4890 | 461 | 0.0934 |
| 0.4901 | 462 | 0.0802 |
| 0.4911 | 463 | 0.0876 |
| 0.4922 | 464 | 0.0927 |
| 0.4932 | 465 | 0.1007 |
| 0.4943 | 466 | 0.0904 |
| 0.4954 | 467 | 0.0951 |
| 0.4964 | 468 | 0.0582 |
| 0.4975 | 469 | 0.0722 |
| 0.4985 | 470 | 0.0545 |
| 0.4996 | 471 | 0.0802 |
| 0.5007 | 472 | 0.075 |
| 0.5017 | 473 | 0.058 |
| 0.5028 | 474 | 0.0583 |
| 0.5038 | 475 | 0.0737 |
| 0.5049 | 476 | 0.0371 |
| 0.5060 | 477 | 0.0896 |
| 0.5070 | 478 | 0.0999 |
| 0.5081 | 479 | 0.1346 |
| 0.5091 | 480 | 0.1087 |
| 0.5102 | 481 | 0.1317 |
| 0.5113 | 482 | 0.0484 |
| 0.5123 | 483 | 0.0754 |
| 0.5134 | 484 | 0.0845 |
| 0.5145 | 485 | 0.0571 |
| 0.5155 | 486 | 0.0698 |
| 0.5166 | 487 | 0.0715 |
| 0.5176 | 488 | 0.1011 |
| 0.5187 | 489 | 0.0773 |
| 0.5198 | 490 | 0.0657 |
| 0.5208 | 491 | 0.075 |
| 0.5219 | 492 | 0.1186 |
| 0.5229 | 493 | 0.0799 |
| 0.5240 | 494 | 0.1062 |
| 0.5251 | 495 | 0.0814 |
| 0.5261 | 496 | 0.1071 |
| 0.5272 | 497 | 0.127 |
| 0.5282 | 498 | 0.0792 |
| 0.5293 | 499 | 0.0559 |
| 0.5304 | 500 | 0.0813 |
| 0.5314 | 501 | 0.0822 |
| 0.5325 | 502 | 0.0704 |
| 0.5335 | 503 | 0.0919 |
| 0.5346 | 504 | 0.0927 |
| 0.5357 | 505 | 0.0851 |
| 0.5367 | 506 | 0.0766 |
| 0.5378 | 507 | 0.0919 |
| 0.5388 | 508 | 0.0489 |
| 0.5399 | 509 | 0.0491 |
| 0.5410 | 510 | 0.0813 |
| 0.5420 | 511 | 0.0763 |
| 0.5431 | 512 | 0.0736 |
| 0.5442 | 513 | 0.0588 |
| 0.5452 | 514 | 0.057 |
| 0.5463 | 515 | 0.0662 |
| 0.5473 | 516 | 0.0859 |
| 0.5484 | 517 | 0.0824 |
| 0.5495 | 518 | 0.0548 |
| 0.5505 | 519 | 0.0565 |
| 0.5516 | 520 | 0.0938 |
| 0.5526 | 521 | 0.0796 |
| 0.5537 | 522 | 0.0891 |
| 0.5548 | 523 | 0.0975 |
| 0.5558 | 524 | 0.0772 |
| 0.5569 | 525 | 0.0548 |
| 0.5579 | 526 | 0.0508 |
| 0.5590 | 527 | 0.0857 |
| 0.5601 | 528 | 0.0755 |
| 0.5611 | 529 | 0.0851 |
| 0.5622 | 530 | 0.0695 |
| 0.5632 | 531 | 0.0711 |
| 0.5643 | 532 | 0.1109 |
| 0.5654 | 533 | 0.048 |
| 0.5664 | 534 | 0.0823 |
| 0.5675 | 535 | 0.0609 |
| 0.5685 | 536 | 0.0701 |
| 0.5696 | 537 | 0.0722 |
| 0.5707 | 538 | 0.1006 |
| 0.5717 | 539 | 0.0827 |
| 0.5728 | 540 | 0.0852 |
| 0.5739 | 541 | 0.1153 |
| 0.5749 | 542 | 0.078 |
| 0.5760 | 543 | 0.0584 |
| 0.5770 | 544 | 0.0766 |
| 0.5781 | 545 | 0.0441 |
| 0.5792 | 546 | 0.0858 |
| 0.5802 | 547 | 0.0984 |
| 0.5813 | 548 | 0.0931 |
| 0.5823 | 549 | 0.1091 |
| 0.5834 | 550 | 0.0704 |
| 0.5845 | 551 | 0.0765 |
| 0.5855 | 552 | 0.0689 |
| 0.5866 | 553 | 0.0776 |
| 0.5876 | 554 | 0.0648 |
| 0.5887 | 555 | 0.1011 |
| 0.5898 | 556 | 0.0574 |
| 0.5908 | 557 | 0.1231 |
| 0.5919 | 558 | 0.0598 |
| 0.5929 | 559 | 0.04 |
| 0.5940 | 560 | 0.0412 |
| 0.5951 | 561 | 0.0644 |
| 0.5961 | 562 | 0.0408 |
| 0.5972 | 563 | 0.0597 |
| 0.5982 | 564 | 0.0455 |
| 0.5993 | 565 | 0.0356 |
| 0.6004 | 566 | 0.0267 |
| 0.6014 | 567 | 0.063 |
| 0.6025 | 568 | 0.0683 |
| 0.6036 | 569 | 0.0576 |
| 0.6046 | 570 | 0.0473 |
| 0.6057 | 571 | 0.0728 |
| 0.6067 | 572 | 0.0411 |
| 0.6078 | 573 | 0.0459 |
| 0.6089 | 574 | 0.0538 |
| 0.6099 | 575 | 0.0431 |
| 0.6110 | 576 | 0.0592 |
| 0.6120 | 577 | 0.0717 |
| 0.6131 | 578 | 0.0897 |
| 0.6142 | 579 | 0.0537 |
| 0.6152 | 580 | 0.0603 |
| 0.6163 | 581 | 0.1405 |
| 0.6173 | 582 | 0.1461 |
| 0.6184 | 583 | 0.0665 |
| 0.6195 | 584 | 0.0783 |
| 0.6205 | 585 | 0.0403 |
| 0.6216 | 586 | 0.0407 |
| 0.6226 | 587 | 0.0896 |
| 0.6237 | 588 | 0.0875 |
| 0.6248 | 589 | 0.11 |
| 0.6258 | 590 | 0.1066 |
| 0.6269 | 591 | 0.0471 |
| 0.6280 | 592 | 0.0338 |
| 0.6290 | 593 | 0.0524 |
| 0.6301 | 594 | 0.0563 |
| 0.6311 | 595 | 0.0526 |
| 0.6322 | 596 | 0.0325 |
| 0.6333 | 597 | 0.0602 |
| 0.6343 | 598 | 0.0395 |
| 0.6354 | 599 | 0.0545 |
| 0.6364 | 600 | 0.0713 |
| 0.6375 | 601 | 0.0402 |
| 0.6386 | 602 | 0.0399 |
| 0.6396 | 603 | 0.0415 |
| 0.6407 | 604 | 0.0536 |
| 0.6417 | 605 | 0.035 |
| 0.6428 | 606 | 0.044 |
| 0.6439 | 607 | 0.0502 |
| 0.6449 | 608 | 0.0209 |
| 0.6460 | 609 | 0.0426 |
| 0.6470 | 610 | 0.0364 |
| 0.6481 | 611 | 0.0529 |
| 0.6492 | 612 | 0.0651 |
| 0.6502 | 613 | 0.0418 |
| 0.6513 | 614 | 0.0562 |
| 0.6523 | 615 | 0.0408 |
| 0.6534 | 616 | 0.0242 |
| 0.6545 | 617 | 0.0265 |
| 0.6555 | 618 | 0.0309 |
| 0.6566 | 619 | 0.0463 |
| 0.6577 | 620 | 0.0256 |
| 0.6587 | 621 | 0.0368 |
| 0.6598 | 622 | 0.0253 |
| 0.6608 | 623 | 0.0444 |
| 0.6619 | 624 | 0.0628 |
| 0.6630 | 625 | 0.0414 |
| 0.6640 | 626 | 0.0244 |
| 0.6651 | 627 | 0.0205 |
| 0.6661 | 628 | 0.0162 |
| 0.6672 | 629 | 0.0166 |
| 0.6683 | 630 | 0.0281 |
| 0.6693 | 631 | 0.0252 |
| 0.6704 | 632 | 0.0192 |
| 0.6714 | 633 | 0.0223 |
| 0.6725 | 634 | 0.0141 |
| 0.6736 | 635 | 0.0165 |
| 0.6746 | 636 | 0.0314 |
| 0.6757 | 637 | 0.1062 |
| 0.6767 | 638 | 0.0371 |
| 0.6778 | 639 | 0.015 |
| 0.6789 | 640 | 0.0301 |
| 0.6799 | 641 | 0.0417 |
| 0.6810 | 642 | 0.0593 |
| 0.6820 | 643 | 0.0399 |
| 0.6831 | 644 | 0.0126 |
| 0.6842 | 645 | 0.0047 |
| 0.6852 | 646 | 0.0013 |
| 0.6863 | 647 | 0.0067 |
| 0.6874 | 648 | 0.0019 |
| 0.6884 | 649 | 0.0154 |
| 0.6895 | 650 | 0.0006 |
| 0.6905 | 651 | 0.0504 |
| 0.6916 | 652 | 0.0159 |
| 0.6927 | 653 | 0.0198 |
| 0.6937 | 654 | 0.2964 |
| 0.6948 | 655 | 0.5783 |
| 0.6958 | 656 | 0.4341 |
| 0.6969 | 657 | 0.3179 |
| 0.6980 | 658 | 0.3856 |
| 0.6990 | 659 | 0.4783 |
| 0.7001 | 660 | 0.3014 |
| 0.7011 | 661 | 0.3303 |
| 0.7022 | 662 | 0.358 |
| 0.7033 | 663 | 0.4306 |
| 0.7043 | 664 | 0.4152 |
| 0.7054 | 665 | 0.2776 |
| 0.7064 | 666 | 0.288 |
| 0.7075 | 667 | 0.2787 |
| 0.7086 | 668 | 0.2555 |
| 0.7096 | 669 | 0.2825 |
| 0.7107 | 670 | 0.2834 |
| 0.7117 | 671 | 0.2461 |
| 0.7128 | 672 | 0.2625 |
| 0.7139 | 673 | 0.2299 |
| 0.7149 | 674 | 0.2478 |
| 0.7160 | 675 | 0.2339 |
| 0.7171 | 676 | 0.3259 |
| 0.7181 | 677 | 0.4917 |
| 0.7192 | 678 | 0.231 |
| 0.7202 | 679 | 0.1833 |
| 0.7213 | 680 | 0.1768 |
| 0.7224 | 681 | 0.1955 |
| 0.7234 | 682 | 0.235 |
| 0.7245 | 683 | 0.224 |
| 0.7255 | 684 | 0.2083 |
| 0.7266 | 685 | 0.2632 |
| 0.7277 | 686 | 0.1705 |
| 0.7287 | 687 | 0.239 |
| 0.7298 | 688 | 0.2403 |
| 0.7308 | 689 | 0.3655 |
| 0.7319 | 690 | 0.3451 |
| 0.7330 | 691 | 0.2554 |
| 0.7340 | 692 | 0.3059 |
| 0.7351 | 693 | 0.2546 |
| 0.7361 | 694 | 0.2133 |
| 0.7372 | 695 | 0.3031 |
| 0.7383 | 696 | 0.1978 |
| 0.7393 | 697 | 0.2051 |
| 0.7404 | 698 | 0.1882 |
| 0.7414 | 699 | 0.2374 |
| 0.7425 | 700 | 0.2157 |
| 0.7436 | 701 | 0.2917 |
| 0.7446 | 702 | 0.1717 |
| 0.7457 | 703 | 0.1438 |
| 0.7468 | 704 | 0.1678 |
| 0.7478 | 705 | 0.2295 |
| 0.7489 | 706 | 0.1697 |
| 0.7499 | 707 | 0.2032 |
| 0.7510 | 708 | 0.1568 |
| 0.7521 | 709 | 0.1483 |
| 0.7531 | 710 | 0.1863 |
| 0.7542 | 711 | 0.1585 |
| 0.7552 | 712 | 0.16 |
| 0.7563 | 713 | 0.1809 |
| 0.7574 | 714 | 0.1599 |
| 0.7584 | 715 | 0.1851 |
| 0.7595 | 716 | 0.1722 |
| 0.7605 | 717 | 0.1718 |
| 0.7616 | 718 | 0.182 |
| 0.7627 | 719 | 0.1263 |
| 0.7637 | 720 | 0.1608 |
| 0.7648 | 721 | 0.1589 |
| 0.7658 | 722 | 0.1615 |
| 0.7669 | 723 | 0.1385 |
| 0.7680 | 724 | 0.1626 |
| 0.7690 | 725 | 0.1592 |
| 0.7701 | 726 | 0.1405 |
| 0.7711 | 727 | 0.1793 |
| 0.7722 | 728 | 0.1639 |
| 0.7733 | 729 | 0.2661 |
| 0.7743 | 730 | 0.1306 |
| 0.7754 | 731 | 0.1914 |
| 0.7765 | 732 | 0.1682 |
| 0.7775 | 733 | 0.2162 |
| 0.7786 | 734 | 0.1439 |
| 0.7796 | 735 | 0.1177 |
| 0.7807 | 736 | 0.1595 |
| 0.7818 | 737 | 0.1303 |
| 0.7828 | 738 | 0.2224 |
| 0.7839 | 739 | 0.1414 |
| 0.7849 | 740 | 0.1395 |
| 0.7860 | 741 | 0.1238 |
| 0.7871 | 742 | 0.1319 |
| 0.7881 | 743 | 0.2027 |
| 0.7892 | 744 | 0.0817 |
| 0.7902 | 745 | 0.101 |
| 0.7913 | 746 | 0.1914 |
| 0.7924 | 747 | 0.1235 |
| 0.7934 | 748 | 0.1635 |
| 0.7945 | 749 | 0.1551 |
| 0.7955 | 750 | 0.2518 |
| 0.7966 | 751 | 0.1477 |
| 0.7977 | 752 | 0.1588 |
| 0.7987 | 753 | 0.1384 |
| 0.7998 | 754 | 0.1724 |
| 0.8008 | 755 | 0.1841 |
| 0.8019 | 756 | 0.1639 |
| 0.8030 | 757 | 0.1974 |
| 0.8040 | 758 | 0.066 |
| 0.8051 | 759 | 0.1331 |
| 0.8062 | 760 | 0.1444 |
| 0.8072 | 761 | 0.1243 |
| 0.8083 | 762 | 0.1583 |
| 0.8093 | 763 | 0.1378 |
| 0.8104 | 764 | 0.1309 |
| 0.8115 | 765 | 0.1588 |
| 0.8125 | 766 | 0.0926 |
| 0.8136 | 767 | 0.1255 |
| 0.8146 | 768 | 0.0968 |
| 0.8157 | 769 | 0.1393 |
| 0.8168 | 770 | 0.1094 |
| 0.8178 | 771 | 0.0904 |
| 0.8189 | 772 | 0.1572 |
| 0.8199 | 773 | 0.0711 |
| 0.8210 | 774 | 0.1014 |
| 0.8221 | 775 | 0.1613 |
| 0.8231 | 776 | 0.1737 |
| 0.8242 | 777 | 0.1312 |
| 0.8252 | 778 | 0.1142 |
| 0.8263 | 779 | 0.1416 |
| 0.8274 | 780 | 0.0773 |
| 0.8284 | 781 | 0.1457 |
| 0.8295 | 782 | 0.1125 |
| 0.8305 | 783 | 0.0863 |
| 0.8316 | 784 | 0.0884 |
| 0.8327 | 785 | 0.1128 |
| 0.8337 | 786 | 0.137 |
| 0.8348 | 787 | 0.1402 |
| 0.8359 | 788 | 0.0916 |
| 0.8369 | 789 | 0.129 |
| 0.8380 | 790 | 0.0848 |
| 0.8390 | 791 | 0.1328 |
| 0.8401 | 792 | 0.1102 |
| 0.8412 | 793 | 0.0634 |
| 0.8422 | 794 | 0.1209 |
| 0.8433 | 795 | 0.0593 |
| 0.8443 | 796 | 0.1537 |
| 0.8454 | 797 | 0.118 |
| 0.8465 | 798 | 0.2072 |
| 0.8475 | 799 | 0.0652 |
| 0.8486 | 800 | 0.0991 |
| 0.8496 | 801 | 0.1198 |
| 0.8507 | 802 | 0.0435 |
| 0.8518 | 803 | 0.0973 |
| 0.8528 | 804 | 0.1537 |
| 0.8539 | 805 | 0.0665 |
| 0.8549 | 806 | 0.0811 |
| 0.8560 | 807 | 0.093 |
| 0.8571 | 808 | 0.0862 |
| 0.8581 | 809 | 0.1061 |
| 0.8592 | 810 | 0.1301 |
| 0.8602 | 811 | 0.1807 |
| 0.8613 | 812 | 0.1437 |
| 0.8624 | 813 | 0.1105 |
| 0.8634 | 814 | 0.1493 |
| 0.8645 | 815 | 0.1062 |
| 0.8656 | 816 | 0.1005 |
| 0.8666 | 817 | 0.1121 |
| 0.8677 | 818 | 0.0767 |
| 0.8687 | 819 | 0.0823 |
| 0.8698 | 820 | 0.1009 |
| 0.8709 | 821 | 0.1006 |
| 0.8719 | 822 | 0.1404 |
| 0.8730 | 823 | 0.1079 |
| 0.8740 | 824 | 0.1414 |
| 0.8751 | 825 | 0.0947 |
| 0.8762 | 826 | 0.0827 |
| 0.8772 | 827 | 0.116 |
| 0.8783 | 828 | 0.1462 |
| 0.8793 | 829 | 0.1431 |
| 0.8804 | 830 | 0.0911 |
| 0.8815 | 831 | 0.1039 |
| 0.8825 | 832 | 0.0501 |
| 0.8836 | 833 | 0.066 |
| 0.8846 | 834 | 0.0775 |
| 0.8857 | 835 | 0.0605 |
| 0.8868 | 836 | 0.0651 |
| 0.8878 | 837 | 0.1079 |
| 0.8889 | 838 | 0.1664 |
| 0.8899 | 839 | 0.1982 |
| 0.8910 | 840 | 0.1549 |
| 0.8921 | 841 | 0.0944 |
| 0.8931 | 842 | 0.0645 |
| 0.8942 | 843 | 0.1407 |
| 0.8953 | 844 | 0.0975 |
| 0.8963 | 845 | 0.1142 |
| 0.8974 | 846 | 0.0814 |
| 0.8984 | 847 | 0.1161 |
| 0.8995 | 848 | 0.1087 |
| 0.9006 | 849 | 0.1345 |
| 0.9016 | 850 | 0.1259 |
| 0.9027 | 851 | 0.1402 |
| 0.9037 | 852 | 0.0984 |
| 0.9048 | 853 | 0.1238 |
| 0.9059 | 854 | 0.0818 |
| 0.9069 | 855 | 0.0998 |
| 0.9080 | 856 | 0.0865 |
| 0.9090 | 857 | 0.0814 |
| 0.9101 | 858 | 0.0685 |
| 0.9112 | 859 | 0.0847 |
| 0.9122 | 860 | 0.0518 |
| 0.9133 | 861 | 0.066 |
| 0.9143 | 862 | 0.1071 |
| 0.9154 | 863 | 0.0645 |
| 0.9165 | 864 | 0.0852 |
| 0.9175 | 865 | 0.0967 |
| 0.9186 | 866 | 0.1119 |
| 0.9196 | 867 | 0.0908 |
| 0.9207 | 868 | 0.0405 |
| 0.9218 | 869 | 0.0902 |
| 0.9228 | 870 | 0.0726 |
| 0.9239 | 871 | 0.067 |
| 0.9250 | 872 | 0.0636 |
| 0.9260 | 873 | 0.0576 |
| 0.9271 | 874 | 0.0712 |
| 0.9281 | 875 | 0.0881 |
| 0.9292 | 876 | 0.0716 |
| 0.9303 | 877 | 0.0509 |
| 0.9313 | 878 | 0.0756 |
| 0.9324 | 879 | 0.1082 |
| 0.9334 | 880 | 0.0581 |
| 0.9345 | 881 | 0.0861 |
| 0.9356 | 882 | 0.0692 |
| 0.9366 | 883 | 0.071 |
| 0.9377 | 884 | 0.0576 |
| 0.9387 | 885 | 0.0611 |
| 0.9398 | 886 | 0.056 |
| 0.9409 | 887 | 0.0728 |
| 0.9419 | 888 | 0.1169 |
| 0.9430 | 889 | 0.0735 |
| 0.9440 | 890 | 0.1343 |
| 0.9451 | 891 | 0.0717 |
| 0.9462 | 892 | 0.0953 |
| 0.9472 | 893 | 0.0759 |
| 0.9483 | 894 | 0.0806 |
| 0.9494 | 895 | 0.0753 |
| 0.9504 | 896 | 0.1001 |
| 0.9515 | 897 | 0.096 |
| 0.9525 | 898 | 0.0423 |
| 0.9536 | 899 | 0.0737 |
| 0.9547 | 900 | 0.1075 |
| 0.9557 | 901 | 0.0858 |
| 0.9568 | 902 | 0.0834 |
| 0.9578 | 903 | 0.0512 |
| 0.9589 | 904 | 0.0568 |
| 0.9600 | 905 | 0.1081 |
| 0.9610 | 906 | 0.0557 |
| 0.9621 | 907 | 0.0999 |
| 0.9631 | 908 | 0.097 |
| 0.9642 | 909 | 0.0998 |
| 0.9653 | 910 | 0.0831 |
| 0.9663 | 911 | 0.0559 |
| 0.9674 | 912 | 0.0925 |
| 0.9684 | 913 | 0.0911 |
| 0.9695 | 914 | 0.0703 |
| 0.9706 | 915 | 0.0773 |
| 0.9716 | 916 | 0.0684 |
| 0.9727 | 917 | 0.0727 |
| 0.9737 | 918 | 0.0993 |
| 0.9748 | 919 | 0.0551 |
| 0.9759 | 920 | 0.0857 |
| 0.9769 | 921 | 0.0686 |
| 0.9780 | 922 | 0.0647 |
| 0.9791 | 923 | 0.0654 |
| 0.9801 | 924 | 0.0866 |
| 0.9812 | 925 | 0.0769 |
| 0.9822 | 926 | 0.1067 |
| 0.9833 | 927 | 0.0949 |
| 0.9844 | 928 | 0.0519 |
| 0.9854 | 929 | 0.0648 |
| 0.9865 | 930 | 0.0573 |
| 0.9875 | 931 | 0.0757 |
| 0.9886 | 932 | 0.1013 |
| 0.9897 | 933 | 0.0385 |
| 0.9907 | 934 | 0.0622 |
| 0.9918 | 935 | 0.0365 |
| 0.9928 | 936 | 0.0314 |
| 0.9939 | 937 | 0.0599 |
| 0.9950 | 938 | 0.0655 |
| 0.9960 | 939 | 0.0313 |
| 0.9971 | 940 | 0.0495 |
| 0.9981 | 941 | 0.0337 |
| 0.9992 | 942 | 0.0296 |
| 1.0003 | 943 | 0.0909 |
| 1.0013 | 944 | 0.2702 |
| 1.0024 | 945 | 0.2833 |
| 1.0034 | 946 | 0.2875 |
| 1.0045 | 947 | 0.4469 |
| 1.0056 | 948 | 0.4596 |
| 1.0066 | 949 | 0.4541 |
| 1.0077 | 950 | 0.4298 |
| 1.0088 | 951 | 0.1818 |
| 1.0098 | 952 | 0.2236 |
| 1.0109 | 953 | 0.2475 |
| 1.0119 | 954 | 0.2393 |
| 1.0130 | 955 | 0.2203 |
| 1.0141 | 956 | 0.1878 |
| 1.0151 | 957 | 0.1573 |
| 1.0162 | 958 | 0.155 |
| 1.0172 | 959 | 0.2007 |
| 1.0183 | 960 | 0.3347 |
| 1.0194 | 961 | 0.2457 |
| 1.0204 | 962 | 0.2357 |
| 1.0215 | 963 | 0.2386 |
| 1.0225 | 964 | 0.3535 |
| 1.0236 | 965 | 0.2635 |
| 1.0247 | 966 | 0.3877 |
| 1.0257 | 967 | 0.2424 |
| 1.0268 | 968 | 0.4052 |
| 1.0278 | 969 | 0.2783 |
| 1.0289 | 970 | 0.4503 |
| 1.0300 | 971 | 0.3233 |
| 1.0310 | 972 | 0.4281 |
| 1.0321 | 973 | 0.3867 |
| 1.0331 | 974 | 0.3603 |
| 1.0342 | 975 | 0.3305 |
| 1.0353 | 976 | 0.3427 |
| 1.0363 | 977 | 0.3719 |
| 1.0374 | 978 | 0.3089 |
| 1.0385 | 979 | 0.2583 |
| 1.0395 | 980 | 0.2666 |
| 1.0406 | 981 | 0.2478 |
| 1.0416 | 982 | 0.3 |
| 1.0427 | 983 | 0.2226 |
| 1.0438 | 984 | 0.2448 |
| 1.0448 | 985 | 0.1496 |
| 1.0459 | 986 | 0.1866 |
| 1.0469 | 987 | 0.1322 |
| 1.0480 | 988 | 0.1383 |
| 1.0491 | 989 | 0.1007 |
| 1.0501 | 990 | 0.0931 |
| 1.0512 | 991 | 0.0771 |
| 1.0522 | 992 | 0.0945 |
| 1.0533 | 993 | 0.1203 |
| 1.0544 | 994 | 0.139 |
| 1.0554 | 995 | 0.1328 |
| 1.0565 | 996 | 0.13 |
| 1.0575 | 997 | 0.0796 |
| 1.0586 | 998 | 0.0324 |
| 1.0597 | 999 | 0.0289 |
| 1.0607 | 1000 | 0.0219 |
| 1.0618 | 1001 | 0.0375 |
| 1.0628 | 1002 | 0.022 |
| 1.0639 | 1003 | 0.0307 |
| 1.0650 | 1004 | 0.068 |
| 1.0660 | 1005 | 0.2106 |
| 1.0671 | 1006 | 0.2132 |
| 1.0682 | 1007 | 0.2303 |
| 1.0692 | 1008 | 0.1717 |
| 1.0703 | 1009 | 0.1677 |
| 1.0713 | 1010 | 0.2735 |
| 1.0724 | 1011 | 0.252 |
| 1.0735 | 1012 | 0.2336 |
| 1.0745 | 1013 | 0.233 |
| 1.0756 | 1014 | 0.3612 |
| 1.0766 | 1015 | 0.2526 |
| 1.0777 | 1016 | 0.2727 |
| 1.0788 | 1017 | 0.2948 |
| 1.0798 | 1018 | 0.2104 |
| 1.0809 | 1019 | 0.1519 |
| 1.0819 | 1020 | 0.2493 |
| 1.0830 | 1021 | 0.162 |
| 1.0841 | 1022 | 0.2143 |
| 1.0851 | 1023 | 0.1909 |
| 1.0862 | 1024 | 0.2608 |
| 1.0872 | 1025 | 0.2373 |
| 1.0883 | 1026 | 0.2523 |
| 1.0894 | 1027 | 0.2251 |
| 1.0904 | 1028 | 0.1989 |
| 1.0915 | 1029 | 0.1274 |
| 1.0925 | 1030 | 0.1261 |
| 1.0936 | 1031 | 0.0842 |
| 1.0947 | 1032 | 0.1165 |
| 1.0957 | 1033 | 0.122 |
| 1.0968 | 1034 | 0.1154 |
| 1.0979 | 1035 | 0.1832 |
| 1.0989 | 1036 | 0.1469 |
| 1.1000 | 1037 | 0.1614 |
| 1.1010 | 1038 | 0.0865 |
| 1.1021 | 1039 | 0.1235 |
| 1.1032 | 1040 | 0.1564 |
| 1.1042 | 1041 | 0.148 |
| 1.1053 | 1042 | 0.1657 |
| 1.1063 | 1043 | 0.1106 |
| 1.1074 | 1044 | 0.1182 |
| 1.1085 | 1045 | 0.133 |
| 1.1095 | 1046 | 0.0922 |
| 1.1106 | 1047 | 0.1104 |
| 1.1116 | 1048 | 0.0783 |
| 1.1127 | 1049 | 0.1089 |
| 1.1138 | 1050 | 0.0775 |
| 1.1148 | 1051 | 0.0558 |
| 1.1159 | 1052 | 0.0931 |
| 1.1169 | 1053 | 0.1448 |
| 1.1180 | 1054 | 0.104 |
| 1.1191 | 1055 | 0.1419 |
| 1.1201 | 1056 | 0.0952 |
| 1.1212 | 1057 | 0.1283 |
| 1.1222 | 1058 | 0.106 |
| 1.1233 | 1059 | 0.1464 |
| 1.1244 | 1060 | 0.1023 |
| 1.1254 | 1061 | 0.1623 |
| 1.1265 | 1062 | 0.2852 |
| 1.1276 | 1063 | 0.4375 |
| 1.1286 | 1064 | 0.3692 |
| 1.1297 | 1065 | 0.353 |
| 1.1307 | 1066 | 0.4234 |
| 1.1318 | 1067 | 0.2492 |
| 1.1329 | 1068 | 0.2313 |
| 1.1339 | 1069 | 0.2968 |
| 1.1350 | 1070 | 0.2625 |
| 1.1360 | 1071 | 0.1686 |
| 1.1371 | 1072 | 0.0894 |
| 1.1382 | 1073 | 0.1292 |
| 1.1392 | 1074 | 0.1375 |
| 1.1403 | 1075 | 0.1176 |
| 1.1413 | 1076 | 0.1892 |
| 1.1424 | 1077 | 0.3492 |
| 1.1435 | 1078 | 0.1426 |
| 1.1445 | 1079 | 0.0068 |
| 1.1456 | 1080 | 0.0103 |
| 1.1466 | 1081 | 0.0165 |
| 1.1477 | 1082 | 0.0033 |
| 1.1488 | 1083 | 0.0136 |
| 1.1498 | 1084 | 0.0014 |
| 1.1509 | 1085 | 0.0022 |
| 1.1519 | 1086 | 0.0012 |
| 1.1530 | 1087 | 0.0046 |
| 1.1541 | 1088 | 0.0148 |
| 1.1551 | 1089 | 0.0086 |
| 1.1562 | 1090 | 0.0041 |
| 1.1573 | 1091 | 0.0114 |
| 1.1583 | 1092 | 0.0016 |
| 1.1594 | 1093 | 0.0098 |
| 1.1604 | 1094 | 0.0026 |
| 1.1615 | 1095 | 0.0081 |
| 1.1626 | 1096 | 0.0016 |
| 1.1636 | 1097 | 0.0018 |
| 1.1647 | 1098 | 0.0086 |
| 1.1657 | 1099 | 0.002 |
| 1.1668 | 1100 | 0.0027 |
| 1.1679 | 1101 | 0.0036 |
| 1.1689 | 1102 | 0.0161 |
| 1.1700 | 1103 | 0.0038 |
| 1.1710 | 1104 | 0.0011 |
| 1.1721 | 1105 | 0.0087 |
| 1.1732 | 1106 | 0.0026 |
| 1.1742 | 1107 | 0.0095 |
| 1.1753 | 1108 | 0.0054 |
| 1.1763 | 1109 | 0.0014 |
| 1.1774 | 1110 | 0.0083 |
| 1.1785 | 1111 | 0.0081 |
| 1.1795 | 1112 | 0.0079 |
| 1.1806 | 1113 | 0.0078 |
| 1.1816 | 1114 | 0.0033 |
| 1.1827 | 1115 | 0.0016 |
| 1.1838 | 1116 | 0.0038 |
| 1.1848 | 1117 | 0.0074 |
| 1.1859 | 1118 | 0.003 |
| 1.1870 | 1119 | 0.0035 |
| 1.1880 | 1120 | 0.005 |
| 1.1891 | 1121 | 0.0046 |
| 1.1901 | 1122 | 0.0027 |
| 1.1912 | 1123 | 0.0162 |
| 1.1923 | 1124 | 0.0109 |
| 1.1933 | 1125 | 0.016 |
| 1.1944 | 1126 | 0.0113 |
| 1.1954 | 1127 | 0.0057 |
| 1.1965 | 1128 | 0.008 |
| 1.1976 | 1129 | 0.0086 |
| 1.1986 | 1130 | 0.0106 |
| 1.1997 | 1131 | 0.0081 |
| 1.2007 | 1132 | 0.0034 |
| 1.2018 | 1133 | 0.0098 |
| 1.2029 | 1134 | 0.0062 |
| 1.2039 | 1135 | 0.0072 |
| 1.2050 | 1136 | 0.0076 |
| 1.2060 | 1137 | 0.0134 |
| 1.2071 | 1138 | 0.0036 |
| 1.2082 | 1139 | 0.0044 |
| 1.2092 | 1140 | 0.0014 |
| 1.2103 | 1141 | 0.008 |
| 1.2113 | 1142 | 0.0069 |
| 1.2124 | 1143 | 0.0045 |
| 1.2135 | 1144 | 0.0165 |
| 1.2145 | 1145 | 0.0007 |
| 1.2156 | 1146 | 0.0055 |
| 1.2167 | 1147 | 0.0087 |
| 1.2177 | 1148 | 0.0132 |
| 1.2188 | 1149 | 0.0068 |
| 1.2198 | 1150 | 0.0121 |
| 1.2209 | 1151 | 0.0025 |
| 1.2220 | 1152 | 0.0069 |
| 1.2230 | 1153 | 0.0007 |
| 1.2241 | 1154 | 0.01 |
| 1.2251 | 1155 | 0.0069 |
| 1.2262 | 1156 | 0.0091 |
| 1.2273 | 1157 | 0.0022 |
| 1.2283 | 1158 | 0.0097 |
| 1.2294 | 1159 | 0.0081 |
| 1.2304 | 1160 | 0.0022 |
| 1.2315 | 1161 | 0.0022 |
| 1.2326 | 1162 | 0.0011 |
| 1.2336 | 1163 | 0.002 |
| 1.2347 | 1164 | 0.0117 |
| 1.2357 | 1165 | 0.0046 |
| 1.2368 | 1166 | 0.0068 |
| 1.2379 | 1167 | 0.0051 |
| 1.2389 | 1168 | 0.0041 |
| 1.2400 | 1169 | 0.0021 |
| 1.2411 | 1170 | 0.0029 |
| 1.2421 | 1171 | 0.0098 |
| 1.2432 | 1172 | 0.0061 |
| 1.2442 | 1173 | 0.0006 |
| 1.2453 | 1174 | 0.0017 |
| 1.2464 | 1175 | 0.0015 |
| 1.2474 | 1176 | 0.012 |
| 1.2485 | 1177 | 0.0112 |
| 1.2495 | 1178 | 0.011 |
| 1.2506 | 1179 | 0.0113 |
| 1.2517 | 1180 | 0.0112 |
| 1.2527 | 1181 | 0.0044 |
| 1.2538 | 1182 | 0.0037 |
| 1.2548 | 1183 | 0.0034 |
| 1.2559 | 1184 | 0.0093 |
| 1.2570 | 1185 | 0.0061 |
| 1.2580 | 1186 | 0.0176 |
| 1.2591 | 1187 | 0.0026 |
| 1.2601 | 1188 | 0.0042 |
| 1.2612 | 1189 | 0.0082 |
| 1.2623 | 1190 | 0.0246 |
| 1.2633 | 1191 | 0.0633 |
| 1.2644 | 1192 | 0.0574 |
| 1.2654 | 1193 | 0.0554 |
| 1.2665 | 1194 | 0.0376 |
| 1.2676 | 1195 | 0.0359 |
| 1.2686 | 1196 | 0.0581 |
| 1.2697 | 1197 | 0.0513 |
| 1.2708 | 1198 | 0.0462 |
| 1.2718 | 1199 | 0.0148 |
| 1.2729 | 1200 | 0.0154 |
| 1.2739 | 1201 | 0.0337 |
| 1.2750 | 1202 | 0.0259 |
| 1.2761 | 1203 | 0.041 |
| 1.2771 | 1204 | 0.0289 |
| 1.2782 | 1205 | 0.0164 |
| 1.2792 | 1206 | 0.0262 |
| 1.2803 | 1207 | 0.0215 |
| 1.2814 | 1208 | 0.0387 |
| 1.2824 | 1209 | 0.0232 |
| 1.2835 | 1210 | 0.0436 |
| 1.2845 | 1211 | 0.0393 |
| 1.2856 | 1212 | 0.0062 |
| 1.2867 | 1213 | 0.022 |
| 1.2877 | 1214 | 0.0116 |
| 1.2888 | 1215 | 0.021 |
| 1.2898 | 1216 | 0.0166 |
| 1.2909 | 1217 | 0.004 |
| 1.2920 | 1218 | 0.0308 |
| 1.2930 | 1219 | 0.024 |
| 1.2941 | 1220 | 0.0101 |
| 1.2951 | 1221 | 0.0115 |
| 1.2962 | 1222 | 0.0046 |
| 1.2973 | 1223 | 0.0114 |
| 1.2983 | 1224 | 0.016 |
| 1.2994 | 1225 | 0.0264 |
| 1.3005 | 1226 | 0.0097 |
| 1.3015 | 1227 | 0.0126 |
| 1.3026 | 1228 | 0.0062 |
| 1.3036 | 1229 | 0.0104 |
| 1.3047 | 1230 | 0.022 |
| 1.3058 | 1231 | 0.0045 |
| 1.3068 | 1232 | 0.0073 |
| 1.3079 | 1233 | 0.012 |
| 1.3089 | 1234 | 0.0103 |
| 1.3100 | 1235 | 0.0124 |
| 1.3111 | 1236 | 0.0088 |
| 1.3121 | 1237 | 0.0059 |
| 1.3132 | 1238 | 0.0115 |
| 1.3142 | 1239 | 0.0116 |
| 1.3153 | 1240 | 0.0234 |
| 1.3164 | 1241 | 0.0093 |
| 1.3174 | 1242 | 0.0012 |
| 1.3185 | 1243 | 0.0082 |
| 1.3195 | 1244 | 0.0094 |
| 1.3206 | 1245 | 0.0079 |
| 1.3217 | 1246 | 0.0109 |
| 1.3227 | 1247 | 0.0072 |
| 1.3238 | 1248 | 0.01 |
| 1.3248 | 1249 | 0.0157 |
| 1.3259 | 1250 | 0.0239 |
| 1.3270 | 1251 | 0.008 |
| 1.3280 | 1252 | 0.0022 |
| 1.3291 | 1253 | 0.0057 |
| 1.3302 | 1254 | 0.0134 |
| 1.3312 | 1255 | 0.01 |
| 1.3323 | 1256 | 0.0152 |
| 1.3333 | 1257 | 0.0226 |
| 1.3344 | 1258 | 0.0117 |
| 1.3355 | 1259 | 0.017 |
| 1.3365 | 1260 | 0.0255 |
| 1.3376 | 1261 | 0.008 |
| 1.3386 | 1262 | 0.0119 |
| 1.3397 | 1263 | 0.0126 |
| 1.3408 | 1264 | 0.0064 |
| 1.3418 | 1265 | 0.0069 |
| 1.3429 | 1266 | 0.0122 |
| 1.3439 | 1267 | 0.0266 |
| 1.3450 | 1268 | 0.0151 |
| 1.3461 | 1269 | 0.007 |
| 1.3471 | 1270 | 0.0132 |
| 1.3482 | 1271 | 0.0049 |
| 1.3492 | 1272 | 0.005 |
| 1.3503 | 1273 | 0.014 |
| 1.3514 | 1274 | 0.0157 |
| 1.3524 | 1275 | 0.0195 |
| 1.3535 | 1276 | 0.0135 |
| 1.3545 | 1277 | 0.006 |
| 1.3556 | 1278 | 0.0297 |
| 1.3567 | 1279 | 0.0079 |
| 1.3577 | 1280 | 0.0226 |
| 1.3588 | 1281 | 0.0126 |
| 1.3599 | 1282 | 0.019 |
| 1.3609 | 1283 | 0.0218 |
| 1.3620 | 1284 | 0.0088 |
| 1.3630 | 1285 | 0.0221 |
| 1.3641 | 1286 | 0.0186 |
| 1.3652 | 1287 | 0.007 |
| 1.3662 | 1288 | 0.0189 |
| 1.3673 | 1289 | 0.0117 |
| 1.3683 | 1290 | 0.0164 |
| 1.3694 | 1291 | 0.0297 |
| 1.3705 | 1292 | 0.014 |
| 1.3715 | 1293 | 0.0231 |
| 1.3726 | 1294 | 0.0547 |
| 1.3736 | 1295 | 0.0308 |
| 1.3747 | 1296 | 0.0171 |
| 1.3758 | 1297 | 0.0214 |
| 1.3768 | 1298 | 0.0254 |
| 1.3779 | 1299 | 0.0429 |
| 1.3789 | 1300 | 0.0062 |
| 1.3800 | 1301 | 0.0187 |
| 1.3811 | 1302 | 0.0117 |
| 1.3821 | 1303 | 0.0067 |
| 1.3832 | 1304 | 0.0189 |
| 1.3842 | 1305 | 0.0088 |
| 1.3853 | 1306 | 0.017 |
| 1.3864 | 1307 | 0.0125 |
| 1.3874 | 1308 | 0.0241 |
| 1.3885 | 1309 | 0.0161 |
| 1.3896 | 1310 | 0.0135 |
| 1.3906 | 1311 | 0.0152 |
| 1.3917 | 1312 | 0.0169 |
| 1.3927 | 1313 | 0.0173 |
| 1.3938 | 1314 | 0.0115 |
| 1.3949 | 1315 | 0.0143 |
| 1.3959 | 1316 | 0.0146 |
| 1.3970 | 1317 | 0.0219 |
| 1.3980 | 1318 | 0.0221 |
| 1.3991 | 1319 | 0.0076 |
| 1.4002 | 1320 | 0.0226 |
| 1.4012 | 1321 | 0.0203 |
| 1.4023 | 1322 | 0.0055 |
| 1.4033 | 1323 | 0.0193 |
| 1.4044 | 1324 | 0.0161 |
| 1.4055 | 1325 | 0.0252 |
| 1.4065 | 1326 | 0.0304 |
| 1.4076 | 1327 | 0.0187 |
| 1.4086 | 1328 | 0.0261 |
| 1.4097 | 1329 | 0.0072 |
| 1.4108 | 1330 | 0.0171 |
| 1.4118 | 1331 | 0.0235 |
| 1.4129 | 1332 | 0.0293 |
| 1.4139 | 1333 | 0.0253 |
| 1.4150 | 1334 | 0.0106 |
| 1.4161 | 1335 | 0.0092 |
| 1.4171 | 1336 | 0.0156 |
| 1.4182 | 1337 | 0.0325 |
| 1.4193 | 1338 | 0.0156 |
| 1.4203 | 1339 | 0.0137 |
| 1.4214 | 1340 | 0.0411 |
| 1.4224 | 1341 | 0.0236 |
| 1.4235 | 1342 | 0.0284 |
| 1.4246 | 1343 | 0.0489 |
| 1.4256 | 1344 | 0.023 |
| 1.4267 | 1345 | 0.0261 |
| 1.4277 | 1346 | 0.026 |
| 1.4288 | 1347 | 0.0208 |
| 1.4299 | 1348 | 0.0085 |
| 1.4309 | 1349 | 0.0199 |
| 1.4320 | 1350 | 0.0167 |
| 1.4330 | 1351 | 0.0213 |
| 1.4341 | 1352 | 0.0108 |
| 1.4352 | 1353 | 0.0102 |
| 1.4362 | 1354 | 0.0183 |
| 1.4373 | 1355 | 0.02 |
| 1.4383 | 1356 | 0.0182 |
| 1.4394 | 1357 | 0.03 |
| 1.4405 | 1358 | 0.0311 |
| 1.4415 | 1359 | 0.0253 |
| 1.4426 | 1360 | 0.0155 |
| 1.4436 | 1361 | 0.0141 |
| 1.4447 | 1362 | 0.0129 |
| 1.4458 | 1363 | 0.0202 |
| 1.4468 | 1364 | 0.0228 |
| 1.4479 | 1365 | 0.0269 |
| 1.4490 | 1366 | 0.0109 |
| 1.4500 | 1367 | 0.0379 |
| 1.4511 | 1368 | 0.0099 |
| 1.4521 | 1369 | 0.0166 |
| 1.4532 | 1370 | 0.0067 |
| 1.4543 | 1371 | 0.0078 |
| 1.4553 | 1372 | 0.0114 |
| 1.4564 | 1373 | 0.0215 |
| 1.4574 | 1374 | 0.0404 |
| 1.4585 | 1375 | 0.0314 |
| 1.4596 | 1376 | 0.0334 |
| 1.4606 | 1377 | 0.03 |
| 1.4617 | 1378 | 0.0256 |
| 1.4627 | 1379 | 0.0551 |
| 1.4638 | 1380 | 0.0262 |
| 1.4649 | 1381 | 0.0389 |
| 1.4659 | 1382 | 0.0419 |
| 1.4670 | 1383 | 0.0144 |
| 1.4680 | 1384 | 0.0191 |
| 1.4691 | 1385 | 0.0438 |
| 1.4702 | 1386 | 0.0711 |
| 1.4712 | 1387 | 0.0399 |
| 1.4723 | 1388 | 0.0269 |
| 1.4733 | 1389 | 0.0496 |
| 1.4744 | 1390 | 0.0565 |
| 1.4755 | 1391 | 0.0316 |
| 1.4765 | 1392 | 0.038 |
| 1.4776 | 1393 | 0.0471 |
| 1.4787 | 1394 | 0.0327 |
| 1.4797 | 1395 | 0.0296 |
| 1.4808 | 1396 | 0.0198 |
| 1.4818 | 1397 | 0.0383 |
| 1.4829 | 1398 | 0.0398 |
| 1.4840 | 1399 | 0.0357 |
| 1.4850 | 1400 | 0.0236 |
| 1.4861 | 1401 | 0.06 |
| 1.4871 | 1402 | 0.0564 |
| 1.4882 | 1403 | 0.0236 |
| 1.4893 | 1404 | 0.043 |
| 1.4903 | 1405 | 0.021 |
| 1.4914 | 1406 | 0.0359 |
| 1.4924 | 1407 | 0.0362 |
| 1.4935 | 1408 | 0.0323 |
| 1.4946 | 1409 | 0.0209 |
| 1.4956 | 1410 | 0.0155 |
| 1.4967 | 1411 | 0.0255 |
| 1.4977 | 1412 | 0.0216 |
| 1.4988 | 1413 | 0.0208 |
| 1.4999 | 1414 | 0.0263 |
| 1.5009 | 1415 | 0.0102 |
| 1.5020 | 1416 | 0.0115 |
| 1.5030 | 1417 | 0.0183 |
| 1.5041 | 1418 | 0.0148 |
| 1.5052 | 1419 | 0.0426 |
| 1.5062 | 1420 | 0.0255 |
| 1.5073 | 1421 | 0.0533 |
| 1.5084 | 1422 | 0.0657 |
| 1.5094 | 1423 | 0.068 |
| 1.5105 | 1424 | 0.0573 |
| 1.5115 | 1425 | 0.0429 |
| 1.5126 | 1426 | 0.0218 |
| 1.5137 | 1427 | 0.0136 |
| 1.5147 | 1428 | 0.0214 |
| 1.5158 | 1429 | 0.016 |
| 1.5168 | 1430 | 0.0411 |
| 1.5179 | 1431 | 0.0417 |
| 1.5190 | 1432 | 0.056 |
| 1.5200 | 1433 | 0.0299 |
| 1.5211 | 1434 | 0.016 |
| 1.5221 | 1435 | 0.0557 |
| 1.5232 | 1436 | 0.0174 |
| 1.5243 | 1437 | 0.024 |
| 1.5253 | 1438 | 0.059 |
| 1.5264 | 1439 | 0.097 |
| 1.5274 | 1440 | 0.0319 |
| 1.5285 | 1441 | 0.0174 |
| 1.5296 | 1442 | 0.044 |
| 1.5306 | 1443 | 0.0239 |
| 1.5317 | 1444 | 0.0142 |
| 1.5327 | 1445 | 0.0247 |
| 1.5338 | 1446 | 0.0239 |
| 1.5349 | 1447 | 0.0433 |
| 1.5359 | 1448 | 0.0214 |
| 1.5370 | 1449 | 0.053 |
| 1.5381 | 1450 | 0.0279 |
| 1.5391 | 1451 | 0.0216 |
| 1.5402 | 1452 | 0.0288 |
| 1.5412 | 1453 | 0.0295 |
| 1.5423 | 1454 | 0.0403 |
| 1.5434 | 1455 | 0.0227 |
| 1.5444 | 1456 | 0.0417 |
| 1.5455 | 1457 | 0.0259 |
| 1.5465 | 1458 | 0.0329 |
| 1.5476 | 1459 | 0.0386 |
| 1.5487 | 1460 | 0.0401 |
| 1.5497 | 1461 | 0.042 |
| 1.5508 | 1462 | 0.0115 |
| 1.5518 | 1463 | 0.029 |
| 1.5529 | 1464 | 0.0177 |
| 1.5540 | 1465 | 0.0276 |
| 1.5550 | 1466 | 0.0304 |
| 1.5561 | 1467 | 0.0401 |
| 1.5571 | 1468 | 0.0302 |
| 1.5582 | 1469 | 0.0282 |
| 1.5593 | 1470 | 0.0224 |
| 1.5603 | 1471 | 0.0271 |
| 1.5614 | 1472 | 0.0297 |
| 1.5625 | 1473 | 0.0419 |
| 1.5635 | 1474 | 0.0146 |
| 1.5646 | 1475 | 0.0288 |
| 1.5656 | 1476 | 0.0235 |
| 1.5667 | 1477 | 0.0287 |
| 1.5678 | 1478 | 0.028 |
| 1.5688 | 1479 | 0.0239 |
| 1.5699 | 1480 | 0.0374 |
| 1.5709 | 1481 | 0.0393 |
| 1.5720 | 1482 | 0.0234 |
| 1.5731 | 1483 | 0.0472 |
| 1.5741 | 1484 | 0.0254 |
| 1.5752 | 1485 | 0.0152 |
| 1.5762 | 1486 | 0.0378 |
| 1.5773 | 1487 | 0.0249 |
| 1.5784 | 1488 | 0.0228 |
| 1.5794 | 1489 | 0.0387 |
| 1.5805 | 1490 | 0.0576 |
| 1.5815 | 1491 | 0.0272 |
| 1.5826 | 1492 | 0.0457 |
| 1.5837 | 1493 | 0.034 |
| 1.5847 | 1494 | 0.0358 |
| 1.5858 | 1495 | 0.0638 |
| 1.5868 | 1496 | 0.03 |
| 1.5879 | 1497 | 0.0575 |
| 1.5890 | 1498 | 0.0318 |
| 1.5900 | 1499 | 0.0439 |
| 1.5911 | 1500 | 0.0962 |
| 1.5922 | 1501 | 0.0171 |
| 1.5932 | 1502 | 0.0155 |
| 1.5943 | 1503 | 0.0158 |
| 1.5953 | 1504 | 0.0117 |
| 1.5964 | 1505 | 0.0163 |
| 1.5975 | 1506 | 0.0392 |
| 1.5985 | 1507 | 0.0079 |
| 1.5996 | 1508 | 0.0142 |
| 1.6006 | 1509 | 0.0223 |
| 1.6017 | 1510 | 0.0274 |
| 1.6028 | 1511 | 0.0188 |
| 1.6038 | 1512 | 0.0129 |
| 1.6049 | 1513 | 0.0323 |
| 1.6059 | 1514 | 0.023 |
| 1.6070 | 1515 | 0.0111 |
| 1.6081 | 1516 | 0.0248 |
| 1.6091 | 1517 | 0.0215 |
| 1.6102 | 1518 | 0.0138 |
| 1.6112 | 1519 | 0.0283 |
| 1.6123 | 1520 | 0.0497 |
| 1.6134 | 1521 | 0.0151 |
| 1.6144 | 1522 | 0.0174 |
| 1.6155 | 1523 | 0.0457 |
| 1.6165 | 1524 | 0.0614 |
| 1.6176 | 1525 | 0.0433 |
| 1.6187 | 1526 | 0.0346 |
| 1.6197 | 1527 | 0.0267 |
| 1.6208 | 1528 | 0.0152 |
| 1.6219 | 1529 | 0.0408 |
| 1.6229 | 1530 | 0.0235 |
| 1.6240 | 1531 | 0.0467 |
| 1.6250 | 1532 | 0.0742 |
| 1.6261 | 1533 | 0.0303 |
| 1.6272 | 1534 | 0.0191 |
| 1.6282 | 1535 | 0.0153 |
| 1.6293 | 1536 | 0.0151 |
| 1.6303 | 1537 | 0.0189 |
| 1.6314 | 1538 | 0.0144 |
| 1.6325 | 1539 | 0.04 |
| 1.6335 | 1540 | 0.0167 |
| 1.6346 | 1541 | 0.0099 |
| 1.6356 | 1542 | 0.0219 |
| 1.6367 | 1543 | 0.0224 |
| 1.6378 | 1544 | 0.0116 |
| 1.6388 | 1545 | 0.0169 |
| 1.6399 | 1546 | 0.0346 |
| 1.6409 | 1547 | 0.0261 |
| 1.6420 | 1548 | 0.0209 |
| 1.6431 | 1549 | 0.0233 |
| 1.6441 | 1550 | 0.0103 |
| 1.6452 | 1551 | 0.0204 |
| 1.6462 | 1552 | 0.0203 |
| 1.6473 | 1553 | 0.0165 |
| 1.6484 | 1554 | 0.0243 |
| 1.6494 | 1555 | 0.0198 |
| 1.6505 | 1556 | 0.0171 |
| 1.6516 | 1557 | 0.0233 |
| 1.6526 | 1558 | 0.0137 |
| 1.6537 | 1559 | 0.003 |
| 1.6547 | 1560 | 0.0037 |
| 1.6558 | 1561 | 0.0127 |
| 1.6569 | 1562 | 0.0184 |
| 1.6579 | 1563 | 0.0207 |
| 1.6590 | 1564 | 0.0196 |
| 1.6600 | 1565 | 0.0131 |
| 1.6611 | 1566 | 0.0198 |
| 1.6622 | 1567 | 0.0273 |
| 1.6632 | 1568 | 0.0137 |
| 1.6643 | 1569 | 0.009 |
| 1.6653 | 1570 | 0.0054 |
| 1.6664 | 1571 | 0.0043 |
| 1.6675 | 1572 | 0.0051 |
| 1.6685 | 1573 | 0.0146 |
| 1.6696 | 1574 | 0.0128 |
| 1.6706 | 1575 | 0.0043 |
| 1.6717 | 1576 | 0.0047 |
| 1.6728 | 1577 | 0.0038 |
| 1.6738 | 1578 | 0.0156 |
| 1.6749 | 1579 | 0.0521 |
| 1.6759 | 1580 | 0.0339 |
| 1.6770 | 1581 | 0.0093 |
| 1.6781 | 1582 | 0.0028 |
| 1.6791 | 1583 | 0.0162 |
| 1.6802 | 1584 | 0.0497 |
| 1.6813 | 1585 | 0.0542 |
| 1.6823 | 1586 | 0.002 |
| 1.6834 | 1587 | 0.0059 |
| 1.6844 | 1588 | 0.0062 |
| 1.6855 | 1589 | 0.0019 |
| 1.6866 | 1590 | 0.0025 |
| 1.6876 | 1591 | 0.0056 |
| 1.6887 | 1592 | 0.0009 |
| 1.6897 | 1593 | 0.0036 |
| 1.6908 | 1594 | 0.007 |
| 1.6919 | 1595 | 0.0064 |
| 1.6929 | 1596 | 0.0048 |
| 1.6940 | 1597 | 0.1758 |
| 1.6950 | 1598 | 0.209 |
| 1.6961 | 1599 | 0.2029 |
| 1.6972 | 1600 | 0.1568 |
| 1.6982 | 1601 | 0.1842 |
| 1.6993 | 1602 | 0.1696 |
| 1.7003 | 1603 | 0.2118 |
| 1.7014 | 1604 | 0.1503 |
| 1.7025 | 1605 | 0.1528 |
| 1.7035 | 1606 | 0.2494 |
| 1.7046 | 1607 | 0.2627 |
| 1.7056 | 1608 | 0.1412 |
| 1.7067 | 1609 | 0.1272 |
| 1.7078 | 1610 | 0.1699 |
| 1.7088 | 1611 | 0.1036 |
| 1.7099 | 1612 | 0.1429 |
| 1.7110 | 1613 | 0.1593 |
| 1.7120 | 1614 | 0.1426 |
| 1.7131 | 1615 | 0.1173 |
| 1.7141 | 1616 | 0.0922 |
| 1.7152 | 1617 | 0.1081 |
| 1.7163 | 1618 | 0.1282 |
| 1.7173 | 1619 | 0.1956 |
| 1.7184 | 1620 | 0.1709 |
| 1.7194 | 1621 | 0.0834 |
| 1.7205 | 1622 | 0.0935 |
| 1.7216 | 1623 | 0.1185 |
| 1.7226 | 1624 | 0.1107 |
| 1.7237 | 1625 | 0.1134 |
| 1.7247 | 1626 | 0.0972 |
| 1.7258 | 1627 | 0.1023 |
| 1.7269 | 1628 | 0.1096 |
| 1.7279 | 1629 | 0.0631 |
| 1.7290 | 1630 | 0.1146 |
| 1.7300 | 1631 | 0.1754 |
| 1.7311 | 1632 | 0.4233 |
| 1.7322 | 1633 | 0.3293 |
| 1.7332 | 1634 | 0.2333 |
| 1.7343 | 1635 | 0.16 |
| 1.7353 | 1636 | 0.1357 |
| 1.7364 | 1637 | 0.1226 |
| 1.7375 | 1638 | 0.214 |
| 1.7385 | 1639 | 0.1011 |
| 1.7396 | 1640 | 0.1071 |
| 1.7407 | 1641 | 0.0949 |
| 1.7417 | 1642 | 0.0876 |
| 1.7428 | 1643 | 0.0949 |
| 1.7438 | 1644 | 0.1429 |
| 1.7449 | 1645 | 0.0649 |
| 1.7460 | 1646 | 0.0918 |
| 1.7470 | 1647 | 0.0687 |
| 1.7481 | 1648 | 0.1216 |
| 1.7491 | 1649 | 0.0785 |
| 1.7502 | 1650 | 0.0812 |
| 1.7513 | 1651 | 0.057 |
| 1.7523 | 1652 | 0.0815 |
| 1.7534 | 1653 | 0.0794 |
| 1.7544 | 1654 | 0.0861 |
| 1.7555 | 1655 | 0.0897 |
| 1.7566 | 1656 | 0.0891 |
| 1.7576 | 1657 | 0.101 |
| 1.7587 | 1658 | 0.1225 |
| 1.7597 | 1659 | 0.0586 |
| 1.7608 | 1660 | 0.0609 |
| 1.7619 | 1661 | 0.0702 |
| 1.7629 | 1662 | 0.0691 |
| 1.7640 | 1663 | 0.066 |
| 1.7650 | 1664 | 0.0573 |
| 1.7661 | 1665 | 0.0774 |
| 1.7672 | 1666 | 0.0948 |
| 1.7682 | 1667 | 0.0387 |
| 1.7693 | 1668 | 0.0433 |
| 1.7704 | 1669 | 0.0358 |
| 1.7714 | 1670 | 0.0703 |
| 1.7725 | 1671 | 0.0533 |
| 1.7735 | 1672 | 0.0882 |
| 1.7746 | 1673 | 0.0595 |
| 1.7757 | 1674 | 0.0844 |
| 1.7767 | 1675 | 0.0512 |
| 1.7778 | 1676 | 0.0558 |
| 1.7788 | 1677 | 0.0574 |
| 1.7799 | 1678 | 0.0317 |
| 1.7810 | 1679 | 0.0399 |
| 1.7820 | 1680 | 0.0826 |
| 1.7831 | 1681 | 0.0656 |
| 1.7841 | 1682 | 0.0463 |
| 1.7852 | 1683 | 0.0578 |
| 1.7863 | 1684 | 0.0278 |
| 1.7873 | 1685 | 0.0505 |
| 1.7884 | 1686 | 0.0603 |
| 1.7894 | 1687 | 0.0277 |
| 1.7905 | 1688 | 0.059 |
| 1.7916 | 1689 | 0.0582 |
| 1.7926 | 1690 | 0.0461 |
| 1.7937 | 1691 | 0.0725 |
| 1.7947 | 1692 | 0.0671 |
| 1.7958 | 1693 | 0.0957 |
| 1.7969 | 1694 | 0.0618 |
| 1.7979 | 1695 | 0.0794 |
| 1.7990 | 1696 | 0.0457 |
| 1.8001 | 1697 | 0.0539 |
| 1.8011 | 1698 | 0.0904 |
| 1.8022 | 1699 | 0.0346 |
| 1.8032 | 1700 | 0.0519 |
| 1.8043 | 1701 | 0.0343 |
| 1.8054 | 1702 | 0.0425 |
| 1.8064 | 1703 | 0.017 |
| 1.8075 | 1704 | 0.0778 |
| 1.8085 | 1705 | 0.0784 |
| 1.8096 | 1706 | 0.0356 |
| 1.8107 | 1707 | 0.0243 |
| 1.8117 | 1708 | 0.0393 |
| 1.8128 | 1709 | 0.0469 |
| 1.8138 | 1710 | 0.0386 |
| 1.8149 | 1711 | 0.0382 |
| 1.8160 | 1712 | 0.0692 |
| 1.8170 | 1713 | 0.039 |
| 1.8181 | 1714 | 0.0386 |
| 1.8191 | 1715 | 0.0293 |
| 1.8202 | 1716 | 0.0479 |
| 1.8213 | 1717 | 0.0404 |
| 1.8223 | 1718 | 0.0358 |
| 1.8234 | 1719 | 0.048 |
| 1.8244 | 1720 | 0.0404 |
| 1.8255 | 1721 | 0.0509 |
| 1.8266 | 1722 | 0.046 |
| 1.8276 | 1723 | 0.0297 |
| 1.8287 | 1724 | 0.059 |
| 1.8298 | 1725 | 0.0279 |
| 1.8308 | 1726 | 0.0557 |
| 1.8319 | 1727 | 0.0264 |
| 1.8329 | 1728 | 0.0384 |
| 1.8340 | 1729 | 0.0447 |
| 1.8351 | 1730 | 0.0631 |
| 1.8361 | 1731 | 0.0456 |
| 1.8372 | 1732 | 0.047 |
| 1.8382 | 1733 | 0.0218 |
| 1.8393 | 1734 | 0.0422 |
| 1.8404 | 1735 | 0.0387 |
| 1.8414 | 1736 | 0.0625 |
| 1.8425 | 1737 | 0.0606 |
| 1.8435 | 1738 | 0.0261 |
| 1.8446 | 1739 | 0.0513 |
| 1.8457 | 1740 | 0.0197 |
| 1.8467 | 1741 | 0.077 |
| 1.8478 | 1742 | 0.0386 |
| 1.8488 | 1743 | 0.0374 |
| 1.8499 | 1744 | 0.0324 |
| 1.8510 | 1745 | 0.0136 |
| 1.8520 | 1746 | 0.0293 |
| 1.8531 | 1747 | 0.0406 |
| 1.8542 | 1748 | 0.0218 |
| 1.8552 | 1749 | 0.0432 |
| 1.8563 | 1750 | 0.0239 |
| 1.8573 | 1751 | 0.025 |
| 1.8584 | 1752 | 0.0474 |
| 1.8595 | 1753 | 0.0799 |
| 1.8605 | 1754 | 0.0507 |
| 1.8616 | 1755 | 0.0851 |
| 1.8626 | 1756 | 0.0729 |
| 1.8637 | 1757 | 0.059 |
| 1.8648 | 1758 | 0.0346 |
| 1.8658 | 1759 | 0.0129 |
| 1.8669 | 1760 | 0.0364 |
| 1.8679 | 1761 | 0.0431 |
| 1.8690 | 1762 | 0.0536 |
| 1.8701 | 1763 | 0.0393 |
| 1.8711 | 1764 | 0.0568 |
| 1.8722 | 1765 | 0.0501 |
| 1.8732 | 1766 | 0.0273 |
| 1.8743 | 1767 | 0.022 |
| 1.8754 | 1768 | 0.0223 |
| 1.8764 | 1769 | 0.0463 |
| 1.8775 | 1770 | 0.0282 |
| 1.8785 | 1771 | 0.0594 |
| 1.8796 | 1772 | 0.0493 |
| 1.8807 | 1773 | 0.0374 |
| 1.8817 | 1774 | 0.0367 |
| 1.8828 | 1775 | 0.0131 |
| 1.8839 | 1776 | 0.0286 |
| 1.8849 | 1777 | 0.0263 |
| 1.8860 | 1778 | 0.019 |
| 1.8870 | 1779 | 0.0298 |
| 1.8881 | 1780 | 0.0356 |
| 1.8892 | 1781 | 0.0653 |
| 1.8902 | 1782 | 0.0717 |
| 1.8913 | 1783 | 0.0426 |
| 1.8923 | 1784 | 0.0305 |
| 1.8934 | 1785 | 0.0233 |
| 1.8945 | 1786 | 0.0459 |
| 1.8955 | 1787 | 0.0258 |
| 1.8966 | 1788 | 0.0445 |
| 1.8976 | 1789 | 0.0447 |
| 1.8987 | 1790 | 0.0366 |
| 1.8998 | 1791 | 0.0337 |
| 1.9008 | 1792 | 0.0335 |
| 1.9019 | 1793 | 0.0623 |
| 1.9029 | 1794 | 0.0451 |
| 1.9040 | 1795 | 0.036 |
| 1.9051 | 1796 | 0.0424 |
| 1.9061 | 1797 | 0.0513 |
| 1.9072 | 1798 | 0.0367 |
| 1.9082 | 1799 | 0.0263 |
| 1.9093 | 1800 | 0.023 |
| 1.9104 | 1801 | 0.0224 |
| 1.9114 | 1802 | 0.0309 |
| 1.9125 | 1803 | 0.0199 |
| 1.9136 | 1804 | 0.0281 |
| 1.9146 | 1805 | 0.0324 |
| 1.9157 | 1806 | 0.0149 |
| 1.9167 | 1807 | 0.0152 |
| 1.9178 | 1808 | 0.0584 |
| 1.9189 | 1809 | 0.013 |
| 1.9199 | 1810 | 0.0409 |
| 1.9210 | 1811 | 0.0297 |
| 1.9220 | 1812 | 0.0319 |
| 1.9231 | 1813 | 0.0215 |
| 1.9242 | 1814 | 0.0268 |
| 1.9252 | 1815 | 0.0137 |
| 1.9263 | 1816 | 0.044 |
| 1.9273 | 1817 | 0.0275 |
| 1.9284 | 1818 | 0.0438 |
| 1.9295 | 1819 | 0.0167 |
| 1.9305 | 1820 | 0.0095 |
| 1.9316 | 1821 | 0.0252 |
| 1.9326 | 1822 | 0.031 |
| 1.9337 | 1823 | 0.0268 |
| 1.9348 | 1824 | 0.0158 |
| 1.9358 | 1825 | 0.0199 |
| 1.9369 | 1826 | 0.0409 |
| 1.9379 | 1827 | 0.0299 |
| 1.9390 | 1828 | 0.0541 |
| 1.9401 | 1829 | 0.0282 |
| 1.9411 | 1830 | 0.0522 |
| 1.9422 | 1831 | 0.0464 |
| 1.9433 | 1832 | 0.028 |
| 1.9443 | 1833 | 0.0429 |
| 1.9454 | 1834 | 0.0542 |
| 1.9464 | 1835 | 0.036 |
| 1.9475 | 1836 | 0.0351 |
| 1.9486 | 1837 | 0.0264 |
| 1.9496 | 1838 | 0.0387 |
| 1.9507 | 1839 | 0.0247 |
| 1.9517 | 1840 | 0.03 |
| 1.9528 | 1841 | 0.0265 |
| 1.9539 | 1842 | 0.0267 |
| 1.9549 | 1843 | 0.0494 |
| 1.9560 | 1844 | 0.0451 |
| 1.9570 | 1845 | 0.0289 |
| 1.9581 | 1846 | 0.0272 |
| 1.9592 | 1847 | 0.0333 |
| 1.9602 | 1848 | 0.0371 |
| 1.9613 | 1849 | 0.0303 |
| 1.9623 | 1850 | 0.033 |
| 1.9634 | 1851 | 0.0324 |
| 1.9645 | 1852 | 0.0368 |
| 1.9655 | 1853 | 0.0471 |
| 1.9666 | 1854 | 0.0193 |
| 1.9676 | 1855 | 0.0202 |
| 1.9687 | 1856 | 0.043 |
| 1.9698 | 1857 | 0.0127 |
| 1.9708 | 1858 | 0.016 |
| 1.9719 | 1859 | 0.0326 |
| 1.9730 | 1860 | 0.0251 |
| 1.9740 | 1861 | 0.0227 |
| 1.9751 | 1862 | 0.018 |
| 1.9761 | 1863 | 0.0296 |
| 1.9772 | 1864 | 0.0143 |
| 1.9783 | 1865 | 0.0211 |
| 1.9793 | 1866 | 0.0125 |
| 1.9804 | 1867 | 0.0529 |
| 1.9814 | 1868 | 0.0175 |
| 1.9825 | 1869 | 0.0462 |
| 1.9836 | 1870 | 0.0299 |
| 1.9846 | 1871 | 0.0304 |
| 1.9857 | 1872 | 0.0152 |
| 1.9867 | 1873 | 0.0201 |
| 1.9878 | 1874 | 0.0299 |
| 1.9889 | 1875 | 0.0328 |
| 1.9899 | 1876 | 0.0265 |
| 1.9910 | 1877 | 0.0121 |
| 1.9920 | 1878 | 0.0171 |
| 1.9931 | 1879 | 0.0193 |
| 1.9942 | 1880 | 0.0113 |
| 1.9952 | 1881 | 0.017 |
| 1.9963 | 1882 | 0.0107 |
| 1.9973 | 1883 | 0.0135 |
| 1.9984 | 1884 | 0.0152 |
| 1.9995 | 1885 | 0.0194 |
| 2.0005 | 1886 | 0.0775 |
| 2.0016 | 1887 | 0.1686 |
| 2.0027 | 1888 | 0.2107 |
| 2.0037 | 1889 | 0.1864 |
| 2.0048 | 1890 | 0.265 |
| 2.0058 | 1891 | 0.2213 |
| 2.0069 | 1892 | 0.2837 |
| 2.0080 | 1893 | 0.2715 |
| 2.0090 | 1894 | 0.1485 |
| 2.0101 | 1895 | 0.0936 |
| 2.0111 | 1896 | 0.1611 |
| 2.0122 | 1897 | 0.1243 |
| 2.0133 | 1898 | 0.1413 |
| 2.0143 | 1899 | 0.1194 |
| 2.0154 | 1900 | 0.0844 |
| 2.0164 | 1901 | 0.0549 |
| 2.0175 | 1902 | 0.1664 |
| 2.0186 | 1903 | 0.1879 |
| 2.0196 | 1904 | 0.154 |
| 2.0207 | 1905 | 0.1536 |
| 2.0217 | 1906 | 0.173 |
| 2.0228 | 1907 | 0.1824 |
| 2.0239 | 1908 | 0.16 |
| 2.0249 | 1909 | 0.2089 |
| 2.0260 | 1910 | 0.1865 |
| 2.0270 | 1911 | 0.2149 |
| 2.0281 | 1912 | 0.1566 |
| 2.0292 | 1913 | 0.2483 |
| 2.0302 | 1914 | 0.201 |
| 2.0313 | 1915 | 0.2402 |
| 2.0324 | 1916 | 0.1943 |
| 2.0334 | 1917 | 0.1775 |
| 2.0345 | 1918 | 0.2215 |
| 2.0355 | 1919 | 0.1565 |
| 2.0366 | 1920 | 0.1824 |
| 2.0377 | 1921 | 0.1304 |
| 2.0387 | 1922 | 0.126 |
| 2.0398 | 1923 | 0.1521 |
| 2.0408 | 1924 | 0.1655 |
| 2.0419 | 1925 | 0.1336 |
| 2.0430 | 1926 | 0.1166 |
| 2.0440 | 1927 | 0.0995 |
| 2.0451 | 1928 | 0.0714 |
| 2.0461 | 1929 | 0.0775 |
| 2.0472 | 1930 | 0.0699 |
| 2.0483 | 1931 | 0.0612 |
| 2.0493 | 1932 | 0.042 |
| 2.0504 | 1933 | 0.0273 |
| 2.0514 | 1934 | 0.015 |
| 2.0525 | 1935 | 0.0588 |
| 2.0536 | 1936 | 0.071 |
| 2.0546 | 1937 | 0.0872 |
| 2.0557 | 1938 | 0.0518 |
| 2.0567 | 1939 | 0.0723 |
| 2.0578 | 1940 | 0.0364 |
| 2.0589 | 1941 | 0.0191 |
| 2.0599 | 1942 | 0.015 |
| 2.0610 | 1943 | 0.0322 |
| 2.0621 | 1944 | 0.0112 |
| 2.0631 | 1945 | 0.0077 |
| 2.0642 | 1946 | 0.0108 |
| 2.0652 | 1947 | 0.0444 |
| 2.0663 | 1948 | 0.0535 |
| 2.0674 | 1949 | 0.113 |
| 2.0684 | 1950 | 0.0623 |
| 2.0695 | 1951 | 0.0874 |
| 2.0705 | 1952 | 0.0531 |
| 2.0716 | 1953 | 0.1529 |
| 2.0727 | 1954 | 0.1247 |
| 2.0737 | 1955 | 0.1482 |
| 2.0748 | 1956 | 0.1586 |
| 2.0758 | 1957 | 0.156 |
| 2.0769 | 1958 | 0.1334 |
| 2.0780 | 1959 | 0.1719 |
| 2.0790 | 1960 | 0.0883 |
| 2.0801 | 1961 | 0.186 |
| 2.0811 | 1962 | 0.0901 |
| 2.0822 | 1963 | 0.0633 |
| 2.0833 | 1964 | 0.0713 |
| 2.0843 | 1965 | 0.0694 |
| 2.0854 | 1966 | 0.1311 |
| 2.0864 | 1967 | 0.1021 |
| 2.0875 | 1968 | 0.1638 |
| 2.0886 | 1969 | 0.1645 |
| 2.0896 | 1970 | 0.123 |
| 2.0907 | 1971 | 0.0946 |
| 2.0918 | 1972 | 0.0424 |
| 2.0928 | 1973 | 0.063 |
| 2.0939 | 1974 | 0.0634 |
| 2.0949 | 1975 | 0.0877 |
| 2.0960 | 1976 | 0.0795 |
| 2.0971 | 1977 | 0.0851 |
| 2.0981 | 1978 | 0.1073 |
| 2.0992 | 1979 | 0.0732 |
| 2.1002 | 1980 | 0.0904 |
| 2.1013 | 1981 | 0.0354 |
</details>
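The raw trace above is dense (nearly 2,000 logged steps, with visible jumps around the epoch boundaries near steps 943 and 1886). A quick way to inspect it is to parse the markdown table into a DataFrame and plot a smoothed curve. The sketch below assumes `pandas` and `matplotlib` are installed and that this card has been saved locally as `training_logs.md` — both are illustrative assumptions, not part of the training setup:

```python
import re
import pandas as pd
import matplotlib.pyplot as plt

# Parse the markdown training-log table (| epoch | step | training loss |)
# from a local copy of this card. "training_logs.md" is a hypothetical path.
rows = []
with open("training_logs.md", encoding="utf-8") as f:
    for line in f:
        m = re.match(r"\|\s*([\d.]+)\s*\|\s*(\d+)\s*\|\s*([\d.]+)\s*\|", line)
        if m:
            rows.append((float(m.group(1)), int(m.group(2)), float(m.group(3))))

df = pd.DataFrame(rows, columns=["epoch", "step", "loss"])

# Smooth with a rolling mean so the epoch-boundary spikes don't dominate the view.
df["loss_smooth"] = df["loss"].rolling(window=25, min_periods=1).mean()

plt.plot(df["step"], df["loss"], alpha=0.3, label="raw")
plt.plot(df["step"], df["loss_smooth"], label="rolling mean (25 steps)")
plt.xlabel("step")
plt.ylabel("training loss")
plt.legend()
plt.show()
```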
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
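To approximate this environment, the versions above can be pinned at install time. A minimal sketch (note that the `+cu121` PyTorch build additionally requires the matching CUDA wheel index; a plain pin installs the default build):

```bash
# Pin the library versions listed above; torch's +cu121 variant needs the
# PyTorch CUDA wheel index (--index-url https://download.pytorch.org/whl/cu121).
pip install "sentence-transformers==3.2.1" "transformers==4.44.2" \
    "torch==2.3.1" "accelerate==1.1.1" "datasets==2.21.0" "tokenizers==0.19.1"
```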
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
| 0.8974 | 846 | 0.0814 |
| 0.8984 | 847 | 0.1161 |
| 0.8995 | 848 | 0.1087 |
| 0.9006 | 849 | 0.1345 |
| 0.9016 | 850 | 0.1259 |
| 0.9027 | 851 | 0.1402 |
| 0.9037 | 852 | 0.0984 |
| 0.9048 | 853 | 0.1238 |
| 0.9059 | 854 | 0.0818 |
| 0.9069 | 855 | 0.0998 |
| 0.9080 | 856 | 0.0865 |
| 0.9090 | 857 | 0.0814 |
| 0.9101 | 858 | 0.0685 |
| 0.9112 | 859 | 0.0847 |
| 0.9122 | 860 | 0.0518 |
| 0.9133 | 861 | 0.066 |
| 0.9143 | 862 | 0.1071 |
| 0.9154 | 863 | 0.0645 |
| 0.9165 | 864 | 0.0852 |
| 0.9175 | 865 | 0.0967 |
| 0.9186 | 866 | 0.1119 |
| 0.9196 | 867 | 0.0908 |
| 0.9207 | 868 | 0.0405 |
| 0.9218 | 869 | 0.0902 |
| 0.9228 | 870 | 0.0726 |
| 0.9239 | 871 | 0.067 |
| 0.9250 | 872 | 0.0636 |
| 0.9260 | 873 | 0.0576 |
| 0.9271 | 874 | 0.0712 |
| 0.9281 | 875 | 0.0881 |
| 0.9292 | 876 | 0.0716 |
| 0.9303 | 877 | 0.0509 |
| 0.9313 | 878 | 0.0756 |
| 0.9324 | 879 | 0.1082 |
| 0.9334 | 880 | 0.0581 |
| 0.9345 | 881 | 0.0861 |
| 0.9356 | 882 | 0.0692 |
| 0.9366 | 883 | 0.071 |
| 0.9377 | 884 | 0.0576 |
| 0.9387 | 885 | 0.0611 |
| 0.9398 | 886 | 0.056 |
| 0.9409 | 887 | 0.0728 |
| 0.9419 | 888 | 0.1169 |
| 0.9430 | 889 | 0.0735 |
| 0.9440 | 890 | 0.1343 |
| 0.9451 | 891 | 0.0717 |
| 0.9462 | 892 | 0.0953 |
| 0.9472 | 893 | 0.0759 |
| 0.9483 | 894 | 0.0806 |
| 0.9494 | 895 | 0.0753 |
| 0.9504 | 896 | 0.1001 |
| 0.9515 | 897 | 0.096 |
| 0.9525 | 898 | 0.0423 |
| 0.9536 | 899 | 0.0737 |
| 0.9547 | 900 | 0.1075 |
| 0.9557 | 901 | 0.0858 |
| 0.9568 | 902 | 0.0834 |
| 0.9578 | 903 | 0.0512 |
| 0.9589 | 904 | 0.0568 |
| 0.9600 | 905 | 0.1081 |
| 0.9610 | 906 | 0.0557 |
| 0.9621 | 907 | 0.0999 |
| 0.9631 | 908 | 0.097 |
| 0.9642 | 909 | 0.0998 |
| 0.9653 | 910 | 0.0831 |
| 0.9663 | 911 | 0.0559 |
| 0.9674 | 912 | 0.0925 |
| 0.9684 | 913 | 0.0911 |
| 0.9695 | 914 | 0.0703 |
| 0.9706 | 915 | 0.0773 |
| 0.9716 | 916 | 0.0684 |
| 0.9727 | 917 | 0.0727 |
| 0.9737 | 918 | 0.0993 |
| 0.9748 | 919 | 0.0551 |
| 0.9759 | 920 | 0.0857 |
| 0.9769 | 921 | 0.0686 |
| 0.9780 | 922 | 0.0647 |
| 0.9791 | 923 | 0.0654 |
| 0.9801 | 924 | 0.0866 |
| 0.9812 | 925 | 0.0769 |
| 0.9822 | 926 | 0.1067 |
| 0.9833 | 927 | 0.0949 |
| 0.9844 | 928 | 0.0519 |
| 0.9854 | 929 | 0.0648 |
| 0.9865 | 930 | 0.0573 |
| 0.9875 | 931 | 0.0757 |
| 0.9886 | 932 | 0.1013 |
| 0.9897 | 933 | 0.0385 |
| 0.9907 | 934 | 0.0622 |
| 0.9918 | 935 | 0.0365 |
| 0.9928 | 936 | 0.0314 |
| 0.9939 | 937 | 0.0599 |
| 0.9950 | 938 | 0.0655 |
| 0.9960 | 939 | 0.0313 |
| 0.9971 | 940 | 0.0495 |
| 0.9981 | 941 | 0.0337 |
| 0.9992 | 942 | 0.0296 |
| 1.0003 | 943 | 0.0909 |
| 1.0013 | 944 | 0.2702 |
| 1.0024 | 945 | 0.2833 |
| 1.0034 | 946 | 0.2875 |
| 1.0045 | 947 | 0.4469 |
| 1.0056 | 948 | 0.4596 |
| 1.0066 | 949 | 0.4541 |
| 1.0077 | 950 | 0.4298 |
| 1.0088 | 951 | 0.1818 |
| 1.0098 | 952 | 0.2236 |
| 1.0109 | 953 | 0.2475 |
| 1.0119 | 954 | 0.2393 |
| 1.0130 | 955 | 0.2203 |
| 1.0141 | 956 | 0.1878 |
| 1.0151 | 957 | 0.1573 |
| 1.0162 | 958 | 0.155 |
| 1.0172 | 959 | 0.2007 |
| 1.0183 | 960 | 0.3347 |
| 1.0194 | 961 | 0.2457 |
| 1.0204 | 962 | 0.2357 |
| 1.0215 | 963 | 0.2386 |
| 1.0225 | 964 | 0.3535 |
| 1.0236 | 965 | 0.2635 |
| 1.0247 | 966 | 0.3877 |
| 1.0257 | 967 | 0.2424 |
| 1.0268 | 968 | 0.4052 |
| 1.0278 | 969 | 0.2783 |
| 1.0289 | 970 | 0.4503 |
| 1.0300 | 971 | 0.3233 |
| 1.0310 | 972 | 0.4281 |
| 1.0321 | 973 | 0.3867 |
| 1.0331 | 974 | 0.3603 |
| 1.0342 | 975 | 0.3305 |
| 1.0353 | 976 | 0.3427 |
| 1.0363 | 977 | 0.3719 |
| 1.0374 | 978 | 0.3089 |
| 1.0385 | 979 | 0.2583 |
| 1.0395 | 980 | 0.2666 |
| 1.0406 | 981 | 0.2478 |
| 1.0416 | 982 | 0.3 |
| 1.0427 | 983 | 0.2226 |
| 1.0438 | 984 | 0.2448 |
| 1.0448 | 985 | 0.1496 |
| 1.0459 | 986 | 0.1866 |
| 1.0469 | 987 | 0.1322 |
| 1.0480 | 988 | 0.1383 |
| 1.0491 | 989 | 0.1007 |
| 1.0501 | 990 | 0.0931 |
| 1.0512 | 991 | 0.0771 |
| 1.0522 | 992 | 0.0945 |
| 1.0533 | 993 | 0.1203 |
| 1.0544 | 994 | 0.139 |
| 1.0554 | 995 | 0.1328 |
| 1.0565 | 996 | 0.13 |
| 1.0575 | 997 | 0.0796 |
| 1.0586 | 998 | 0.0324 |
| 1.0597 | 999 | 0.0289 |
| 1.0607 | 1000 | 0.0219 |
| 1.0618 | 1001 | 0.0375 |
| 1.0628 | 1002 | 0.022 |
| 1.0639 | 1003 | 0.0307 |
| 1.0650 | 1004 | 0.068 |
| 1.0660 | 1005 | 0.2106 |
| 1.0671 | 1006 | 0.2132 |
| 1.0682 | 1007 | 0.2303 |
| 1.0692 | 1008 | 0.1717 |
| 1.0703 | 1009 | 0.1677 |
| 1.0713 | 1010 | 0.2735 |
| 1.0724 | 1011 | 0.252 |
| 1.0735 | 1012 | 0.2336 |
| 1.0745 | 1013 | 0.233 |
| 1.0756 | 1014 | 0.3612 |
| 1.0766 | 1015 | 0.2526 |
| 1.0777 | 1016 | 0.2727 |
| 1.0788 | 1017 | 0.2948 |
| 1.0798 | 1018 | 0.2104 |
| 1.0809 | 1019 | 0.1519 |
| 1.0819 | 1020 | 0.2493 |
| 1.0830 | 1021 | 0.162 |
| 1.0841 | 1022 | 0.2143 |
| 1.0851 | 1023 | 0.1909 |
| 1.0862 | 1024 | 0.2608 |
| 1.0872 | 1025 | 0.2373 |
| 1.0883 | 1026 | 0.2523 |
| 1.0894 | 1027 | 0.2251 |
| 1.0904 | 1028 | 0.1989 |
| 1.0915 | 1029 | 0.1274 |
| 1.0925 | 1030 | 0.1261 |
| 1.0936 | 1031 | 0.0842 |
| 1.0947 | 1032 | 0.1165 |
| 1.0957 | 1033 | 0.122 |
| 1.0968 | 1034 | 0.1154 |
| 1.0979 | 1035 | 0.1832 |
| 1.0989 | 1036 | 0.1469 |
| 1.1000 | 1037 | 0.1614 |
| 1.1010 | 1038 | 0.0865 |
| 1.1021 | 1039 | 0.1235 |
| 1.1032 | 1040 | 0.1564 |
| 1.1042 | 1041 | 0.148 |
| 1.1053 | 1042 | 0.1657 |
| 1.1063 | 1043 | 0.1106 |
| 1.1074 | 1044 | 0.1182 |
| 1.1085 | 1045 | 0.133 |
| 1.1095 | 1046 | 0.0922 |
| 1.1106 | 1047 | 0.1104 |
| 1.1116 | 1048 | 0.0783 |
| 1.1127 | 1049 | 0.1089 |
| 1.1138 | 1050 | 0.0775 |
| 1.1148 | 1051 | 0.0558 |
| 1.1159 | 1052 | 0.0931 |
| 1.1169 | 1053 | 0.1448 |
| 1.1180 | 1054 | 0.104 |
| 1.1191 | 1055 | 0.1419 |
| 1.1201 | 1056 | 0.0952 |
| 1.1212 | 1057 | 0.1283 |
| 1.1222 | 1058 | 0.106 |
| 1.1233 | 1059 | 0.1464 |
| 1.1244 | 1060 | 0.1023 |
| 1.1254 | 1061 | 0.1623 |
| 1.1265 | 1062 | 0.2852 |
| 1.1276 | 1063 | 0.4375 |
| 1.1286 | 1064 | 0.3692 |
| 1.1297 | 1065 | 0.353 |
| 1.1307 | 1066 | 0.4234 |
| 1.1318 | 1067 | 0.2492 |
| 1.1329 | 1068 | 0.2313 |
| 1.1339 | 1069 | 0.2968 |
| 1.1350 | 1070 | 0.2625 |
| 1.1360 | 1071 | 0.1686 |
| 1.1371 | 1072 | 0.0894 |
| 1.1382 | 1073 | 0.1292 |
| 1.1392 | 1074 | 0.1375 |
| 1.1403 | 1075 | 0.1176 |
| 1.1413 | 1076 | 0.1892 |
| 1.1424 | 1077 | 0.3492 |
| 1.1435 | 1078 | 0.1426 |
| 1.1445 | 1079 | 0.0068 |
| 1.1456 | 1080 | 0.0103 |
| 1.1466 | 1081 | 0.0165 |
| 1.1477 | 1082 | 0.0033 |
| 1.1488 | 1083 | 0.0136 |
| 1.1498 | 1084 | 0.0014 |
| 1.1509 | 1085 | 0.0022 |
| 1.1519 | 1086 | 0.0012 |
| 1.1530 | 1087 | 0.0046 |
| 1.1541 | 1088 | 0.0148 |
| 1.1551 | 1089 | 0.0086 |
| 1.1562 | 1090 | 0.0041 |
| 1.1573 | 1091 | 0.0114 |
| 1.1583 | 1092 | 0.0016 |
| 1.1594 | 1093 | 0.0098 |
| 1.1604 | 1094 | 0.0026 |
| 1.1615 | 1095 | 0.0081 |
| 1.1626 | 1096 | 0.0016 |
| 1.1636 | 1097 | 0.0018 |
| 1.1647 | 1098 | 0.0086 |
| 1.1657 | 1099 | 0.002 |
| 1.1668 | 1100 | 0.0027 |
| 1.1679 | 1101 | 0.0036 |
| 1.1689 | 1102 | 0.0161 |
| 1.1700 | 1103 | 0.0038 |
| 1.1710 | 1104 | 0.0011 |
| 1.1721 | 1105 | 0.0087 |
| 1.1732 | 1106 | 0.0026 |
| 1.1742 | 1107 | 0.0095 |
| 1.1753 | 1108 | 0.0054 |
| 1.1763 | 1109 | 0.0014 |
| 1.1774 | 1110 | 0.0083 |
| 1.1785 | 1111 | 0.0081 |
| 1.1795 | 1112 | 0.0079 |
| 1.1806 | 1113 | 0.0078 |
| 1.1816 | 1114 | 0.0033 |
| 1.1827 | 1115 | 0.0016 |
| 1.1838 | 1116 | 0.0038 |
| 1.1848 | 1117 | 0.0074 |
| 1.1859 | 1118 | 0.003 |
| 1.1870 | 1119 | 0.0035 |
| 1.1880 | 1120 | 0.005 |
| 1.1891 | 1121 | 0.0046 |
| 1.1901 | 1122 | 0.0027 |
| 1.1912 | 1123 | 0.0162 |
| 1.1923 | 1124 | 0.0109 |
| 1.1933 | 1125 | 0.016 |
| 1.1944 | 1126 | 0.0113 |
| 1.1954 | 1127 | 0.0057 |
| 1.1965 | 1128 | 0.008 |
| 1.1976 | 1129 | 0.0086 |
| 1.1986 | 1130 | 0.0106 |
| 1.1997 | 1131 | 0.0081 |
| 1.2007 | 1132 | 0.0034 |
| 1.2018 | 1133 | 0.0098 |
| 1.2029 | 1134 | 0.0062 |
| 1.2039 | 1135 | 0.0072 |
| 1.2050 | 1136 | 0.0076 |
| 1.2060 | 1137 | 0.0134 |
| 1.2071 | 1138 | 0.0036 |
| 1.2082 | 1139 | 0.0044 |
| 1.2092 | 1140 | 0.0014 |
| 1.2103 | 1141 | 0.008 |
| 1.2113 | 1142 | 0.0069 |
| 1.2124 | 1143 | 0.0045 |
| 1.2135 | 1144 | 0.0165 |
| 1.2145 | 1145 | 0.0007 |
| 1.2156 | 1146 | 0.0055 |
| 1.2167 | 1147 | 0.0087 |
| 1.2177 | 1148 | 0.0132 |
| 1.2188 | 1149 | 0.0068 |
| 1.2198 | 1150 | 0.0121 |
| 1.2209 | 1151 | 0.0025 |
| 1.2220 | 1152 | 0.0069 |
| 1.2230 | 1153 | 0.0007 |
| 1.2241 | 1154 | 0.01 |
| 1.2251 | 1155 | 0.0069 |
| 1.2262 | 1156 | 0.0091 |
| 1.2273 | 1157 | 0.0022 |
| 1.2283 | 1158 | 0.0097 |
| 1.2294 | 1159 | 0.0081 |
| 1.2304 | 1160 | 0.0022 |
| 1.2315 | 1161 | 0.0022 |
| 1.2326 | 1162 | 0.0011 |
| 1.2336 | 1163 | 0.002 |
| 1.2347 | 1164 | 0.0117 |
| 1.2357 | 1165 | 0.0046 |
| 1.2368 | 1166 | 0.0068 |
| 1.2379 | 1167 | 0.0051 |
| 1.2389 | 1168 | 0.0041 |
| 1.2400 | 1169 | 0.0021 |
| 1.2411 | 1170 | 0.0029 |
| 1.2421 | 1171 | 0.0098 |
| 1.2432 | 1172 | 0.0061 |
| 1.2442 | 1173 | 0.0006 |
| 1.2453 | 1174 | 0.0017 |
| 1.2464 | 1175 | 0.0015 |
| 1.2474 | 1176 | 0.012 |
| 1.2485 | 1177 | 0.0112 |
| 1.2495 | 1178 | 0.011 |
| 1.2506 | 1179 | 0.0113 |
| 1.2517 | 1180 | 0.0112 |
| 1.2527 | 1181 | 0.0044 |
| 1.2538 | 1182 | 0.0037 |
| 1.2548 | 1183 | 0.0034 |
| 1.2559 | 1184 | 0.0093 |
| 1.2570 | 1185 | 0.0061 |
| 1.2580 | 1186 | 0.0176 |
| 1.2591 | 1187 | 0.0026 |
| 1.2601 | 1188 | 0.0042 |
| 1.2612 | 1189 | 0.0082 |
| 1.2623 | 1190 | 0.0246 |
| 1.2633 | 1191 | 0.0633 |
| 1.2644 | 1192 | 0.0574 |
| 1.2654 | 1193 | 0.0554 |
| 1.2665 | 1194 | 0.0376 |
| 1.2676 | 1195 | 0.0359 |
| 1.2686 | 1196 | 0.0581 |
| 1.2697 | 1197 | 0.0513 |
| 1.2708 | 1198 | 0.0462 |
| 1.2718 | 1199 | 0.0148 |
| 1.2729 | 1200 | 0.0154 |
| 1.2739 | 1201 | 0.0337 |
| 1.2750 | 1202 | 0.0259 |
| 1.2761 | 1203 | 0.041 |
| 1.2771 | 1204 | 0.0289 |
| 1.2782 | 1205 | 0.0164 |
| 1.2792 | 1206 | 0.0262 |
| 1.2803 | 1207 | 0.0215 |
| 1.2814 | 1208 | 0.0387 |
| 1.2824 | 1209 | 0.0232 |
| 1.2835 | 1210 | 0.0436 |
| 1.2845 | 1211 | 0.0393 |
| 1.2856 | 1212 | 0.0062 |
| 1.2867 | 1213 | 0.022 |
| 1.2877 | 1214 | 0.0116 |
| 1.2888 | 1215 | 0.021 |
| 1.2898 | 1216 | 0.0166 |
| 1.2909 | 1217 | 0.004 |
| 1.2920 | 1218 | 0.0308 |
| 1.2930 | 1219 | 0.024 |
| 1.2941 | 1220 | 0.0101 |
| 1.2951 | 1221 | 0.0115 |
| 1.2962 | 1222 | 0.0046 |
| 1.2973 | 1223 | 0.0114 |
| 1.2983 | 1224 | 0.016 |
| 1.2994 | 1225 | 0.0264 |
| 1.3005 | 1226 | 0.0097 |
| 1.3015 | 1227 | 0.0126 |
| 1.3026 | 1228 | 0.0062 |
| 1.3036 | 1229 | 0.0104 |
| 1.3047 | 1230 | 0.022 |
| 1.3058 | 1231 | 0.0045 |
| 1.3068 | 1232 | 0.0073 |
| 1.3079 | 1233 | 0.012 |
| 1.3089 | 1234 | 0.0103 |
| 1.3100 | 1235 | 0.0124 |
| 1.3111 | 1236 | 0.0088 |
| 1.3121 | 1237 | 0.0059 |
| 1.3132 | 1238 | 0.0115 |
| 1.3142 | 1239 | 0.0116 |
| 1.3153 | 1240 | 0.0234 |
| 1.3164 | 1241 | 0.0093 |
| 1.3174 | 1242 | 0.0012 |
| 1.3185 | 1243 | 0.0082 |
| 1.3195 | 1244 | 0.0094 |
| 1.3206 | 1245 | 0.0079 |
| 1.3217 | 1246 | 0.0109 |
| 1.3227 | 1247 | 0.0072 |
| 1.3238 | 1248 | 0.01 |
| 1.3248 | 1249 | 0.0157 |
| 1.3259 | 1250 | 0.0239 |
| 1.3270 | 1251 | 0.008 |
| 1.3280 | 1252 | 0.0022 |
| 1.3291 | 1253 | 0.0057 |
| 1.3302 | 1254 | 0.0134 |
| 1.3312 | 1255 | 0.01 |
| 1.3323 | 1256 | 0.0152 |
| 1.3333 | 1257 | 0.0226 |
| 1.3344 | 1258 | 0.0117 |
| 1.3355 | 1259 | 0.017 |
| 1.3365 | 1260 | 0.0255 |
| 1.3376 | 1261 | 0.008 |
| 1.3386 | 1262 | 0.0119 |
| 1.3397 | 1263 | 0.0126 |
| 1.3408 | 1264 | 0.0064 |
| 1.3418 | 1265 | 0.0069 |
| 1.3429 | 1266 | 0.0122 |
| 1.3439 | 1267 | 0.0266 |
| 1.3450 | 1268 | 0.0151 |
| 1.3461 | 1269 | 0.007 |
| 1.3471 | 1270 | 0.0132 |
| 1.3482 | 1271 | 0.0049 |
| 1.3492 | 1272 | 0.005 |
| 1.3503 | 1273 | 0.014 |
| 1.3514 | 1274 | 0.0157 |
| 1.3524 | 1275 | 0.0195 |
| 1.3535 | 1276 | 0.0135 |
| 1.3545 | 1277 | 0.006 |
| 1.3556 | 1278 | 0.0297 |
| 1.3567 | 1279 | 0.0079 |
| 1.3577 | 1280 | 0.0226 |
| 1.3588 | 1281 | 0.0126 |
| 1.3599 | 1282 | 0.019 |
| 1.3609 | 1283 | 0.0218 |
| 1.3620 | 1284 | 0.0088 |
| 1.3630 | 1285 | 0.0221 |
| 1.3641 | 1286 | 0.0186 |
| 1.3652 | 1287 | 0.007 |
| 1.3662 | 1288 | 0.0189 |
| 1.3673 | 1289 | 0.0117 |
| 1.3683 | 1290 | 0.0164 |
| 1.3694 | 1291 | 0.0297 |
| 1.3705 | 1292 | 0.014 |
| 1.3715 | 1293 | 0.0231 |
| 1.3726 | 1294 | 0.0547 |
| 1.3736 | 1295 | 0.0308 |
| 1.3747 | 1296 | 0.0171 |
| 1.3758 | 1297 | 0.0214 |
| 1.3768 | 1298 | 0.0254 |
| 1.3779 | 1299 | 0.0429 |
| 1.3789 | 1300 | 0.0062 |
| 1.3800 | 1301 | 0.0187 |
| 1.3811 | 1302 | 0.0117 |
| 1.3821 | 1303 | 0.0067 |
| 1.3832 | 1304 | 0.0189 |
| 1.3842 | 1305 | 0.0088 |
| 1.3853 | 1306 | 0.017 |
| 1.3864 | 1307 | 0.0125 |
| 1.3874 | 1308 | 0.0241 |
| 1.3885 | 1309 | 0.0161 |
| 1.3896 | 1310 | 0.0135 |
| 1.3906 | 1311 | 0.0152 |
| 1.3917 | 1312 | 0.0169 |
| 1.3927 | 1313 | 0.0173 |
| 1.3938 | 1314 | 0.0115 |
| 1.3949 | 1315 | 0.0143 |
| 1.3959 | 1316 | 0.0146 |
| 1.3970 | 1317 | 0.0219 |
| 1.3980 | 1318 | 0.0221 |
| 1.3991 | 1319 | 0.0076 |
| 1.4002 | 1320 | 0.0226 |
| 1.4012 | 1321 | 0.0203 |
| 1.4023 | 1322 | 0.0055 |
| 1.4033 | 1323 | 0.0193 |
| 1.4044 | 1324 | 0.0161 |
| 1.4055 | 1325 | 0.0252 |
| 1.4065 | 1326 | 0.0304 |
| 1.4076 | 1327 | 0.0187 |
| 1.4086 | 1328 | 0.0261 |
| 1.4097 | 1329 | 0.0072 |
| 1.4108 | 1330 | 0.0171 |
| 1.4118 | 1331 | 0.0235 |
| 1.4129 | 1332 | 0.0293 |
| 1.4139 | 1333 | 0.0253 |
| 1.4150 | 1334 | 0.0106 |
| 1.4161 | 1335 | 0.0092 |
| 1.4171 | 1336 | 0.0156 |
| 1.4182 | 1337 | 0.0325 |
| 1.4193 | 1338 | 0.0156 |
| 1.4203 | 1339 | 0.0137 |
| 1.4214 | 1340 | 0.0411 |
| 1.4224 | 1341 | 0.0236 |
| 1.4235 | 1342 | 0.0284 |
| 1.4246 | 1343 | 0.0489 |
| 1.4256 | 1344 | 0.023 |
| 1.4267 | 1345 | 0.0261 |
| 1.4277 | 1346 | 0.026 |
| 1.4288 | 1347 | 0.0208 |
| 1.4299 | 1348 | 0.0085 |
| 1.4309 | 1349 | 0.0199 |
| 1.4320 | 1350 | 0.0167 |
| 1.4330 | 1351 | 0.0213 |
| 1.4341 | 1352 | 0.0108 |
| 1.4352 | 1353 | 0.0102 |
| 1.4362 | 1354 | 0.0183 |
| 1.4373 | 1355 | 0.02 |
| 1.4383 | 1356 | 0.0182 |
| 1.4394 | 1357 | 0.03 |
| 1.4405 | 1358 | 0.0311 |
| 1.4415 | 1359 | 0.0253 |
| 1.4426 | 1360 | 0.0155 |
| 1.4436 | 1361 | 0.0141 |
| 1.4447 | 1362 | 0.0129 |
| 1.4458 | 1363 | 0.0202 |
| 1.4468 | 1364 | 0.0228 |
| 1.4479 | 1365 | 0.0269 |
| 1.4490 | 1366 | 0.0109 |
| 1.4500 | 1367 | 0.0379 |
| 1.4511 | 1368 | 0.0099 |
| 1.4521 | 1369 | 0.0166 |
| 1.4532 | 1370 | 0.0067 |
| 1.4543 | 1371 | 0.0078 |
| 1.4553 | 1372 | 0.0114 |
| 1.4564 | 1373 | 0.0215 |
| 1.4574 | 1374 | 0.0404 |
| 1.4585 | 1375 | 0.0314 |
| 1.4596 | 1376 | 0.0334 |
| 1.4606 | 1377 | 0.03 |
| 1.4617 | 1378 | 0.0256 |
| 1.4627 | 1379 | 0.0551 |
| 1.4638 | 1380 | 0.0262 |
| 1.4649 | 1381 | 0.0389 |
| 1.4659 | 1382 | 0.0419 |
| 1.4670 | 1383 | 0.0144 |
| 1.4680 | 1384 | 0.0191 |
| 1.4691 | 1385 | 0.0438 |
| 1.4702 | 1386 | 0.0711 |
| 1.4712 | 1387 | 0.0399 |
| 1.4723 | 1388 | 0.0269 |
| 1.4733 | 1389 | 0.0496 |
| 1.4744 | 1390 | 0.0565 |
| 1.4755 | 1391 | 0.0316 |
| 1.4765 | 1392 | 0.038 |
| 1.4776 | 1393 | 0.0471 |
| 1.4787 | 1394 | 0.0327 |
| 1.4797 | 1395 | 0.0296 |
| 1.4808 | 1396 | 0.0198 |
| 1.4818 | 1397 | 0.0383 |
| 1.4829 | 1398 | 0.0398 |
| 1.4840 | 1399 | 0.0357 |
| 1.4850 | 1400 | 0.0236 |
| 1.4861 | 1401 | 0.06 |
| 1.4871 | 1402 | 0.0564 |
| 1.4882 | 1403 | 0.0236 |
| 1.4893 | 1404 | 0.043 |
| 1.4903 | 1405 | 0.021 |
| 1.4914 | 1406 | 0.0359 |
| 1.4924 | 1407 | 0.0362 |
| 1.4935 | 1408 | 0.0323 |
| 1.4946 | 1409 | 0.0209 |
| 1.4956 | 1410 | 0.0155 |
| 1.4967 | 1411 | 0.0255 |
| 1.4977 | 1412 | 0.0216 |
| 1.4988 | 1413 | 0.0208 |
| 1.4999 | 1414 | 0.0263 |
| 1.5009 | 1415 | 0.0102 |
| 1.5020 | 1416 | 0.0115 |
| 1.5030 | 1417 | 0.0183 |
| 1.5041 | 1418 | 0.0148 |
| 1.5052 | 1419 | 0.0426 |
| 1.5062 | 1420 | 0.0255 |
| 1.5073 | 1421 | 0.0533 |
| 1.5084 | 1422 | 0.0657 |
| 1.5094 | 1423 | 0.068 |
| 1.5105 | 1424 | 0.0573 |
| 1.5115 | 1425 | 0.0429 |
| 1.5126 | 1426 | 0.0218 |
| 1.5137 | 1427 | 0.0136 |
| 1.5147 | 1428 | 0.0214 |
| 1.5158 | 1429 | 0.016 |
| 1.5168 | 1430 | 0.0411 |
| 1.5179 | 1431 | 0.0417 |
| 1.5190 | 1432 | 0.056 |
| 1.5200 | 1433 | 0.0299 |
| 1.5211 | 1434 | 0.016 |
| 1.5221 | 1435 | 0.0557 |
| 1.5232 | 1436 | 0.0174 |
| 1.5243 | 1437 | 0.024 |
| 1.5253 | 1438 | 0.059 |
| 1.5264 | 1439 | 0.097 |
| 1.5274 | 1440 | 0.0319 |
| 1.5285 | 1441 | 0.0174 |
| 1.5296 | 1442 | 0.044 |
| 1.5306 | 1443 | 0.0239 |
| 1.5317 | 1444 | 0.0142 |
| 1.5327 | 1445 | 0.0247 |
| 1.5338 | 1446 | 0.0239 |
| 1.5349 | 1447 | 0.0433 |
| 1.5359 | 1448 | 0.0214 |
| 1.5370 | 1449 | 0.053 |
| 1.5381 | 1450 | 0.0279 |
| 1.5391 | 1451 | 0.0216 |
| 1.5402 | 1452 | 0.0288 |
| 1.5412 | 1453 | 0.0295 |
| 1.5423 | 1454 | 0.0403 |
| 1.5434 | 1455 | 0.0227 |
| 1.5444 | 1456 | 0.0417 |
| 1.5455 | 1457 | 0.0259 |
| 1.5465 | 1458 | 0.0329 |
| 1.5476 | 1459 | 0.0386 |
| 1.5487 | 1460 | 0.0401 |
| 1.5497 | 1461 | 0.042 |
| 1.5508 | 1462 | 0.0115 |
| 1.5518 | 1463 | 0.029 |
| 1.5529 | 1464 | 0.0177 |
| 1.5540 | 1465 | 0.0276 |
| 1.5550 | 1466 | 0.0304 |
| 1.5561 | 1467 | 0.0401 |
| 1.5571 | 1468 | 0.0302 |
| 1.5582 | 1469 | 0.0282 |
| 1.5593 | 1470 | 0.0224 |
| 1.5603 | 1471 | 0.0271 |
| 1.5614 | 1472 | 0.0297 |
| 1.5625 | 1473 | 0.0419 |
| 1.5635 | 1474 | 0.0146 |
| 1.5646 | 1475 | 0.0288 |
| 1.5656 | 1476 | 0.0235 |
| 1.5667 | 1477 | 0.0287 |
| 1.5678 | 1478 | 0.028 |
| 1.5688 | 1479 | 0.0239 |
| 1.5699 | 1480 | 0.0374 |
| 1.5709 | 1481 | 0.0393 |
| 1.5720 | 1482 | 0.0234 |
| 1.5731 | 1483 | 0.0472 |
| 1.5741 | 1484 | 0.0254 |
| 1.5752 | 1485 | 0.0152 |
| 1.5762 | 1486 | 0.0378 |
| 1.5773 | 1487 | 0.0249 |
| 1.5784 | 1488 | 0.0228 |
| 1.5794 | 1489 | 0.0387 |
| 1.5805 | 1490 | 0.0576 |
| 1.5815 | 1491 | 0.0272 |
| 1.5826 | 1492 | 0.0457 |
| 1.5837 | 1493 | 0.034 |
| 1.5847 | 1494 | 0.0358 |
| 1.5858 | 1495 | 0.0638 |
| 1.5868 | 1496 | 0.03 |
| 1.5879 | 1497 | 0.0575 |
| 1.5890 | 1498 | 0.0318 |
| 1.5900 | 1499 | 0.0439 |
| 1.5911 | 1500 | 0.0962 |
| 1.5922 | 1501 | 0.0171 |
| 1.5932 | 1502 | 0.0155 |
| 1.5943 | 1503 | 0.0158 |
| 1.5953 | 1504 | 0.0117 |
| 1.5964 | 1505 | 0.0163 |
| 1.5975 | 1506 | 0.0392 |
| 1.5985 | 1507 | 0.0079 |
| 1.5996 | 1508 | 0.0142 |
| 1.6006 | 1509 | 0.0223 |
| 1.6017 | 1510 | 0.0274 |
| 1.6028 | 1511 | 0.0188 |
| 1.6038 | 1512 | 0.0129 |
| 1.6049 | 1513 | 0.0323 |
| 1.6059 | 1514 | 0.023 |
| 1.6070 | 1515 | 0.0111 |
| 1.6081 | 1516 | 0.0248 |
| 1.6091 | 1517 | 0.0215 |
| 1.6102 | 1518 | 0.0138 |
| 1.6112 | 1519 | 0.0283 |
| 1.6123 | 1520 | 0.0497 |
| 1.6134 | 1521 | 0.0151 |
| 1.6144 | 1522 | 0.0174 |
| 1.6155 | 1523 | 0.0457 |
| 1.6165 | 1524 | 0.0614 |
| 1.6176 | 1525 | 0.0433 |
| 1.6187 | 1526 | 0.0346 |
| 1.6197 | 1527 | 0.0267 |
| 1.6208 | 1528 | 0.0152 |
| 1.6219 | 1529 | 0.0408 |
| 1.6229 | 1530 | 0.0235 |
| 1.6240 | 1531 | 0.0467 |
| 1.6250 | 1532 | 0.0742 |
| 1.6261 | 1533 | 0.0303 |
| 1.6272 | 1534 | 0.0191 |
| 1.6282 | 1535 | 0.0153 |
| 1.6293 | 1536 | 0.0151 |
| 1.6303 | 1537 | 0.0189 |
| 1.6314 | 1538 | 0.0144 |
| 1.6325 | 1539 | 0.04 |
| 1.6335 | 1540 | 0.0167 |
| 1.6346 | 1541 | 0.0099 |
| 1.6356 | 1542 | 0.0219 |
| 1.6367 | 1543 | 0.0224 |
| 1.6378 | 1544 | 0.0116 |
| 1.6388 | 1545 | 0.0169 |
| 1.6399 | 1546 | 0.0346 |
| 1.6409 | 1547 | 0.0261 |
| 1.6420 | 1548 | 0.0209 |
| 1.6431 | 1549 | 0.0233 |
| 1.6441 | 1550 | 0.0103 |
| 1.6452 | 1551 | 0.0204 |
| 1.6462 | 1552 | 0.0203 |
| 1.6473 | 1553 | 0.0165 |
| 1.6484 | 1554 | 0.0243 |
| 1.6494 | 1555 | 0.0198 |
| 1.6505 | 1556 | 0.0171 |
| 1.6516 | 1557 | 0.0233 |
| 1.6526 | 1558 | 0.0137 |
| 1.6537 | 1559 | 0.003 |
| 1.6547 | 1560 | 0.0037 |
| 1.6558 | 1561 | 0.0127 |
| 1.6569 | 1562 | 0.0184 |
| 1.6579 | 1563 | 0.0207 |
| 1.6590 | 1564 | 0.0196 |
| 1.6600 | 1565 | 0.0131 |
| 1.6611 | 1566 | 0.0198 |
| 1.6622 | 1567 | 0.0273 |
| 1.6632 | 1568 | 0.0137 |
| 1.6643 | 1569 | 0.009 |
| 1.6653 | 1570 | 0.0054 |
| 1.6664 | 1571 | 0.0043 |
| 1.6675 | 1572 | 0.0051 |
| 1.6685 | 1573 | 0.0146 |
| 1.6696 | 1574 | 0.0128 |
| 1.6706 | 1575 | 0.0043 |
| 1.6717 | 1576 | 0.0047 |
| 1.6728 | 1577 | 0.0038 |
| 1.6738 | 1578 | 0.0156 |
| 1.6749 | 1579 | 0.0521 |
| 1.6759 | 1580 | 0.0339 |
| 1.6770 | 1581 | 0.0093 |
| 1.6781 | 1582 | 0.0028 |
| 1.6791 | 1583 | 0.0162 |
| 1.6802 | 1584 | 0.0497 |
| 1.6813 | 1585 | 0.0542 |
| 1.6823 | 1586 | 0.002 |
| 1.6834 | 1587 | 0.0059 |
| 1.6844 | 1588 | 0.0062 |
| 1.6855 | 1589 | 0.0019 |
| 1.6866 | 1590 | 0.0025 |
| 1.6876 | 1591 | 0.0056 |
| 1.6887 | 1592 | 0.0009 |
| 1.6897 | 1593 | 0.0036 |
| 1.6908 | 1594 | 0.007 |
| 1.6919 | 1595 | 0.0064 |
| 1.6929 | 1596 | 0.0048 |
| 1.6940 | 1597 | 0.1758 |
| 1.6950 | 1598 | 0.209 |
| 1.6961 | 1599 | 0.2029 |
| 1.6972 | 1600 | 0.1568 |
| 1.6982 | 1601 | 0.1842 |
| 1.6993 | 1602 | 0.1696 |
| 1.7003 | 1603 | 0.2118 |
| 1.7014 | 1604 | 0.1503 |
| 1.7025 | 1605 | 0.1528 |
| 1.7035 | 1606 | 0.2494 |
| 1.7046 | 1607 | 0.2627 |
| 1.7056 | 1608 | 0.1412 |
| 1.7067 | 1609 | 0.1272 |
| 1.7078 | 1610 | 0.1699 |
| 1.7088 | 1611 | 0.1036 |
| 1.7099 | 1612 | 0.1429 |
| 1.7110 | 1613 | 0.1593 |
| 1.7120 | 1614 | 0.1426 |
| 1.7131 | 1615 | 0.1173 |
| 1.7141 | 1616 | 0.0922 |
| 1.7152 | 1617 | 0.1081 |
| 1.7163 | 1618 | 0.1282 |
| 1.7173 | 1619 | 0.1956 |
| 1.7184 | 1620 | 0.1709 |
| 1.7194 | 1621 | 0.0834 |
| 1.7205 | 1622 | 0.0935 |
| 1.7216 | 1623 | 0.1185 |
| 1.7226 | 1624 | 0.1107 |
| 1.7237 | 1625 | 0.1134 |
| 1.7247 | 1626 | 0.0972 |
| 1.7258 | 1627 | 0.1023 |
| 1.7269 | 1628 | 0.1096 |
| 1.7279 | 1629 | 0.0631 |
| 1.7290 | 1630 | 0.1146 |
| 1.7300 | 1631 | 0.1754 |
| 1.7311 | 1632 | 0.4233 |
| 1.7322 | 1633 | 0.3293 |
| 1.7332 | 1634 | 0.2333 |
| 1.7343 | 1635 | 0.16 |
| 1.7353 | 1636 | 0.1357 |
| 1.7364 | 1637 | 0.1226 |
| 1.7375 | 1638 | 0.214 |
| 1.7385 | 1639 | 0.1011 |
| 1.7396 | 1640 | 0.1071 |
| 1.7407 | 1641 | 0.0949 |
| 1.7417 | 1642 | 0.0876 |
| 1.7428 | 1643 | 0.0949 |
| 1.7438 | 1644 | 0.1429 |
| 1.7449 | 1645 | 0.0649 |
| 1.7460 | 1646 | 0.0918 |
| 1.7470 | 1647 | 0.0687 |
| 1.7481 | 1648 | 0.1216 |
| 1.7491 | 1649 | 0.0785 |
| 1.7502 | 1650 | 0.0812 |
| 1.7513 | 1651 | 0.057 |
| 1.7523 | 1652 | 0.0815 |
| 1.7534 | 1653 | 0.0794 |
| 1.7544 | 1654 | 0.0861 |
| 1.7555 | 1655 | 0.0897 |
| 1.7566 | 1656 | 0.0891 |
| 1.7576 | 1657 | 0.101 |
| 1.7587 | 1658 | 0.1225 |
| 1.7597 | 1659 | 0.0586 |
| 1.7608 | 1660 | 0.0609 |
| 1.7619 | 1661 | 0.0702 |
| 1.7629 | 1662 | 0.0691 |
| 1.7640 | 1663 | 0.066 |
| 1.7650 | 1664 | 0.0573 |
| 1.7661 | 1665 | 0.0774 |
| 1.7672 | 1666 | 0.0948 |
| 1.7682 | 1667 | 0.0387 |
| 1.7693 | 1668 | 0.0433 |
| 1.7704 | 1669 | 0.0358 |
| 1.7714 | 1670 | 0.0703 |
| 1.7725 | 1671 | 0.0533 |
| 1.7735 | 1672 | 0.0882 |
| 1.7746 | 1673 | 0.0595 |
| 1.7757 | 1674 | 0.0844 |
| 1.7767 | 1675 | 0.0512 |
| 1.7778 | 1676 | 0.0558 |
| 1.7788 | 1677 | 0.0574 |
| 1.7799 | 1678 | 0.0317 |
| 1.7810 | 1679 | 0.0399 |
| 1.7820 | 1680 | 0.0826 |
| 1.7831 | 1681 | 0.0656 |
| 1.7841 | 1682 | 0.0463 |
| 1.7852 | 1683 | 0.0578 |
| 1.7863 | 1684 | 0.0278 |
| 1.7873 | 1685 | 0.0505 |
| 1.7884 | 1686 | 0.0603 |
| 1.7894 | 1687 | 0.0277 |
| 1.7905 | 1688 | 0.059 |
| 1.7916 | 1689 | 0.0582 |
| 1.7926 | 1690 | 0.0461 |
| 1.7937 | 1691 | 0.0725 |
| 1.7947 | 1692 | 0.0671 |
| 1.7958 | 1693 | 0.0957 |
| 1.7969 | 1694 | 0.0618 |
| 1.7979 | 1695 | 0.0794 |
| 1.7990 | 1696 | 0.0457 |
| 1.8001 | 1697 | 0.0539 |
| 1.8011 | 1698 | 0.0904 |
| 1.8022 | 1699 | 0.0346 |
| 1.8032 | 1700 | 0.0519 |
| 1.8043 | 1701 | 0.0343 |
| 1.8054 | 1702 | 0.0425 |
| 1.8064 | 1703 | 0.017 |
| 1.8075 | 1704 | 0.0778 |
| 1.8085 | 1705 | 0.0784 |
| 1.8096 | 1706 | 0.0356 |
| 1.8107 | 1707 | 0.0243 |
| 1.8117 | 1708 | 0.0393 |
| 1.8128 | 1709 | 0.0469 |
| 1.8138 | 1710 | 0.0386 |
| 1.8149 | 1711 | 0.0382 |
| 1.8160 | 1712 | 0.0692 |
| 1.8170 | 1713 | 0.039 |
| 1.8181 | 1714 | 0.0386 |
| 1.8191 | 1715 | 0.0293 |
| 1.8202 | 1716 | 0.0479 |
| 1.8213 | 1717 | 0.0404 |
| 1.8223 | 1718 | 0.0358 |
| 1.8234 | 1719 | 0.048 |
| 1.8244 | 1720 | 0.0404 |
| 1.8255 | 1721 | 0.0509 |
| 1.8266 | 1722 | 0.046 |
| 1.8276 | 1723 | 0.0297 |
| 1.8287 | 1724 | 0.059 |
| 1.8298 | 1725 | 0.0279 |
| 1.8308 | 1726 | 0.0557 |
| 1.8319 | 1727 | 0.0264 |
| 1.8329 | 1728 | 0.0384 |
| 1.8340 | 1729 | 0.0447 |
| 1.8351 | 1730 | 0.0631 |
| 1.8361 | 1731 | 0.0456 |
| 1.8372 | 1732 | 0.047 |
| 1.8382 | 1733 | 0.0218 |
| 1.8393 | 1734 | 0.0422 |
| 1.8404 | 1735 | 0.0387 |
| 1.8414 | 1736 | 0.0625 |
| 1.8425 | 1737 | 0.0606 |
| 1.8435 | 1738 | 0.0261 |
| 1.8446 | 1739 | 0.0513 |
| 1.8457 | 1740 | 0.0197 |
| 1.8467 | 1741 | 0.077 |
| 1.8478 | 1742 | 0.0386 |
| 1.8488 | 1743 | 0.0374 |
| 1.8499 | 1744 | 0.0324 |
| 1.8510 | 1745 | 0.0136 |
| 1.8520 | 1746 | 0.0293 |
| 1.8531 | 1747 | 0.0406 |
| 1.8542 | 1748 | 0.0218 |
| 1.8552 | 1749 | 0.0432 |
| 1.8563 | 1750 | 0.0239 |
| 1.8573 | 1751 | 0.025 |
| 1.8584 | 1752 | 0.0474 |
| 1.8595 | 1753 | 0.0799 |
| 1.8605 | 1754 | 0.0507 |
| 1.8616 | 1755 | 0.0851 |
| 1.8626 | 1756 | 0.0729 |
| 1.8637 | 1757 | 0.059 |
| 1.8648 | 1758 | 0.0346 |
| 1.8658 | 1759 | 0.0129 |
| 1.8669 | 1760 | 0.0364 |
| 1.8679 | 1761 | 0.0431 |
| 1.8690 | 1762 | 0.0536 |
| 1.8701 | 1763 | 0.0393 |
| 1.8711 | 1764 | 0.0568 |
| 1.8722 | 1765 | 0.0501 |
| 1.8732 | 1766 | 0.0273 |
| 1.8743 | 1767 | 0.022 |
| 1.8754 | 1768 | 0.0223 |
| 1.8764 | 1769 | 0.0463 |
| 1.8775 | 1770 | 0.0282 |
| 1.8785 | 1771 | 0.0594 |
| 1.8796 | 1772 | 0.0493 |
| 1.8807 | 1773 | 0.0374 |
| 1.8817 | 1774 | 0.0367 |
| 1.8828 | 1775 | 0.0131 |
| 1.8839 | 1776 | 0.0286 |
| 1.8849 | 1777 | 0.0263 |
| 1.8860 | 1778 | 0.019 |
| 1.8870 | 1779 | 0.0298 |
| 1.8881 | 1780 | 0.0356 |
| 1.8892 | 1781 | 0.0653 |
| 1.8902 | 1782 | 0.0717 |
| 1.8913 | 1783 | 0.0426 |
| 1.8923 | 1784 | 0.0305 |
| 1.8934 | 1785 | 0.0233 |
| 1.8945 | 1786 | 0.0459 |
| 1.8955 | 1787 | 0.0258 |
| 1.8966 | 1788 | 0.0445 |
| 1.8976 | 1789 | 0.0447 |
| 1.8987 | 1790 | 0.0366 |
| 1.8998 | 1791 | 0.0337 |
| 1.9008 | 1792 | 0.0335 |
| 1.9019 | 1793 | 0.0623 |
| 1.9029 | 1794 | 0.0451 |
| 1.9040 | 1795 | 0.036 |
| 1.9051 | 1796 | 0.0424 |
| 1.9061 | 1797 | 0.0513 |
| 1.9072 | 1798 | 0.0367 |
| 1.9082 | 1799 | 0.0263 |
| 1.9093 | 1800 | 0.023 |
| 1.9104 | 1801 | 0.0224 |
| 1.9114 | 1802 | 0.0309 |
| 1.9125 | 1803 | 0.0199 |
| 1.9136 | 1804 | 0.0281 |
| 1.9146 | 1805 | 0.0324 |
| 1.9157 | 1806 | 0.0149 |
| 1.9167 | 1807 | 0.0152 |
| 1.9178 | 1808 | 0.0584 |
| 1.9189 | 1809 | 0.013 |
| 1.9199 | 1810 | 0.0409 |
| 1.9210 | 1811 | 0.0297 |
| 1.9220 | 1812 | 0.0319 |
| 1.9231 | 1813 | 0.0215 |
| 1.9242 | 1814 | 0.0268 |
| 1.9252 | 1815 | 0.0137 |
| 1.9263 | 1816 | 0.044 |
| 1.9273 | 1817 | 0.0275 |
| 1.9284 | 1818 | 0.0438 |
| 1.9295 | 1819 | 0.0167 |
| 1.9305 | 1820 | 0.0095 |
| 1.9316 | 1821 | 0.0252 |
| 1.9326 | 1822 | 0.031 |
| 1.9337 | 1823 | 0.0268 |
| 1.9348 | 1824 | 0.0158 |
| 1.9358 | 1825 | 0.0199 |
| 1.9369 | 1826 | 0.0409 |
| 1.9379 | 1827 | 0.0299 |
| 1.9390 | 1828 | 0.0541 |
| 1.9401 | 1829 | 0.0282 |
| 1.9411 | 1830 | 0.0522 |
| 1.9422 | 1831 | 0.0464 |
| 1.9433 | 1832 | 0.028 |
| 1.9443 | 1833 | 0.0429 |
| 1.9454 | 1834 | 0.0542 |
| 1.9464 | 1835 | 0.036 |
| 1.9475 | 1836 | 0.0351 |
| 1.9486 | 1837 | 0.0264 |
| 1.9496 | 1838 | 0.0387 |
| 1.9507 | 1839 | 0.0247 |
| 1.9517 | 1840 | 0.03 |
| 1.9528 | 1841 | 0.0265 |
| 1.9539 | 1842 | 0.0267 |
| 1.9549 | 1843 | 0.0494 |
| 1.9560 | 1844 | 0.0451 |
| 1.9570 | 1845 | 0.0289 |
| 1.9581 | 1846 | 0.0272 |
| 1.9592 | 1847 | 0.0333 |
| 1.9602 | 1848 | 0.0371 |
| 1.9613 | 1849 | 0.0303 |
| 1.9623 | 1850 | 0.033 |
| 1.9634 | 1851 | 0.0324 |
| 1.9645 | 1852 | 0.0368 |
| 1.9655 | 1853 | 0.0471 |
| 1.9666 | 1854 | 0.0193 |
| 1.9676 | 1855 | 0.0202 |
| 1.9687 | 1856 | 0.043 |
| 1.9698 | 1857 | 0.0127 |
| 1.9708 | 1858 | 0.016 |
| 1.9719 | 1859 | 0.0326 |
| 1.9730 | 1860 | 0.0251 |
| 1.9740 | 1861 | 0.0227 |
| 1.9751 | 1862 | 0.018 |
| 1.9761 | 1863 | 0.0296 |
| 1.9772 | 1864 | 0.0143 |
| 1.9783 | 1865 | 0.0211 |
| 1.9793 | 1866 | 0.0125 |
| 1.9804 | 1867 | 0.0529 |
| 1.9814 | 1868 | 0.0175 |
| 1.9825 | 1869 | 0.0462 |
| 1.9836 | 1870 | 0.0299 |
| 1.9846 | 1871 | 0.0304 |
| 1.9857 | 1872 | 0.0152 |
| 1.9867 | 1873 | 0.0201 |
| 1.9878 | 1874 | 0.0299 |
| 1.9889 | 1875 | 0.0328 |
| 1.9899 | 1876 | 0.0265 |
| 1.9910 | 1877 | 0.0121 |
| 1.9920 | 1878 | 0.0171 |
| 1.9931 | 1879 | 0.0193 |
| 1.9942 | 1880 | 0.0113 |
| 1.9952 | 1881 | 0.017 |
| 1.9963 | 1882 | 0.0107 |
| 1.9973 | 1883 | 0.0135 |
| 1.9984 | 1884 | 0.0152 |
| 1.9995 | 1885 | 0.0194 |
| 2.0005 | 1886 | 0.0775 |
| 2.0016 | 1887 | 0.1686 |
| 2.0027 | 1888 | 0.2107 |
| 2.0037 | 1889 | 0.1864 |
| 2.0048 | 1890 | 0.265 |
| 2.0058 | 1891 | 0.2213 |
| 2.0069 | 1892 | 0.2837 |
| 2.0080 | 1893 | 0.2715 |
| 2.0090 | 1894 | 0.1485 |
| 2.0101 | 1895 | 0.0936 |
| 2.0111 | 1896 | 0.1611 |
| 2.0122 | 1897 | 0.1243 |
| 2.0133 | 1898 | 0.1413 |
| 2.0143 | 1899 | 0.1194 |
| 2.0154 | 1900 | 0.0844 |
| 2.0164 | 1901 | 0.0549 |
| 2.0175 | 1902 | 0.1664 |
| 2.0186 | 1903 | 0.1879 |
| 2.0196 | 1904 | 0.154 |
| 2.0207 | 1905 | 0.1536 |
| 2.0217 | 1906 | 0.173 |
| 2.0228 | 1907 | 0.1824 |
| 2.0239 | 1908 | 0.16 |
| 2.0249 | 1909 | 0.2089 |
| 2.0260 | 1910 | 0.1865 |
| 2.0270 | 1911 | 0.2149 |
| 2.0281 | 1912 | 0.1566 |
| 2.0292 | 1913 | 0.2483 |
| 2.0302 | 1914 | 0.201 |
| 2.0313 | 1915 | 0.2402 |
| 2.0324 | 1916 | 0.1943 |
| 2.0334 | 1917 | 0.1775 |
| 2.0345 | 1918 | 0.2215 |
| 2.0355 | 1919 | 0.1565 |
| 2.0366 | 1920 | 0.1824 |
| 2.0377 | 1921 | 0.1304 |
| 2.0387 | 1922 | 0.126 |
| 2.0398 | 1923 | 0.1521 |
| 2.0408 | 1924 | 0.1655 |
| 2.0419 | 1925 | 0.1336 |
| 2.0430 | 1926 | 0.1166 |
| 2.0440 | 1927 | 0.0995 |
| 2.0451 | 1928 | 0.0714 |
| 2.0461 | 1929 | 0.0775 |
| 2.0472 | 1930 | 0.0699 |
| 2.0483 | 1931 | 0.0612 |
| 2.0493 | 1932 | 0.042 |
| 2.0504 | 1933 | 0.0273 |
| 2.0514 | 1934 | 0.015 |
| 2.0525 | 1935 | 0.0588 |
| 2.0536 | 1936 | 0.071 |
| 2.0546 | 1937 | 0.0872 |
| 2.0557 | 1938 | 0.0518 |
| 2.0567 | 1939 | 0.0723 |
| 2.0578 | 1940 | 0.0364 |
| 2.0589 | 1941 | 0.0191 |
| 2.0599 | 1942 | 0.015 |
| 2.0610 | 1943 | 0.0322 |
| 2.0621 | 1944 | 0.0112 |
| 2.0631 | 1945 | 0.0077 |
| 2.0642 | 1946 | 0.0108 |
| 2.0652 | 1947 | 0.0444 |
| 2.0663 | 1948 | 0.0535 |
| 2.0674 | 1949 | 0.113 |
| 2.0684 | 1950 | 0.0623 |
| 2.0695 | 1951 | 0.0874 |
| 2.0705 | 1952 | 0.0531 |
| 2.0716 | 1953 | 0.1529 |
| 2.0727 | 1954 | 0.1247 |
| 2.0737 | 1955 | 0.1482 |
| 2.0748 | 1956 | 0.1586 |
| 2.0758 | 1957 | 0.156 |
| 2.0769 | 1958 | 0.1334 |
| 2.0780 | 1959 | 0.1719 |
| 2.0790 | 1960 | 0.0883 |
| 2.0801 | 1961 | 0.186 |
| 2.0811 | 1962 | 0.0901 |
| 2.0822 | 1963 | 0.0633 |
| 2.0833 | 1964 | 0.0713 |
| 2.0843 | 1965 | 0.0694 |
| 2.0854 | 1966 | 0.1311 |
| 2.0864 | 1967 | 0.1021 |
| 2.0875 | 1968 | 0.1638 |
| 2.0886 | 1969 | 0.1645 |
| 2.0896 | 1970 | 0.123 |
| 2.0907 | 1971 | 0.0946 |
| 2.0918 | 1972 | 0.0424 |
| 2.0928 | 1973 | 0.063 |
| 2.0939 | 1974 | 0.0634 |
| 2.0949 | 1975 | 0.0877 |
| 2.0960 | 1976 | 0.0795 |
| 2.0971 | 1977 | 0.0851 |
| 2.0981 | 1978 | 0.1073 |
| 2.0992 | 1979 | 0.0732 |
| 2.1002 | 1980 | 0.0904 |
| 2.1013 | 1981 | 0.0354 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
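For quick reference, a minimal usage sketch against the versions pinned above is shown below. The checkpoint path is a placeholder (this card does not state where the fine-tuned model is published), and the example query and passages are taken from the widget samples in the metadata:
```python
# Minimal sketch, assuming the checkpoint is available locally or on the Hub.
# "path/to/this-checkpoint" is a placeholder -- substitute the real repo id.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("path/to/this-checkpoint")

query = "언제 청년층 생산가능인구 수가 9,149,000명으로 드러났어"
passages = [
    "제2장 청년층 노동시장 동향 및 청년내일채움공제의 성과 개괄 ...",
    "2. 인구구조변화의 현황 ...",
]

# encode() produces dense embeddings; similarity() (Sentence Transformers >= 3.0)
# returns a cosine-similarity matrix between the two sets of embeddings.
query_emb = model.encode([query])
passage_embs = model.encode(passages)
print(model.similarity(query_emb, passage_embs))  # higher score = more relevant
```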
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-m3", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:482743", "loss:CachedGISTEmbedLoss"], "widget": [{"source_sentence": "언제 청년층 생산가능인구 수가 9,149,000명으로 드러났어", "sentences": ["제2장 청년층 노동시장 동향 및 청년내일채움공제의 성과 개괄\n□ 본 장에서는 청년내일채움공제 사업의 도입 배경으로 작용한 청년층 노동시장의 현황을 살펴보고, 세부 성과에 앞서 청년내일채움공제가 현 청년고용 실태에 미친 개괄적 성과를 평가하고자 한다\n제1절 청년층 노동시장 동향\n1. 청년층 인구구조 변화 추이\n□ 고용률과 실업률, 취업자 수 등의 고용지표는 경제 및 노동시장의 상황 뿐 만 아니라, 인구구조 변화에 의해 영향을 받는 만큼, 청년층의 노동시장 동향을 파악하기 위해서는 청년층 인구구조의 변화양상을 이해할 필요가 있음 □ 청년층 인구구조 변화 추이\n○ 전체 생산가능인구가 여전히 증가추세에 있는 것에 비해, 청년층(15∼29세) 생산가능인구는 1990년을 기점으로 감소추세에 있음. 2018년 청년층 생산가능인구는 전년대비 133천명 감소한 9,149천명임 - 15∼19세, 20∼24세 연령대의 생산가능인구는 감소추세에 있으나, 25∼29세 생산가능인구(3,432천명, 전년대비 96천명 증가)는 증가추세에 있으며 향후 2∼3년간 증가추세가 지속될 것으로 전망됨\n○ 청년층 경제활동인구는 2000년대 지속적으로 감소하다가 2013∼2016년 기간 증가추세를 보였으나, 2017년 이후 감소하고 있음. 2018년 청년층 경제활동인구는 전년대비 21천명 감소한 4,312천명을 기록하였음", "2. 인구구조변화의 현황<br>UN 인구국 자료를 분석해보면 2010년 인구와 2050년의 중간 수준 예측치를 기준으로 인구 1,000만 명 이상의 국가이면서 2050년까지 생산가능인구(15세~64세)가 감소하는 국가는 198개국중에서 19개국이다. 이 중에서 우리나라는 생산 가능인구(15세~64세) 감소율이 세계 6위 수준을 기록할 것으로 분석되었고, 이에 따라 생산가능 인구가 2010년 대비 2050년에는 27%가 줄어들 것으로 예상된다. 이에 반하여 노인인구는 236%가 증가할 것으로 예상된다. 우리나라는 생산가능인구 감소율과 노인인구(65세 이상) 증가율이 매우 높은 나라 중 하나이며, 동시에 유소년(0~14세) 인구의 감소도 22% 정도 될 것으로 예상하고 있다. <br>UN의 인구국(局) 자료인 [표 1]을 보면, 첫째 현재 우리나라를 포함하여 선진국에서조차도 현재 인구감소를 겪어본 국가가 거의 없어 이것이 가져다 줄 영향에 대한 경험적 지식이 거의 전무하다는 점이다. <br>둘째, 우리나라의 경우에는 무엇보다도 인구감소기로의 전환이 급격하게 이루어지고 있다. 따라서 우리 스스로 이의 긍정적ㆍ부정적 영향을 연역적으로 추론해볼 필요가 있다. 이를 위해서는 우리 사회의 전체 변화를 거시적으로 바라보고 정책 수단에 따라 가져다 줄 다양한 변화를 살펴볼 수 있는 시뮬레이션(simulation) 도구 및 정책 개발 그리고 집행 체제가 필요하다고 할 수 있다.", "7. 해외사례\n플랫폼 노동관계에 특화된 산재보험제도를 별도로 규정하고 있는 포괄적 입법례는 아직 보이지 않는다. 다만, 플랫폼 노동에 관한 미국의 앨라배마 주법(SB363, 2018.3.13. 제정)에서는 플랫폼 노동자들에게 일정한 수준의 복지를 허용하기 위한 법적 기초를 만들고 있다(부표 1 참조). SB363은 플랫폼 노동자들의 이동성을 고려한 ‘이동 가능 복지제도’(portable benefit plans)를 플랫폼 사업주들이 만들면, 이 복지제도를 통해 다수의 플랫폼과 계약을 맺는 플랫폼 노동자들이 복수 사업주 혹은 사업주의 변경에도 불구하고 복지제도를 유지하면서 그로부터 복지수혜를 얻을 수 있도록 하고 있다. 이때의 복지제도는 건강보험, 재해보험, 퇴직수당제도 등으로 구성될 수 있고, 플랫폼 노동자가 복지계좌를 개설하면 여기에 플랫폼 사업주가 복지기여금을 납부하는 방식을 취하고 있다. 국가에 의한 사회보험제도가 아니고, 플랫폼 사업주에 의한 자발적 복지제도의 기초를 구성한 것이기는 하지만, 지방정부가 이를 관리하도록 함으로써 공정성과 객관성을 담보한다. 한국의 산재보험제도에 대입시켜 비교할 수 있는 입법례가 아니기는 하지만, 플랫폼 사업주로 하여금 플랫폼 노동자에 대한 일정한 복지제도 형성을 촉구하고 있다는 점, 해당 복지제도에 대하여 플랫폼이 일정한 기여금을 납부하도록 하고 있다는 점 등에서 하나의 제도적 모델로 시사가능하다. 한편, 미국의 플로리다, 인디아나, 아이오와, 켄터키, 테네시, 텍사스, 유타주 등에서는 플랫폼노동자(경우에 따라서는 가사노동자에 국한)의 의미를 규정하면서, 주법상 재해보험제도 및(또는) 실업급여제도의 가입자가 될 수 있도록 하고 있다."]}, {"source_sentence": "도매 전력 매매를 주로 한 전력거래체계를 만들었을 때 어느 부문이 안정성 영향을 받을 수 있어", "sentences": ["(4) 외환시장\n원/달러 환율 2016년 말 대비 하락\n원/달러 환율은 연초 미 보호무역주의에 대한 우려, 수출 호조, 외국인 증권투자자금 유입 등으로 3월 말까지 상당 폭 하락하였다. 이후 북한 관련 리스크 전개 상황, 미 연준의 연내 금리인상 기대 변화 등에 따라 등락하는 모습을 보였다. 10월 들어 북한 리스크 완화, 미 달러화 약세 지속, 국내경제의 견실한 성장세 등으로 하락세로 전환하였으며, 2017년 말에는 연중 최저치인 1,070.5원을 기록하였다. 2017년 중 미 달러화에 대한 원화의 가치는 2016년 말 대비 12.8% 상승하였다. 한편 원/엔 환율(100엔당)은 미 연준 금리인상 기대변화, 북한 리스크 등에 영향을 받으며 등락을 지속하다가 10월 들어 원화가 상대적으로 강세를 보이면서 하락하였다. 2017년 말 원화의 엔화에 대한 가치는 100엔당 949.2원으로 2016년 말 대비 9.1% 상승하였다. 원/달러 환율의 일중 및 전일대비 변동폭은 연평균 각각 5.5원, 4.4원을 기록하여 2016년(각각 7.5원 및 6.0원)에 비해 축소되었다. 외환스왑레이트 하락, 통화스왑금리 상승\n외환스왑레이트(3개월물)는 외화자금수요 확대 등으로 2016년 말 대비 0.61% 포인트 하락한 -0.63%를 기록하였다. 통화스왑금리(3년물)는 경제지표 호조 등에 따른 국고채 금리 상승 등으로 2016년 말 대비 0.37% 포인트 상승한 1.57%를 기록하였다.", "4. 국내 전력거래 형태 전망\n앞서 살펴보았듯이, 구조개편을 단행한 외국의 전력거래 형태는 대체로 도매 전력거래 위주, 소매 전력거래 위주, 계약위주 등 세 가지로 구분되어 있다. \n국내 전력산업의 현재 여건을 고려할 경우 가장 가능성이 높은 전력거래 형태는 도매 전력거래 위주의 거래형태일 것이다. 만약 도매 전력거래 위주의 전력거래체계를 구축하는 경우 가격과 공급의 안정성에 상당한 영향을 미칠 것으로 보인다. 하지만 규제계약(Vesting Contract)을 포함하여 장단기 계약물량을 확대함으로써 현물시장의 가격 및 공급 변동성을 완화할 수 있는 대안이 존재한다. 만약 전력시장에서의 가격과 공급 안정성 측면을 염두에 둔다면 소매 전력거래 형태 또는 장단기 계약 위주의 전력거래 형태를 고려해 볼 수 있다. 소매 전력거래 형태에서는 주로 기존의 수직통합체제 또는 이미 발전과 판매가 송배전 부문으로부터 분리된 경우라면 발전과 판매의 통합과 이로 인한 규모의 확대에 따라 유효경쟁의 약화 가능성이 문제가 될 수 있다. 이렇게 통합된 기업의 소매시장에서의 시장지배력 문제가 존재하기 때문에 순수 판매회사 또는 신규기업과의 실질적인 경쟁이 가능한 방향으로 제도적 장치를 마련할 필요가 있다.", "Ⅲ. 우리나라 전력시장에서의 유연성 자원\n1. 우리나라 전력시장의 현황\n우리나라의 전력시장은 전력산업의 효율성 향상을 위하여 2001년 전력산업구조개편을 통해 독점사업자였던 한국전력공사로부터 발전부문을 분리시켜 발전자회사로 재편시키고, 발전부문에 대한 경쟁체제를 도입하면서 출범하였다. 우리나라 전력시장은 소비자가 직접 거래에 참여하는 소매시장이 아닌 발전회사와 판매회사가 참여하는 도매시장으로, 다수의 발전회사가 존재하는 공급측과는 달리 수요측은 단일 구매자인 한국전력공사만 존재하는 구매독점 형태로 이루어져 있다. 또한 설비용량이 20MW를 초과하는 상업용 발전기는 전력시장을 통해서만 거래가 가능한 의무적 시장이다."]}, {"source_sentence": "해외 파생상품시장에서 거래되는 경우는 어떤 파생상품으로 분류할 수 있어", "sentences": ["3) 연금 세제\n▩ 근퇴법 상 연금수령을 활성화하기 위한 강제조항이 부재한 상황에서 세제체계(소득세법)에 따라 수령방법이 결정되는 구조임. ○ 제도적으로 일시금 수령을 방지하거나 연금수령을 강제하는 규정이 부재한 상황에서 수령 방법은 주로 세금부담에 따라 결정될 수밖에 없음. ○ 일시금 수령을 경감시키기 위해 2016년 소득세법을 개정하였으나 개정된 소득세법 조차도 일시금과 연금수령 간 세금부담 차이가 크지 않아 일시금 수령을 방지하지 못하는 한계가 존재함. ▩ 연금과 일시금 수령 간 세제차이가 크지 않은 주요 원인은 일시금 수령시 제공하는 공제혜택이 크기 때문임. ○ 국내 퇴직연금시장에서 일시금 수령이 만연한 것은 연금수령 시 부담하는 세율이 높아서가 아니라 일시금 수령 시 부담하는 세율이 낮기 때문임. ○ 일시금 수령의 세금부담이 높지 않은 이유는 일시금 수령 시에도 다수의 공제혜택(근속연수공제+환산급여공제)이 제공되고 혜택의 규모도 크기 때문임. ○ 즉 연금수령 시에는 일시금 수령 시 부담하는 세금의 70%만을 부담하도록 규정하고(소득세법 제129조(원천징수세율)제1항제5의336)) 있으나, 일시금 수령 시 세금부담이 작기 때문에 연금수령 시 추가 제공되는 세제혜택의 크기가 미미하게 됨. ○ 특히 연금수령 기간에 따른 세금차이가 없어 연금으로 수령하더라도 단기간 연금(예, 10년)을 수령하는 유인을 제공함.", "□ 장외파생 권역별 대표 상품을 살펴보면 금리관련상품 중에는 금리스왑(Interest Rate Swap:IRS)이, 통화관련 상품 중에서는 통화선도계약이, 신용관련거래에서는 신용파산스왑(Credit Default Swap:CDS)이 가장 높은 비중을 차지<br>○ 금리스왑은 금리관련 장외파생상품시장의 91.2%를 차지하고 있으며, 신용관련상품의 경우 신용파산스왑과 총수익스왑(Total Return Swap:TRS)만이 거래<br>○ 통화선도는 대부분 NDF(Non-Deliverable Forward)의 형태로 거래<br>○ 장외파생상품 거래는 일대일 계약을 기반으로 하고 거래당사자의 높은 신용도를 요구하기 때문에 신용도가 낮은 개인 또는 금융기관은 참가하기가 어려운 실정<br>○ 특히, 신용관련거래는 다른 금융기관에 비해 상대적으로 높은 신용도를 가지고 있는 은행과 증권사를 통해서만 거래가 이루어지고 있는 것으로 파악", "제5조 (파생상품) ① 이 법에서 ˝파생상품˝이란 다음 각 호의 어느 하나에 해당하는 계약상의 권리를 말한다.\n1. 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된 금전등을 장래의 특정 시점에 인도할 것을 약정하는 계약\n2. 당사자 어느 한쪽의 의사표시에 의하여 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된 금전등을 수수하는 거래를 성립시킬 수 있는 권리를 부여하는 것을 약정하는 계약\n3. 장래의 일정기간 동안 미리 정한 가격으로 기초자산이나 기초자산의 가격ㆍ이자율ㆍ지표ㆍ단위 또는 이를 기초로 하는 지수 등에 의하여 산출된 금전등을 교환할 것을 약정하는 계약\n② 이 법에서 ˝장내파생상품˝이란 파생상품으로서 파생상품시장에서 거래되는 것 또는 해외 파생상품시장(파생상품시장과 유사한 시장으로서 해외에 있는 시장과 대통령령으로 정하는 해외 파생상품거래가 이루어지는 시장을 말한다)에서 거래되는 것을 말한다. ③ 이 법에서 ˝장외파생상품˝이란 파생상품으로서 장내파생상품이 아닌 것을 말한다. ④ 제1항 각 호의 어느 하나에 해당하는 계약 중 매매계약이 아닌 계약의 체결은 이 법을 적용함에 있어서 매매계약의 체결로 본다."]}, {"source_sentence": "어디서 자금세탁 전제범죄에서 비영리단체 관련 자금세탁범죄를 찾아내는 방식을 선택했어", "sentences": ["IV. 비영리단체 분야의 범죄유형 및 사례\n1. 국내사례와 범죄유형\n□ 한국은 아직 비영리단체에 대한 명확한 개념 구분이 정립되어 있지 않은 실정으로 이에 따라 그동안 한국에서는 비영리단체 관련범죄에 대한 인식이 제대로 이루어지지 않음. -비영리단체 관련 범죄에 있어서도 비영리단체에 대한 명확한 개념 구분 없이 판결문이 작성되어 비영리단체 관련 자금세탁범죄를 조사하는데 있어서 큰 장애로 작용함. -국내 판례에서 비영리단체와 관련하여 자금세탁이라는 용어를 직접 사용한 경우는 없으며 이에 따라 부득이 자금세탁 전제범죄를 통하여 비영리단체 관련 자금세탁범죄를 조사하는 방법을 택함.", "VI. 비영리단체를 통한 테러자금조달방지를 위한 제도개선방안\n□ FATF는 2001년 10월 테러자금조달에 대한 특별권고 8항(Special Recommendationson Terrorist Financing)을 통하여 비영리단체에 대한 관리감독을 강화하고 관련 법규를 정비할 것을 권고하였음. -우리나라에서 비영리단체를 통한 자금세탁을 이용해서 테러자금이 조달될 가능성은 매우 낮은 것으로 평가되나 자금세탁의 우회로를 차단하고, 또한 예방적인 조치를 취해 국제적으로 자금세탁을 통한 불법적인 테러자금조달 방지 노력에 적극 부응하기 위해서는 비영리단체에 대한 자금세탁 방지제도 도입에 적극 참여해야 할 것임.", "(2) 전북의 풍력산업 약점\n전북의 풍력산업은 내부환경관점에서 다음과 같은 약점을 보유하고 있다. \n첫째, 두산(창원), 유니슨(사천), 효성(창원), 한진(양산) 등 풍력터빈시스템업체는 모두 경남권에 위치하고 있으며, 현재 경남 이외 지역에서는 MW급 이상 풍력용 터빈시스템업체가 존재하지 않는다. 다만 최근 터빈의 대형화에 따라, 터빈의 무게 및 부피 등에 따른 운송에 한계가 존재하고 상당기간 고정식 형태의 해상풍력발전단지 구축이 중심이 될 것으로 판단됨에 따라, 부유식 해상풍력단지개발이 시작되기 전까지는 GW급 해상풍력단지 개발계획에 따라 서해안권으로 시스템업체의 이전가능성은 일부 존재한다. 둘째, 해상풍력 관련 연구/생산인력 부족 역시 장기적 해상풍력기술의 지속성 관점에서 시급한 문제이다. 현재 전북은 해상풍력터빈과 관련된 기술은 없으나, 휴먼컴퍼지트와 재료연구원 풍력핵심기술연구센터를 통해 해상풍력 블레이드 및 재료 등에 대한 기술이 축적되어 있다. 이외에 하부구조물 및 타워 관련 기술 역시 에드벡트가 석션버켓(suction bucket)방식의 하부구조물 관련 기술을 보유하고 있고 현재 군산공장을 리모델링 중인 CS윈드는 세계 최고수준의 타워제작기술을 확보하고 있다. 따라서 전북은 블레이드, 재료, 타워 및 하부구조물 등과 관련된 기술을 확보하고 있다."]}, {"source_sentence": "전남지역의 석유와 화학제품은 왜 수출이 늘어나는 경향을 보였어", "sentences": ["수출 증가세 지속\n1/4분기 중 수출은 전년동기대비 증가흐름을 지속하였다. 품목별로 보면 석유제품, 석유화학, 철강, 선박, 반도체, 자동차 등 대다수 품목에서 증가하였다. 석유제품은 글로벌 경기회복에 따른 에너지 수요 증가와 국제유가 급등으로 수출단가가 높은 상승세를 지속하면서 증가하였다. 석유화학도 중국, 아세안을 중심으로 합성수지, 고무 등의 수출이 큰 폭 증가한 데다 고유가로 인한 수출가격도 동반 상승하면서 증가세를 이어갔다. 철강은 건설, 조선 등 글로벌 전방산업의 수요 증대, 원자재가격 상승 및 중국 감산 등에 따른 수출단가 상승 등에 힘입어 증가세를 이어갔다. 선박은 1/4분기 중 인도물량이 확대됨에 따라 증가하였다. 반도체는 자동차 등 전방산업의 견조한 수요가 이어지는 가운데 전년동기대비로 높은 단가가 지속되면서 증가하였다. 자동차는 차량용 반도체 수급차질이 지속되었음에도 불구하고 글로벌 경기회복 흐름에 따라 수요가 늘어나면서 전년동기대비 소폭 증가하였다. 모니터링 결과 향후 수출은 증가세가 지속될 것으로 전망되었다. 석유화학 및 석유정제는 수출단가 상승과 전방산업의 수요확대 기조가 이어지면서 증가할 전망이다. 철강은 주요국 경기회복과 중국, 인도 등의 인프라 투자 확대 등으로 양호한 흐름을 이어갈 전망이다. 반도체는 글로벌 스마트폰 수요 회복, 디지털 전환 기조 등으로 견조한 증가세를 지속할 것으로 보인다. 자동차는 차량용 반도체 공급차질이 점차 완화되고 미국, 신흥시장을 중심으로 수요회복이 본격화됨에 따라 소폭 증가할 전망이다. 선박은 친환경 선박수요 지속, 글로별 교역 신장 등에도 불구하고 2021년 2/4분기 집중되었던 인도물량의 기저효과로 인해 감소할 것으로 보인다.", "(3) 금융기관 여수신\n은행 수신 큰 폭 확대\n은행 수신은 2019년에 비해 증가폭이 크게 확대되었다. 수시입출식예금은 불확실성 증대에 따른 가계 및 기업의 예비자금 확보 등의 영향으로 증가 규모가 전년대비 3배가량 확대되었다. 반면 정기예금은 예금금리 하락, 예대율 및 LCR 규제 완화에 따른 은행의 정기예금 유치 유인 축소 등에 기인하여 감소로 전환하였다. 자산운용사 수신은 증가폭이 축소되었다. MMF는 꾸준한 증가세를 유지하였으나 주식형 및 채권형 펀드는 개인투자자의 주식 직접투자증가, 신용증권에 대한 시장 경계감 확대 등으로 감소로 전환하였다. 또한 기타펀드는 2019년 중 일부 사모펀드 손실 및 환매중단 사태, 사모펀드 일반투자자 요건 강화 등으로 증가 규모가 절반 수준으로 축소되었다. 한편 신용협동기구 등 비은행예금취급기관 수신은 대체로 2019년과 비슷한 증가 규모를 나타내었다.", "(2) 전남지역\n2013년중 전남지역 수출은 전년대비 1.2% 감소로 전환하였다. 품목별로는 석유(+9.3% → +3.8%) 및 화학제품(+1.2% → +7.1%)이 중국 등 해외수요확대로 증가세를 지속하였으나 철강금속(+1.8% → -8.6%)은 글로벌 공급과잉 및 중국의 저가 철강수출 확대로, 선박(+7.6% → -49.2%)은 수주물량이 급격히 줄어들면서 감소로 전환하였다. 전남지역 수입은 원유, 화학제품, 철강금속 등의 수입이 줄면서 전년대비 7.4% 감소로 전환하였다."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,195 |
dlaiymani/bert-base-ner-coreml
|
dlaiymani
| null |
[
"coreml",
"en",
"license:mit",
"region:us"
] | 2023-08-21T11:56:54Z |
2023-08-21T13:10:07+00:00
| 1 | 0 |
---
language:
- en
license: mit
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
**bert-base-ner-coreml** is a Core ML version of the [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) model for Named Entity Recognition (NER).
The conversion was performed with the [transformers-to-coreml](https://huggingface.co/spaces/coreml-projects/transformers-to-coreml) conversion Space.
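As a rough sketch, the converted model can be inspected from Python with `coremltools`; the package filename below is an assumption (check this repository's file listing for the actual artifact name):
```python
# Minimal sketch, assuming the conversion produced an .mlpackage artifact.
# The filename is hypothetical -- use the actual file shipped in this repo.
import coremltools as ct

model = ct.models.MLModel("bert-base-ner-coreml.mlpackage")

# get_spec() exposes the converted model's real input/output signature,
# which is the authoritative place to look before running predictions.
print(model.get_spec().description)
```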
| null |
Non_BioNLP
|
### Model Description
<!-- Provide a longer summary of what this model is. -->
**bert-base-ner-coreml** is a Core ML version of the [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) model for Named Entity Recognition (NER).
The conversion was performed with the [transformers-to-coreml](https://huggingface.co/spaces/coreml-projects/transformers-to-coreml) conversion Space.
|
{"language": ["en"], "license": "mit"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 42,196 |
ggml-org/gemma-3-27b-it-GGUF
|
ggml-org
| null |
[
"gguf",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-03-12T06:49:48Z |
2025-03-12T10:31:32+00:00
| 10,078 | 17 |
---
base_model:
- google/gemma-3-27b-it
license: gemma
---
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops, or your own cloud infrastructure,
democratizing access to state-of-the-art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
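Because this repository hosts GGUF conversions of the instruction-tuned 27B model, a minimal text-only sketch with `llama-cpp-python` is shown below. The quantization filename pattern is an assumption (match it against the files actually published in this repo), and image input is not covered here since it requires a multimodal-capable runtime:
```python
# Minimal text-only sketch using llama-cpp-python. The filename glob is an
# assumption -- pick a quantization that actually exists in this repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ggml-org/gemma-3-27b-it-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant; check the repo's file list
    n_ctx=8192,               # well below the model's 128K maximum context
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Gemma 3 in one sentence."}]
)
print(response["choices"][0]["message"]["content"])
```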
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model with
12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with
2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they
included only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; their input data pre-processing is
described and the posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: Continuous monitoring (using evaluation
metrics, human review) and the exploration of de-biasing techniques are
encouraged during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development.
Using the benchmark evaluation metrics described in this document, these models
have been shown to outperform other, comparably sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
| null |
Non_BioNLP
|
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
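A minimal usage sketch follows; it assumes a recent `transformers` release (4.50 or later, with `accelerate` installed for `device_map`), access to the gated `google/gemma-3-27b-it` checkpoint, and a placeholder image URL — none of these specifics come from this card.
```python
# Minimal sketch, not official usage: assumes transformers >= 4.50 with
# Gemma 3 support and access to the gated google/gemma-3-27b-it checkpoint.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it",
    device_map="auto",  # requires accelerate
)

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder image URL; any RGB image works. Each image is
            # normalized to 896 x 896 and encoded to 256 tokens.
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```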
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model
with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model
with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
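The production filters are not public; as a rough illustration of what automated sensitive-data filtering can look like, here is a toy redaction pass (the patterns and placeholder tokens below are assumptions, not Gemma's actual pipeline):
```python
import re

# Toy illustration only: Gemma's production filters are not public.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Reach me at jane@example.com or +1 (555) 123-4567."))
# -> "Reach me at [EMAIL] or [PHONE]."
```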
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
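As a generic illustration of this single-controller style (not Gemma's actual training code), a single JAX process can define a jitted loss and gradient step and let the runtime map it onto whatever accelerators are attached:
```python
# Generic JAX sketch of the single-controller style; not Gemma's code.
import jax
import jax.numpy as jnp

@jax.jit
def loss_fn(w, x, y):
    # Mean-squared error of a linear model.
    return jnp.mean((x @ w - y) ** 2)

grad_fn = jax.jit(jax.grad(loss_fn))

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
w = jax.random.normal(k1, (8, 1))
x = jax.random.normal(k2, (32, 8))
y = jax.random.normal(k3, (32, 1))

for _ in range(100):
    w = w - 0.1 * grad_fn(w, x, y)  # one Python process drives the run
print(float(loss_fn(w, x, y)))
```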
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they
included only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny; their input data pre-processing is
described and the posterior evaluations are reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: Continuous monitoring (using evaluation
metrics, human review) and the exploration of de-biasing techniques are
encouraged during model training, fine-tuning, and other use cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development.
Using the benchmark evaluation metrics described in this document, these models
have been shown to outperform other, comparably sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
|
{"base_model": ["google/gemma-3-27b-it"], "license": "gemma"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 42,197 |
hopkins/eng-kor-wsample.47
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-04T22:43:02Z |
2023-07-04T23:01:00+00:00
| 12 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-kor-wsample.47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-wsample.47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9922
- Bleu: 6.8895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
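A minimal sketch reconstructing this setup from the hyperparameters above is shown below; the dataset and preprocessing used for this checkpoint are not documented in the card, so only the training arguments are reproduced:
```python
# Sketch only: rebuilds the training arguments listed above. The actual
# dataset for this checkpoint is undocumented.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="eng-kor-wsample.47",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision; requires a CUDA device
)
```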
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-wsample.47
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9922
- Bleu: 6.8895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-kor-wsample.47", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,198 |
AhmedSSoliman/DistilRoBERTa-Marian-Model-on-DJANGO
|
AhmedSSoliman
|
translation
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Machine Translation",
"Code Generation",
"Text Generation",
"translation",
"en",
"dataset:AhmedSSoliman/DJANGO",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-11T22:10:42Z |
2023-07-30T12:01:09+00:00
| 15 | 0 |
---
datasets:
- AhmedSSoliman/DJANGO
language:
- en
license: mit
metrics:
- bleu
- accuracy
pipeline_tag: translation
tags:
- Machine Translation
- Code Generation
- Text Generation
---
| null |
Non_BioNLP
|
{"datasets": ["AhmedSSoliman/DJANGO"], "language": ["en"], "license": "mit", "metrics": ["bleu", "accuracy"], "pipeline_tag": "translation", "tags": ["Machine Translation", "Code Generation", "Text Generation"]}
|
task
|
[
"TRANSLATION"
] | 42,199 |
|
YunzheLv/x-llama-zh-7b
|
YunzheLv
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"dataset:tatsu-lab/alpaca",
"dataset:news_commentary",
"arxiv:2308.04948",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-09T10:47:08Z |
2024-01-13T09:12:07+00:00
| 8 | 0 |
---
datasets:
- tatsu-lab/alpaca
- news_commentary
language:
- zh
- en
license: apache-2.0
metrics:
- bleu
- bleurt
- comet
pipeline_tag: text-generation
---
# Extrapolating Large Language Models to Non-English by Aligning Languages
This repository contains the code implementation for the project that aims to empower pre-trained Large Language Models (LLMs) on non-English languages by building semantic alignment across languages. The project explores cross-lingual instruction-tuning and multilingual instruction-tuning techniques. The code implementation is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## Requirements and Installation
To install this repository, follow these steps:
```
git clone git@github.com:NJUNLP/x-LLM.git
cd x-LLM
pip install --editable ./
```
For detailed information about the conda environment, refer to the environment.yml file.
## Usage
### Download Pre-trained LLM
Start by downloading the pre-trained LLM into the ./model directory.
### Download Dataset
You can download all the datasets used in this project from this [link](https://drive.google.com/file/d/1bkejieKDJFDJ45UmQYiY4eeqpGBwj-r-/view?usp=drive_link). Once downloaded, place the datasets in the ./data directory. The datasets include:
* Training dataset
* Alpaca
* Wikimatrix
* Newscommentary
* Evaluation dataset
* XQUAD
* MLQA
* Flores-101
* MI-Eval
### Load Raw Data Along with Instruction
You can load raw data along with instruction using the provided scripts (`./data/<dataset>/<dataset>.py`). If you want to use a new dataset, you need to implement the corresponding script. The loaded data will have the following structure:
``` python
datasets.Features(
{
"id": datasets.Value("string"),
"instruction": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string")
}
)
```
## Instruction-tune Pre-trained LLM
To instruction-tune the pre-trained LLM, run the train.sh script. For example, you can instruction-tune LLaMA-7B to x-LLaMA-7B (Chinese) with the following command:
``` bash
bash script/train.sh llama-7b-hf alpaca_en+alpaca_zh+translation_ncwm_en-zh
```
In this command, the first argument denotes the pre-trained LLM to use, and the second argument represents the training data to use. You can use `+` to concatenate multiple datasets, and the training data will be shuffled by the Huggingface Trainer.
Once the training is complete, the finetuned LLM will be saved in ./model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune. You can use aliases to define shorter names, and more details can be found in ./data/alias/alias.json.
## Test Finetuned LLM
To test the finetuned LLM, run the inference.sh script. For example, you can test the tuned LLM on the Flores dataset with the following command:
``` bash
bash script/inference.sh llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune translation_flores_en-zh
```
The output results will be saved in model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl. The prediction field represents the generated content of the LLM.
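A minimal sketch for inspecting these results (the `prediction` field name and the output path come from the description above):
```python
import json

path = (
    "model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune"
    "/test/translation_flores_en-zh.inference.jsonl"
)
with open(path) as f:
    for line in f:
        record = json.loads(line)
        print(record["prediction"])  # generated content of the LLM
```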
## Interact with LLM Through Web UI
To interact with the LLM through a web UI, run app.py with the following command:
``` bash
python app.py model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune
```
## Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{zhu2023extrapolating,
title={Extrapolating Large Language Models to Non-English by Aligning Languages},
author={Wenhao Zhu and Yunzhe Lv and Qingxiu Dong and Fei Yuan and Jingjing Xu and Shujian Huang and Lingpeng Kong and Jiajun Chen and Lei Li},
year={2023},
eprint={2308.04948},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
# Extrapolating Large Language Models to Non-English by Aligning Languages
This repository contains the code implementation for the project that aims to empower pre-trained Large Language Models (LLMs) on non-English languages by building semantic alignment across languages. The project explores cross-lingual instruction-tuning and multilingual instruction-tuning techniques. The code implementation is based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).

## Requirements and Installation
To install this repository, follow these steps:
```
git clone git@github.com:NJUNLP/x-LLM.git
cd x-LLM
pip install --editable ./
```
For detailed information about the conda environment, refer to the environment.yml file.
## Usage
### Download Pre-trained LLM
Start by downloading the pre-trained LLM into the ./model directory.
### Download Dataset
You can download all the datasets used in this project from this [link](https://drive.google.com/file/d/1bkejieKDJFDJ45UmQYiY4eeqpGBwj-r-/view?usp=drive_link). Once downloaded, place the datasets in the ./data directory. The datasets include:
* Training dataset
* Alpaca
* Wikimatrix
* Newscommentary
* Evaluation dataset
* XQUAD
* MLQA
* Flores-101
* MI-Eval
### Load Raw Data Along with Instruction
You can load raw data along with instruction using the provided scripts (`./data/<dataset>/<dataset>.py`). If you want to use a new dataset, you need to implement the corresponding script. The loaded data will have the following structure:
``` python
datasets.Features(
{
"id": datasets.Value("string"),
"instruction": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string")
}
)
```
## Instruction-tune Pre-trained LLM
To instruction-tune the pre-trained LLM, run the train.sh script. For example, you can instruction-tune LLaMA-7B to x-LLaMA-7B (Chinese) with the following command:
``` bash
bash script/train.sh llama-7b-hf alpaca_en+alpaca_zh+translation_ncwm_en-zh
```
In this command, the first argument denotes the pre-trained LLM to use, and the second argument represents the training data to use. You can use `+` to concatenate multiple datasets, and the training data will be shuffled by the Huggingface Trainer.
Once the training is complete, the finetuned LLM will be saved in ./model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune. You can use aliases to define shorter names, and more details can be found in ./data/alias/alias.json.
## Test Finetuned LLM
To test the finetuned LLM, run the inference.sh script. For example, you can test the tuned LLM on the Flores dataset with the following command:
``` bash
bash script/inference.sh llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune translation_flores_en-zh
```
The output results will be saved in model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune/test/translation_flores_en-zh.inference.jsonl. The prediction field represents the generated content of the LLM.
## Interact with LLM Through Web UI
To interact with the LLM through a web UI, run app.py with the following command:
``` bash
python app.py model/llama-7b-hf.alpaca_en+alpaca_zh+translation_ncwm_en-zh.finetune
```
## Citation
If you find this repository helpful, please consider citing our paper:
```
@misc{zhu2023extrapolating,
title={Extrapolating Large Language Models to Non-English by Aligning Languages},
author={Wenhao Zhu and Yunzhe Lv and Qingxiu Dong and Fei Yuan and Jingjing Xu and Shujian Huang and Lingpeng Kong and Jiajun Chen and Lei Li},
year={2023},
eprint={2308.04948},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"datasets": ["tatsu-lab/alpaca", "news_commentary"], "language": ["zh", "en"], "license": "apache-2.0", "metrics": ["bleu", "bleurt", "comet"], "pipeline_tag": "text-generation"}
|
task
|
[
"TRANSLATION"
] | 42,200 |
HPLT/hplt_bert_base_mk
|
HPLT
|
fill-mask
|
[
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"mk",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 2024-04-22T01:27:32Z |
2024-11-24T19:13:11+00:00
| 9 | 0 |
---
datasets:
- HPLT/hplt_monolingual_v1_2
language:
- mk
license: apache-2.0
tags:
- BERT
- HPLT
- encoder
inference: false
---
# HPLT Bert for Macedonian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
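For orientation, these sizes map onto a standard BERT-base-style configuration roughly as follows (illustrative only: LTG-BERT's actual architecture differs from vanilla BERT, and the real configuration ships with each checkpoint):
```python
# Illustrative only; LTG-BERT is not vanilla BERT.
from transformers import BertConfig

config = BertConfig(
    hidden_size=768,
    num_attention_heads=12,
    num_hidden_layers=12,
    vocab_size=32768,
)
print(config)
```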
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mk")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_mk", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_mk", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_mk")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
| null |
Non_BioNLP
|
# HPLT Bert for Macedonian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_mk")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_mk", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_mk", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_mk")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
{"datasets": ["HPLT/hplt_monolingual_v1_2"], "language": ["mk"], "license": "apache-2.0", "tags": ["BERT", "HPLT", "encoder"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 42,201 |
mrapacz/interlinear-pl-mt5-base-emb-sum-diacritics-bh
|
mrapacz
|
text2text-generation
|
[
"transformers",
"pytorch",
"morph-t5-sum",
"text2text-generation",
"pl",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-08T12:26:45Z |
2025-02-21T21:30:43+00:00
| 14 | 0 |
---
base_model:
- mT5-base
datasets:
- mrapacz/greek-interlinear-translations
language:
- pl
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to Polish Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MorphT5SumForConditionalGeneration
- **Base Model:** mT5-base
- **Tokenizer:** mT5
- **Language(s):** Ancient Greek (source) → Polish (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** BH (Bible Hub)
- **Text Preprocessing:** Diacritics
- **Morphological Encoding:** emb-sum
### Model Performance
- **BLEU Score:** 52.54
- **SemScore:** 0.92
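As an illustration of how a corpus-level BLEU score like the one above can be computed (the strings below are made up, and the paper's exact evaluation setup may differ):
```python
# Illustrative only: made-up hypothesis/reference pair.
from sacrebleu import corpus_bleu

hyps = ["Mówi mu - Jezus wstań weź - matę twoją i chodź"]
refs = [["Mówi mu - Jezus wstań podnieś - matę twoją i chodź"]]
print(corpus_bleu(hyps, refs).score)
```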
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
> **Note**: This model uses a modification of T5-family models that includes dedicated embedding layers for encoding morphological information. To load these models, install the [morpht5](https://github.com/mrapacz/loreslm-interlinear-translation/blob/master/morpht5/README.md) package:
> ```bash
> pip install morpht5
> ```
```python
>>> from morpht5 import MorphT5SumForConditionalGeneration, MorphT5Tokenizer
>>> text = ['Λέγει', 'αὐτῷ', 'ὁ', 'Ἰησοῦς', 'Ἔγειρε', 'ἆρον', 'τὸν', 'κράβαττόν', 'σου', 'καὶ', 'περιπάτει']
>>> tags = ['V-PIA-3S', 'PPro-DM3S', 'Art-NMS', 'N-NMS', 'V-PMA-2S', 'V-AMA-2S', 'Art-AMS', 'N-AMS', 'PPro-G2S', 'Conj', 'V-PMA-2S']
>>> tokenizer = MorphT5Tokenizer.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-diacritics-bh")
>>> inputs = tokenizer(
...     text=text,
...     morph_tags=tags,
...     return_tensors="pt"
... )
>>> model = MorphT5SumForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-mt5-base-emb-sum-diacritics-bh")
>>> outputs = model.generate(
...     **inputs,
...     max_new_tokens=100,
...     early_stopping=True,
... )
>>> decoded = tokenizer.decode(outputs[0], skip_special_tokens=True, keep_block_separator=True)
>>> decoded = decoded.replace(tokenizer.target_block_separator_token, " | ")
>>> decoded
'Mówi | mu | - | Jezus | chodź | wyrzuć | - | matę | twoją | i | chodź'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
```
| null |
Non_BioNLP
|
|
{"base_model": ["mT5-base"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["pl"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]}
|
task
|
[
"TRANSLATION"
] | 42,203 |
joeranbosma/dragon-bert-base-domain-specific
|
joeranbosma
|
fill-mask
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"doi:10.57967/hf/2167",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-03T09:29:17Z |
2025-02-07T09:31:30+00:00
| 16 | 0 |
---
license: cc-by-nc-sa-4.0
---
# DRAGON BERT base domain-specific
Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was pretrained using domain-specific data (i.e., clinical reports) from scratch. The architecture is the same as [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) from HuggingFace. The tokenizer was fitted to the dataset of Dutch medical reports, using the same settings for the tokenizer as [`roberta-base`](https://huggingface.co/FacebookAI/roberta-base).
## Model description
BERT is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way, using an automatic process to generate inputs and labels from those texts.
This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
## Model variations
Multiple architectures were pretrained for the DRAGON challenge.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch |
| [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch |
| [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch |
| [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch |
| [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch |
| [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch |
| [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch |
| [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch |
| [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch |
| [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch |
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline

# Rank likely fillers for the masked token in a Dutch radiology report:
# "This examination [shows] no signs of significant carcinoma. PIRADS <mask>."
unmasker = pipeline("fill-mask", model="joeranbosma/dragon-bert-base-domain-specific")
unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-domain-specific")
model = AutoModel.from_pretrained("joeranbosma/dragon-bert-base-domain-specific")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds the contextual token embeddings
```
## Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining.
## Training procedure
### Pretraining
The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py).
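To make the 80/10/10 rule above concrete, here is a minimal sketch of the masking step (our own illustration, closely following the logic of HuggingFace's `DataCollatorForLanguageModeling`; `run_mlm.py` remains the authoritative implementation):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-domain-specific")
batch = tokenizer(["Replace me by any text you'd like."], return_tensors="pt")
input_ids = batch["input_ids"].clone()
labels = input_ids.clone()

# Select 15% of the tokens (special-token handling is omitted for brevity).
selected = torch.bernoulli(torch.full(input_ids.shape, 0.15)).bool()
labels[~selected] = -100  # the MLM loss is only computed on the selected tokens

# 80% of the selected tokens are replaced by the mask token...
masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & selected
input_ids[masked] = tokenizer.mask_token_id

# ...10% are replaced by a random token (half of the remaining 20%)...
randomized = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & selected & ~masked
input_ids[randomized] = torch.randint(len(tokenizer), input_ids.shape)[randomized]
# ...and the final 10% are left unchanged.
```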
### Pretraining hyperparameters
The following hyperparameters were used during pretraining:
- `learning_rate`: 6e-4
- `train_batch_size`: 16
- `eval_batch_size`: 16
- `seed`: 42
- `gradient_accumulation_steps`: 16
- `total_train_batch_size`: 256
- `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08
- `lr_scheduler_type`: linear
- `num_epochs`: 10.0
- `max_seq_length`: 512
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
## Evaluation results
Pending evaluation on the DRAGON benchmark.
### BibTeX entry and citation info
```bibtex
@article{PENDING}
```
| null |
BioNLP
|
|
{"license": "cc-by-nc-sa-4.0"}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,204 |
Sociovestix/lenu_LI
|
Sociovestix
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-17T07:55:48Z |
2024-06-17T08:10:52+00:00
| 6 | 0 |
---
library_name: transformers
tags: []
widget:
- text: Kapalula Stiftung
- text: Hilti Glasbau AG
- text: KASSALA PRIVATE TRUSTEE ESTABLISHMENT
- text: Scarabaeus Active Fund
- text: Spectrum Alternative Assets SICAV - Spectrum Growth Fund
- text: The Gani (IOM) Trust
- text: FONDECTA Trust Reg.
- text: M&N Holding GmbH
- text: LGT Select Equity Enhanced Minimum Variance
- text: HEC GmbH & Co. KG
- text: Liechtenstein-Institut
- text: HECTOR PLAST SE
- text: Liechtensteiner Milchverband eingetragene Genossenschaft
- text: Markus Haas MHS Makler Büro
model-index:
- name: Sociovestix/lenu_LI
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: lenu
type: Sociovestix/lenu
config: LI
split: test
revision: f4d57b8d77a49ec5c62d899c9a213d23cd9f9428
metrics:
- type: f1
value: 0.9451453647833242
name: f1
- type: f1
value: 0.8044260987610308
name: f1 macro
args:
average: macro
---
# LENU - Legal Entity Name Understanding for Liechtenstein
A Bert (multilingual uncased) model fine-tuned on Liechtenstein entity names (jurisdiction LI) from the Global [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) System, with the goal of detecting [Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Model Description
The model has been created as part of a collaboration between the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com), with the goal of exploring how machine learning can support the detection of the ELF Code based solely on an entity's legal name and legal jurisdiction.
See also the open-source Python library [lenu](https://github.com/Sociovestix/lenu), which supports this task.
The model has been trained on the dataset [lenu](https://huggingface.co/datasets/Sociovestix), with a focus on Liechtenstein legal entities and ELF Codes within the jurisdiction "LI".
- **Developed by:** [GLEIF](https://gleif.org) and [Sociovestix Labs](https://huggingface.co/Sociovestix)
- **License:** Creative Commons (CC0) license
- **Finetuned from model:** bert-base-multilingual-uncased
- **Resources for more information:** [Press Release](https://www.gleif.org/en/newsroom/press-releases/machine-learning-new-open-source-tool-developed-by-gleif-and-sociovestix-labs-enables-organizations-everywhere-to-automatically-)
# Uses
An entity's legal form is a crucial component when verifying and screening organizational identity.
The wide variety of entity legal forms that exist within and between jurisdictions, however, has made it difficult for large organizations to capture legal form as structured data.
The jurisdiction-specific models of [lenu](https://github.com/Sociovestix/lenu), trained on entities from
GLEIF’s Legal Entity Identifier (LEI) database of over two million records, will allow banks,
investment firms, corporations, governments, and other large organizations to retrospectively analyze
their master data, extract the legal form from the unstructured text of the legal name and
uniformly apply an ELF code to each entity type, according to the ISO 20275 standard.
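For illustration, here is a minimal sketch of querying the model directly with the `transformers` pipeline (the [lenu](https://github.com/Sociovestix/lenu) library provides higher-level helpers; the entity name is taken from the widget examples above):
```python
from transformers import pipeline

# Jurisdiction-specific ELF code classifier for Liechtenstein (LI).
classifier = pipeline("text-classification", model="Sociovestix/lenu_LI")

# The predicted label is an ISO 20275 ELF code; low scores should be
# reviewed manually (see Recommendations below).
print(classifier("Hilti Glasbau AG"))
```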
# Licensing Information
This model, which is trained on LEI data, is available under Creative Commons (CC0) license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data).
# Recommendations
Users should always consider the score of the suggested ELF Codes. For low score values it may be necessary to manually review the affected entities.
| null |
Non_BioNLP
|
|
{"library_name": "transformers", "tags": [], "widget": [{"text": "Kapalula Stiftung"}, {"text": "Hilti Glasbau AG"}, {"text": "KASSALA PRIVATE TRUSTEE ESTABLISHMENT"}, {"text": "Scarabaeus Active Fund"}, {"text": "Spectrum Alternative Assets SICAV - Spectrum Growth Fund"}, {"text": "The Gani (IOM) Trust"}, {"text": "FONDECTA Trust Reg."}, {"text": "M&N Holding GmbH"}, {"text": "LGT Select Equity Enhanced Minimum Variance"}, {"text": "HEC GmbH & Co. KG"}, {"text": "Liechtenstein-Institut"}, {"text": "HECTOR PLAST SE"}, {"text": "Liechtensteiner Milchverband eingetragene Genossenschaft"}, {"text": "Markus Haas MHS Makler Büro"}], "model-index": [{"name": "Sociovestix/lenu_LI", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "lenu", "type": "Sociovestix/lenu", "config": "LI", "split": "test", "revision": "f4d57b8d77a49ec5c62d899c9a213d23cd9f9428"}, "metrics": [{"type": "f1", "value": 0.9451453647833242, "name": "f1"}, {"type": "f1", "value": 0.8044260987610308, "name": "f1 macro", "args": {"average": "macro"}}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,205 |
pere/nb-nn-translation
|
pere
|
translation
|
[
"transformers",
"pytorch",
"jax",
"translation",
"no",
"dataset:oscar",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-09-23T16:19:21+00:00
| 885 | 5 |
---
datasets:
- oscar
language:
- 'no'
license: cc-by-4.0
tags:
- translation
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---
# 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴
Norwegian has two relatively similar written languages: Bokmål and Nynorsk. Historically, Nynorsk is a written norm based on dialects curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish.
The two written languages are considered equal, and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been around for a long time, only 5-10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language.
Apart from some word-list-based engines, there are no working off-the-shelf machine-learning-based translation models. Translation between Bokmål and Nynorsk is not available in Google Translate.
## Demo
| Demo | Where |
|---|---|
| Widget | Try the widget in the top right corner |
| Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) |
## Pretraining a T5-base
There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately, only a very small part of this is Nynorsk: there is only around 1GB of Nynorsk text in mC4. Despite this, mT5 also gives a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type, i.e. books, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB.
## Finetuning - BLEU-SCORE 88.17 🎉
The central fine-tuning data of the project were 200k translation units (TUs), i.e. aligned pairs of sentences in the respective languages, extracted from textbooks of various subjects and from newspapers.
Training for 10 epochs with a learning rate of 7e-4, a batch size of 32 and a maximum source and target length of 512, fine-tuning reached a SacreBLEU score of 88.03 during training and a test score of **88.17** after training.
## This is not a translator
We found that we were able to get an almost identical BLEU score by training in both directions and letting the model decide whether the input is Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher; a short demonstration follows the usage example below.
## Future work
The following Google Docs Add-on is currently pending approval.

## How to use the model
```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-translation')
# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
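Because the model is a language switcher, the same pipeline also accepts Nynorsk input. A small illustration (the Nynorsk sentence is our own, added for demonstration):
```python
from transformers import pipeline

translator = pipeline("translation", model="pere/nb-nn-translation")

# Bokmål -> Nynorsk
print(translator("Hun vil ikke gi bort sine personlige data.", max_length=255))
# Nynorsk -> Bokmål
print(translator("Ho vil ikkje gi bort sine personlege data.", max_length=255))
```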
| null |
Non_BioNLP
|
|
{"datasets": ["oscar"], "language": false, "license": "cc-by-4.0", "tags": ["translation"], "widget": [{"text": "Skriv inn en tekst som du ønsker å oversette til en annen målform."}]}
|
task
|
[
"TRANSLATION"
] | 42,206 |
Finnish-NLP/ul2-tiny-nl6-finnish
|
Finnish-NLP
|
text2text-generation
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"ul2",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2205.05131",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2022-10-31T16:12:49Z |
2022-11-15T17:11:26+00:00
| 117 | 0 |
---
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
- ul2
inference: false
---
# UL2-tiny-nl6 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-tiny-nl6](https://huggingface.co/google/t5-efficient-tiny-nl6) architecture's layer depth which means both the encoder and the decoder have 6 transformer layers compared to the original T5 "tiny" model's architecture of 4 transformer layers.
In total, this model has 31 million parameters.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training denoising task. During the pretraining, a paradigm token is inserted to the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, most likely you can get better results if you insert a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The token `[NLG]` of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could potentially be used for language generation fine-tuning too. A sketch of inserting such a prefix token follows the usage examples below.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish", from_pt=True)
```
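As a minimal sketch of the mode-switching convention (our own illustration; the Finnish input sentence is made up, and a pretrain-only checkpoint will not generate meaningful text before fine-tuning):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")

# Prepend the paradigm token that matches the downstream task ([NLU] here).
input_ids = tokenizer("[NLU] Tämä on esimerkkilause.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```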
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" the Finnish in a text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as a filtering threshold to filter out the worst-quality 10% of texts; a sketch of this step follows. Together these cleaned datasets were around 76GB of text.
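A hypothetical sketch of that filtering step (the KenLM model path and the example texts are ours; the actual cleaning code lives in the dataset repository):
```python
import kenlm  # pip install kenlm

# Assumed: a KenLM model trained only on very clean Finnish text.
lm = kenlm.Model("clean_finnish.arpa")

texts = ["Hyvin kirjoitettua suomea.", "asdf qwer zxcv"]
scores = [lm.perplexity(t) for t in texts]

# Keep texts at or below the 90th-percentile perplexity,
# i.e. drop the worst-quality 10%.
threshold = sorted(scores)[int(0.9 * len(scores)) - 1]
kept = [t for t, s in zip(texts, scores) if s <= threshold]
```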
## Training procedure
### Preprocessing
The texts are tokenized using SentencePiece with a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lowercased, so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 512 (131B tokens in total). The optimizer used was AdaFactor with a learning-rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, but for the denoiser mixing rates, 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% for both).
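Illustratively (our own sketch, not the t5x task definition), sampling a denoiser with these rates could look like:
```python
import random

def sample_denoiser() -> str:
    """Sample a denoising task: 20% S, 40% R, 40% X."""
    r = random.random()
    if r < 0.20:
        return "S"  # sequential PrefixLM
    if r < 0.60:
        return "R"  # regular span corruption
    return "X"      # extreme span corruption
```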
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
| null |
Non_BioNLP
|
|
{"datasets": ["Finnish-NLP/mc4_fi_cleaned", "wikipedia"], "language": ["fi"], "license": "apache-2.0", "tags": ["finnish", "t5", "t5x", "seq2seq", "ul2"], "inference": false}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,207 |
qiaoqian/my_awesome_model
|
qiaoqian
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-13T08:18:25Z |
2023-03-13T08:41:15+00:00
| 10 | 0 |
---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.93084
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2367
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
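A minimal sketch reconstructing these settings with the standard `transformers` `TrainingArguments` API — illustrative only, not the authors' actual training script (Adam betas/epsilon match the Trainer defaults and are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)
```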
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2302 | 1.0 | 1563 | 0.1915 | 0.9289 |
| 0.1474 | 2.0 | 3126 | 0.2367 | 0.9308 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2367
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2302 | 1.0 | 1563 | 0.1915 | 0.9289 |
| 0.1474 | 2.0 | 3126 | 0.2367 | 0.9308 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93084, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,208 |
anas-awadalla/opt-125m-squad
|
anas-awadalla
|
text-generation
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-05-19T23:01:14Z |
2022-06-25T23:56:38+00:00
| 128 | 0 |
---
{}
---
A facebook/opt-125m model trained on SQUAD for extractive question answering.
To use the model, format the input in the following manner:
`"(Context Text)\nQuestion:(Question Text)\nAnswer:"`
| null |
Non_BioNLP
|
A facebook/opt-125m model trained on SQUAD for extractive question answering.
To use the model, format the input in the following manner:
`"(Context Text)\nQuestion:(Question Text)\nAnswer:"`
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,209 |
Ibrahim-Alam/finetuning-xlnet-base-cased-on-imdb
|
Ibrahim-Alam
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-26T18:21:35Z |
2023-06-28T23:49:30+00:00
| 28 | 0 |
---
datasets:
- imdb
license: mit
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-xlnet-base-cased-on-imdb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.95056
name: Accuracy
- type: f1
value: 0.9503813729425933
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-xlnet-base-cased-on-imdb
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1402
- Accuracy: 0.9506
- F1: 0.9504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
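A sketch of this configuration with the standard `transformers` `TrainingArguments` API — illustrative only, not the authors' actual script. The key detail is that the effective batch size of 64 comes from 8 (per-device) × 8 (gradient accumulation steps):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-xlnet-base-cased-on-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 * 8 = effective train batch size of 64
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
)
```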
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-xlnet-base-cased-on-imdb
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1402
- Accuracy: 0.9506
- F1: 0.9504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
{"datasets": ["imdb"], "license": "mit", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-xlnet-base-cased-on-imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "test", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.95056, "name": "Accuracy"}, {"type": "f1", "value": 0.9503813729425933, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,210 |
anhtuansh/gte-multilingual-base-Matryoshka-2e-9k
|
anhtuansh
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:8259",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-24T02:57:56Z |
2024-12-24T02:58:40+00:00
| 12 | 0 |
---
base_model: Alibaba-NLP/gte-multilingual-base
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8259
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'theo quy_định tại điều 35 nghị_định số 201 / 2013 / nđ - cp thì
thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , khai_thác nước
dưới đất như sau : 1 . tiếp_nhận và kiểm_tra hồ_sơ : trong thời_hạn mười ( 10
) ngày làm_việc , kể từ ngày nhận hồ_sơ , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm
xem_xét , kiểm_tra hồ_sơ . trường_hợp hồ_sơ không hợp_lệ , cơ_quan tiếp_nhận hồ_sơ
thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để bổ_sung , hoàn_thiện hồ_sơ
theo quy_định . trường_hợp hồ_sơ sau khi đã bổ_sung mà vẫn không đáp_ứng yêu_cầu
theo quy_định thì cơ_quan tiếp_nhận hồ_sơ trả lại hồ_sơ và thông_báo rõ lý_do
cho tổ_chức , cá_nhân đề_nghị cấp phép . 2 . thẩm_định_đề_án , báo_cáo thăm_dò
, khai_thác , sử_dụng tài_nguyên nước , xả nước_thải vào nguồn nước trong hồ_sơ
đề_nghị cấp phép ( sau đây gọi chung là đề_án , báo_cáo ) : a ) trong thời_hạn
ba_mươi ( 30 ) ngày làm_việc , kể từ ngày nhận đủ hồ_sơ hợp_lệ theo quy_định tại
khoản 1 điều này , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm thẩm_định_đề_án , báo_cáo
; nếu cần_thiết thì kiểm_tra thực_tế hiện_trường , lập hội_đồng thẩm_định_đề_án
, báo_cáo . trường_hợp đủ điều_kiện cấp phép , cơ_quan tiếp_nhận hồ_sơ trình cơ_quan
có thẩm_quyền cấp giấy_phép ; trường_hợp không đủ điều_kiện để cấp phép thì trả
lại hồ_sơ cho tổ_chức , cá_nhân đề_nghị cấp phép và thông_báo lý_do không cấp
phép ; b ) trường_hợp phải bổ_sung , chỉnh_sửa để hoàn_thiện đề_án , báo_cáo thì
cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp
phép nêu rõ những nội_dung cần bổ_sung , hoàn_thiện đề_án , báo_cáo . thời_gian
bổ_sung , hoàn_thiện hoặc lập lại đề_án , báo_cáo không tính vào thời_gian thẩm_định_đề_án
, báo_cáo . thời_gian thẩm_định sau khi đề_án , báo_cáo được bổ_sung hoàn_chỉnh
là hai mươi ( 20 ) ngày làm_việc ; c ) trường_hợp phải lập lại đề_án , báo_cáo
, cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị
cấp phép nêu rõ những nội_dung đề_án , báo_cáo chưa đạt yêu_cầu , phải làm lại
và trả lại hồ_sơ đề_nghị cấp phép . 3 . trả kết_quả giải_quyết hồ_sơ_cấp phéptrong
thời_hạn năm ( 05 ) ngày làm_việc , kể từ ngày nhận được giấy_phép của cơ_quan
có thẩm_quyền , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị
cấp phép để thực_hiện nghĩa_vụ tài_chính và nhận giấy_phép .'
sentences:
- ai có thẩm_quyền giải_quyết tố_cáo hành_vi vi_phạm_pháp_luật trong thực_hiện nhiệm_vụ
, công_vụ của cán_bộ , công_chức , viên_chức ?
- thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , giấy_phép khai_thác
nước dưới đất ?
- tôi có_thể đăng_ký ngành , nghề kinh_doanh không có trong hệ_thống ngành kinh_tế
việt_nam không ?
- source_sentence: khoản 2 điều 2 thông_tư 30 quy_định , đoàn khám bệnh , chữa bệnh_nhân_đạo
là một nhóm nhân_viên y_tế trong nước , nước_ngoài do cá_nhân , tổ_chức trong
nước , nước_ngoài tổ_chức để khám bệnh , chữa bệnh_nhân_đạo cho nhân_dân .
sentences:
- tàu_thủy lưu_trú du_lịch có tiêu_chí xếp_hạng bắt_buộc không ?
- tôi muốn xin cấp lại sổ thuyền_viên thì cần những hồ_sơ gì ?
- đoàn khám bệnh , chữa bệnh_nhân_đạo là gì ?
- source_sentence: 'thành_phần hồ_sơ thực_hiện tthc sửa_đổi , bổ_sung / cấp lại giấy
chứng_nhận lưu_hành tự_do ( cfs ) đối_với hàng_hóa xuất_khẩu thuộc phạm_vi quản_lý
của bộ nông_nghiệp và phát_triển nông_thôn ( quy_định tại quyết_định số 1312 /
qđ - bnn - qlcl ngày 22 / 4 / 2019 về việc công_bố thủ_tục hành_chính được thay_thế
, tthc bị bãi_bỏ lĩnh_vực quản_lý chất_lượng nông_lâm_sản và thủy_sản thuộc phạm_vi
chức_năng quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ) : - văn_bản đề_nghị
sửa_đổi , bổ_sung / cấp lại cfs của thương_nhân : 01 bản_chính . - các giấy_tờ
liên_quan đến việc sửa_đổi , bổ_sung / cấp lại cfs.'
sentences:
- kính gửi cục quản_lý chất_lượng nông_lâm_sản và thuỷ_sản , công_ty tôi đã được
quý cục cấp giấy chứng_nhận lưu_hành tự_do cfs , nay tôi muốn sửa_đổi một_số thông_tin
trên giấy cfs , vậy hồ_sơ đề_nghị sửa_đổi cần những gì ?
- thời_gian để được cấp quyết_định chỉ_định tổ_chức đánh_giá sự phù_hợp lĩnh_vực
sản_phẩm , hàng_hóa_vật_liệu xây_dựng đối_với trường_hợp thay_đổi , bổ_sung phạm_vi
, lĩnh_vực được chỉ_định
- hồ_sơ đăng_ký dự_tuyển lao_động của người lao_động gồm những gì ?
- source_sentence: 'hồ_sơ đề_nghị gia_hạn giấy_phép cung_cấp dịch_vụ trò_chơi điện_tử
g1 trên mạng bao_gồm : - đơn đề_nghị gia_hạn giấy_phép bao_gồm : tên , địa_chỉ
của doanh_nghiệp ; số , nơi cấp , ngày cấp giấy chứng_nhận đăng_ký doanh_nghiệp
; số , ngày cấp , ngày hiệu_lực của giấy_phép đã được cấp ; lý_do đề_nghị gia_hạn
giấy_phép ; cam_kết của doanh_nghiệp về các nội_dung kê_khai trong đơn đề_nghị
.'
sentences:
- thành_phần hồ_sơ
- nhà đầu_tư phải nộp bao_nhiêu bộ hồ_sơ đề_nghị thủ_tướng chính_phủ cho phép đầu_tư
gián_tiếp ra nước_ngoài đối_với các trường_hợp đầu_tư khác quy_định tại điều 9
nghị_định 135 / 2015 / nđ - cp ?
- phần vốn của thành_viên chưa góp vốn trong thời_hạn 90 ngày , kể từ ngày được
cấp giấy chứng_nhận đăng_ký doanh_nghiệp trong công_ty tnhh hai thành_viên trở
lên được xử_lý như thế_nào ? trường_hợp có người nhận mua lại phần vốn chưa góp
này thì công_ty có phải đăng_ký thay_đổi thành_viên với cơ_quan đăng_ký kinh_doanh
không ? hồ_sơ , thủ_tục được quy_định như thế_nào ?
- source_sentence: quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra
, áp_dụng biện_pháp phòng_vệ thương_mại .
sentences:
- mục_đích của thủ_tục khai_báo là gì ?
- đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay
chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải
làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ
?
- trường_hợp nào thì được cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ , thủ_tục
cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ như thế_nào ?
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5925925925925926
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7516339869281046
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8071895424836601
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8529411764705882
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5925925925925926
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25054466230936817
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.161437908496732
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0852941176470588
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5925925925925926
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7516339869281046
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8071895424836601
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8529411764705882
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7246556077543799
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6832935643393162
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6881841341907915
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5893246187363834
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7527233115468409
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8082788671023965
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8464052287581699
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5893246187363834
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25090777051561364
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16165577342047927
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.084640522875817
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5893246187363834
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7527233115468409
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8082788671023965
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8464052287581699
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7209821829203035
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6803091607013181
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6856356097551453
name: Cosine Map@100
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision ade1467d6266ae07e6f74aae34d56bf3b8acf3f7 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anhtuansh/gte-multilingual-base-Matryoshka-2e-9k")
# Run inference
sentences = [
'quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .',
'mục_đích của thủ_tục khai_báo là gì ?',
'đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
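Because the model was trained with `MatryoshkaLoss` at dimensions 768 and 512, embeddings can also be truncated to 512 dimensions at load time — a sketch assuming the standard `truncate_dim` argument of recent Sentence Transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load with Matryoshka truncation to the smaller trained dimension.
model = SentenceTransformer(
    "anhtuansh/gte-multilingual-base-Matryoshka-2e-9k", truncate_dim=512
)
embeddings = model.encode(["mục_đích của thủ_tục khai_báo là gì ?"])
print(embeddings.shape)
# (1, 512)
```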
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768` and `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) (a construction sketch follows the metrics table)
| Metric | dim_768 | dim_512 |
|:--------------------|:-----------|:----------|
| cosine_accuracy@1 | 0.5926 | 0.5893 |
| cosine_accuracy@3 | 0.7516 | 0.7527 |
| cosine_accuracy@5 | 0.8072 | 0.8083 |
| cosine_accuracy@10 | 0.8529 | 0.8464 |
| cosine_precision@1 | 0.5926 | 0.5893 |
| cosine_precision@3 | 0.2505 | 0.2509 |
| cosine_precision@5 | 0.1614 | 0.1617 |
| cosine_precision@10 | 0.0853 | 0.0846 |
| cosine_recall@1 | 0.5926 | 0.5893 |
| cosine_recall@3 | 0.7516 | 0.7527 |
| cosine_recall@5 | 0.8072 | 0.8083 |
| cosine_recall@10 | 0.8529 | 0.8464 |
| **cosine_ndcg@10** | **0.7247** | **0.721** |
| cosine_mrr@10 | 0.6833 | 0.6803 |
| cosine_map@100 | 0.6882 | 0.6856 |
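A minimal sketch of how such an evaluator can be constructed — the toy query/corpus/relevance dicts below are hypothetical placeholders, not the actual 918-sample evaluation split:
```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "mục_đích của thủ_tục khai_báo là gì ?"}
corpus = {"d1": "quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra ."}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs,
    truncate_dim=512,  # score embeddings at a Matryoshka dimension
    name="dim_512",
)
results = evaluator(model)  # `model` is the SentenceTransformer loaded above
```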
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,259 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 303.9 tokens</li><li>max: 3209 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 56.17 tokens</li><li>max: 578 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>điểm a , mục 2 phần ii thông_tư số 04 / 1999 / tt - bca ( c13 ) ngày 29 / 4 / 1999 của bộ công_an hướng_dẫn một_số quy_định của nghị_định số 05 / 1999 / nđ - cp ngày 03 / 01 / 1999 của chính_phủ về chứng_minh nhân_dân quy_định các trường_hợp phải đổi cmnd như sau : - quá thời_hạn sử_dụng 15 năm kể từ ngày cấp ; - cmnd rách , nát , không rõ ảnh hoặc một trong các thông_tin đã ghi trên cmnd ; - thay_đổi họ , tên , chữ đệm , ngày , tháng , năm sinh . những thay_đổi này phải có quyết_định của cơ_quan có thẩm_quyền ; - những người đã được cấp giấy cmnd nhưng chuyển nơi đăng_ký hộ_khẩu thường_trú ngoài phạm_vi tỉnh , thành_phố trực_thuộc trung_ương . trường_hợp chuyển nơi thường_trú trong phạm_vi tỉnh , thành_phố mà công_dân có yêu_cầu thì được đổi lại cmnd ; - thay_đổi đặc_điểm nhận_dạng là những trường_hợp đã qua phẫu_thuật thẩm_mỹ , chỉnh_hình hoặc vì lý_do khác đã làm thay_đổi hình_ảnh hoặc đặc_điểm nhận_dạng của họ .</code> | <code>công_dân phải làm thủ_tục đổi chứng_minh nhân_dân khi nào ?</code> |
| <code>việc thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề được tiến_hành như sau : <br> bước 1 . vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương tiếp_nhận đề_nghị khen_thưởng của các đơn_vị thực thuộc . <br> bước 2 . thẩm_định hồ_sơ , xin ý_kiến các cơ_quan liên_quan , báo_cáo hội_đồng thi_đua khen_thưởng cùng cấp , tổng_hợp trình bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố quyết_định khen_thưởng . <br> bước 3 . khi có quyết_định của bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố trực_thuộc trung_ương ; vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương thông_báo quyết_định , viết bằng , đóng_dấu và cấp_phát cho đơn_vị trình khen . <br> bước 4 . các trường_hợp không được khen_thưởng ( không đúng đối_tượng , không đủ tiêu_chuẩn , không đủ hồ_sơ...</code> | <code>đề_nghị cho biết trình_tự thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề</code> |
| <code>thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ được quy_định tại điều 21 luật nvqs năm 2015 , cụ_thể như sau : “ điều 21 . thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh sĩ1 . thời_hạn phục_vụ tại_ngũ trong thời_bình của hạ_sĩ_quan , binh_sĩ là 24 tháng . 2 . bộ_trưởng bộ quốc_phòng được quyết_định kéo_dài thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh_sĩ nhưng không quá 06 tháng trong trường_hợp sau đây : a ) để bảo_đảm_nhiệm_vụ sẵn_sàng chiến_đấu ; b ) đang thực_hiện nhiệm_vụ phòng , chống thiên_tai , dịch_bệnh , cứu_hộ , cứu nạn . 3 . thời_hạn phục_vụ của hạ_sĩ_quan , binh_sĩ trong tình_trạng chiến_tranh hoặc tình_trạng khẩn_cấp về quốc_phòng được thực_hiện theo lệnh tổng_động_viên hoặc động_viên cục_bộ . ”</code> | <code>quy_định thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters (a construction sketch follows the block):
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
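An illustrative reconstruction of that loss configuration in code, assuming `model` is the `SentenceTransformer` being trained:
```python
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512],
    matryoshka_weights=[1, 1],
)
```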
### Evaluation Dataset
#### json
* Dataset: json
* Size: 918 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 918 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 307.44 tokens</li><li>max: 3463 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 55.94 tokens</li><li>max: 383 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>theo quy_định tại khoản 9 điều 1 nghị_định số 161 / 2018 / nđ - cpngày 29 / 11 / 2018 của chính_phủ sửa_đổi , bổ_sung một_số quy_định về tuyển_dụng công_chức , viên_chức , nâng ngạch công_chức , thăng_hạng viên_chức và thực_hiện chế_độ hợp_đồng một_số loại công_việc trong cơ_quan hành_chính nhà_nước , đơn_vị sự_nghiệp công_lập thì đối_tượng và điểm ưu_tiên trong thi_tuyển hoặc xét tuyển công_chức : <br> + anh_hùng lực_lượng vũ_trang , anh_hùng lao_động , thương_binh , người hưởng chính_sách như thương_binh , thương_binh loại b : được cộng 7,5 điểm vào kết_quả điểm thi tại vòng 2 ; <br> + người dân_tộc_thiểu_số , sĩ_quan quân_đội , sĩ_quan công_an , quân_nhân chuyên_nghiệp , người làm công_tác cơ_yếu chuyển ngành , con liệt_sĩ , con thương_binh , con bệnh_binh , con của người hưởng chính_sách như thương_binh , con của thương_binh loại b , con của người hoạt_động cách_mạng trước tổng_khởi_nghĩa ( từ ngày 19 / 8 / 1945 trở về trước ) , con_đẻ của người hoạt_động kháng_chiến bị nhiễm chất_độc_hó...</code> | <code>đề_nghị cho tôi được biết đối_tượng được hưởng ưu_tiên trong tuyển_dụng công_chức ?</code> |
| <code>1 . khi phát_hiện tổ_chức , cá_nhân kê_khai hồ_sơ , thông_báo không_trung_thực hoặc vi_phạm_quy_định tại điều 8 nghị_định số 23 / 2019 / nđ - cp , cơ_quan tiếp_nhận thông_báo yêu_cầu tạm dừng hoạt_động triển_lãm bằng văn_bản ( mẫu_số 03 tại phụ_lục ban_hành kèm theo nghị_định số 23 / 2019 / nđ - cp ) . 2 . tổ_chức , cá_nhân phải dừng hoạt_động triển_lãm ngay khi nhận được văn_bản của cơ_quan có thẩm_quyền ; kịp_thời_khắc_phục hậu_quả , đề_xuất phương_án tiếp_tục tổ_chức triển_lãm gửi cơ_quan có thẩm_quyền xem_xét , quyết_định . 3 . kết_quả xem_xét , quyết_định của cơ_quan có thẩm_quyền phải được thể_hiện bằng văn_bản và gửi cho tổ_chức , cá_nhân biết để thực_hiện . thời_hạn gửi văn_bản cho tổ_chức , cá_nhân là 03 ngày , kể từ ngày cơ_quan có thẩm_quyền nhận được văn_bản đề_xuất phương_án tiếp_tục tổ_chức triển_lãm của tổ_chức , cá_nhân .</code> | <code>những lý_do nào khiến hoạt_động triển_lãm bị tạm dừng ?</code> |
| <code>theo quy_định tại khoản 1 điều 33 luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ và điểm a khoản 4 điều 3 thông_tư số 16 / 2018 / tt - bca ngày 15 / 5 / 2018 của bộ công_an quy_định chi_tiết thi_hành một_số điều của luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ thì thủ_tục đề_nghị cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng được thực_hiện như sau : a ) hồ_sơ đề_nghị bao_gồm : văn_bản đề_nghị nêu rõ số_lượng , chủng_loại , nguồn_gốc xuất_xứ của vũ_khí cần vận_chuyển ; nơi đi , nơi đến , thời_gian và tuyến đường vận_chuyển ; họ và tên , địa_chỉ của người chịu trách_nhiệm vận_chuyển , người điều_khiển phương_tiện ; biển kiểm_soát của phương_tiện ; giấy giới_thiệu kèm theo bản_sao thẻ căn_cước công_dân , chứng_minh nhân_dân , hộ_chiếu hoặc chứng_minh công_an nhân_dân của người đến liên_hệ ; b ) hồ_sơ lập thành 01 bộ và nộp tại cục cảnh_sát qlhc về ttxh ; c ) trong thời_hạn 05 n...</code> | <code>thủ_tục cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng thực_hiện như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:----------------------:|:----------------------:|
| 0 | 0 | - | - | 0.5887 | 0.5836 |
| 0.0048 | 10 | 0.1624 | - | - | - |
| 0.0097 | 20 | 0.1758 | - | - | - |
| 0.0145 | 30 | 0.2834 | - | - | - |
| 0.0194 | 40 | 0.2292 | - | - | - |
| 0.0242 | 50 | 0.1794 | - | - | - |
| 0.0291 | 60 | 0.4504 | - | - | - |
| 0.0339 | 70 | 0.2513 | - | - | - |
| 0.0387 | 80 | 0.1179 | - | - | - |
| 0.0436 | 90 | 0.1019 | - | - | - |
| 0.0484 | 100 | 0.1249 | - | - | - |
| 0.0533 | 110 | 0.0231 | - | - | - |
| 0.0581 | 120 | 0.1119 | - | - | - |
| 0.0630 | 130 | 0.2114 | - | - | - |
| 0.0678 | 140 | 0.0779 | - | - | - |
| 0.0726 | 150 | 0.0785 | - | - | - |
| 0.0775 | 160 | 0.2304 | - | - | - |
| 0.0823 | 170 | 0.089 | - | - | - |
| 0.0872 | 180 | 0.055 | - | - | - |
| 0.0920 | 190 | 0.2606 | - | - | - |
| 0.0969 | 200 | 0.0277 | - | - | - |
| 0.1017 | 210 | 0.0455 | - | - | - |
| 0.1065 | 220 | 0.2025 | - | - | - |
| 0.1114 | 230 | 0.187 | - | - | - |
| 0.1162 | 240 | 0.036 | - | - | - |
| 0.1211 | 250 | 0.0707 | - | - | - |
| 0.1259 | 260 | 0.0741 | - | - | - |
| 0.1308 | 270 | 0.0408 | - | - | - |
| 0.1356 | 280 | 0.068 | - | - | - |
| 0.1404 | 290 | 0.0423 | - | - | - |
| 0.1453 | 300 | 0.1999 | - | - | - |
| 0.1501 | 310 | 0.0113 | - | - | - |
| 0.1550 | 320 | 0.1331 | - | - | - |
| 0.1598 | 330 | 0.3229 | - | - | - |
| 0.1646 | 340 | 0.055 | - | - | - |
| 0.1695 | 350 | 0.0381 | - | - | - |
| 0.1743 | 360 | 0.0737 | - | - | - |
| 0.1792 | 370 | 0.0232 | - | - | - |
| 0.1840 | 380 | 0.025 | - | - | - |
| 0.1889 | 390 | 0.0672 | - | - | - |
| 0.1937 | 400 | 0.1074 | - | - | - |
| 0.1985 | 410 | 0.0353 | - | - | - |
| 0.2034 | 420 | 0.0354 | - | - | - |
| 0.2082 | 430 | 0.0621 | - | - | - |
| 0.2131 | 440 | 0.0182 | - | - | - |
| 0.2179 | 450 | 0.0181 | - | - | - |
| 0.2228 | 460 | 0.0838 | - | - | - |
| 0.2276 | 470 | 0.253 | - | - | - |
| 0.2324 | 480 | 0.0571 | - | - | - |
| 0.2373 | 490 | 0.0024 | - | - | - |
| 0.2421 | 500 | 0.007 | - | - | - |
| 0.2470 | 510 | 0.009 | - | - | - |
| 0.2518 | 520 | 0.0454 | - | - | - |
| 0.2567 | 530 | 0.087 | - | - | - |
| 0.2615 | 540 | 0.0319 | - | - | - |
| 0.2663 | 550 | 0.0124 | - | - | - |
| 0.2712 | 560 | 0.0865 | - | - | - |
| 0.2760 | 570 | 0.0534 | - | - | - |
| 0.2809 | 580 | 0.09 | - | - | - |
| 0.2857 | 590 | 0.038 | - | - | - |
| 0.2906 | 600 | 0.1974 | - | - | - |
| 0.2954 | 610 | 0.0022 | - | - | - |
| 0.3002 | 620 | 0.0099 | - | - | - |
| 0.3051 | 630 | 0.0334 | - | - | - |
| 0.3099 | 640 | 0.1995 | - | - | - |
| 0.3148 | 650 | 0.0045 | - | - | - |
| 0.3196 | 660 | 0.0678 | - | - | - |
| 0.3245 | 670 | 0.0579 | - | - | - |
| 0.3293 | 680 | 0.0028 | - | - | - |
| 0.3341 | 690 | 0.0642 | - | - | - |
| 0.3390 | 700 | 0.0052 | - | - | - |
| 0.3438 | 710 | 0.0273 | - | - | - |
| 0.3487 | 720 | 0.0027 | - | - | - |
| 0.3535 | 730 | 0.0592 | - | - | - |
| 0.3584 | 740 | 0.0009 | - | - | - |
| 0.3632 | 750 | 0.0526 | - | - | - |
| 0.3680 | 760 | 0.0124 | - | - | - |
| 0.3729 | 770 | 0.0053 | - | - | - |
| 0.3777 | 780 | 0.0227 | - | - | - |
| 0.3826 | 790 | 0.3092 | - | - | - |
| 0.3874 | 800 | 0.1404 | - | - | - |
| 0.3923 | 810 | 0.1714 | - | - | - |
| 0.3971 | 820 | 0.0396 | - | - | - |
| 0.4019 | 830 | 0.0544 | - | - | - |
| 0.4068 | 840 | 0.0703 | - | - | - |
| 0.4116 | 850 | 0.0765 | - | - | - |
| 0.4165 | 860 | 0.0069 | - | - | - |
| 0.4213 | 870 | 0.0094 | - | - | - |
| 0.4262 | 880 | 0.0231 | - | - | - |
| 0.4310 | 890 | 0.0129 | - | - | - |
| 0.4358 | 900 | 0.0091 | - | - | - |
| 0.4407 | 910 | 0.0006 | - | - | - |
| 0.4455 | 920 | 0.0194 | - | - | - |
| 0.4504 | 930 | 0.009 | - | - | - |
| 0.4552 | 940 | 0.0123 | - | - | - |
| 0.4600 | 950 | 0.0325 | - | - | - |
| 0.4649 | 960 | 0.0004 | - | - | - |
| 0.4697 | 970 | 0.004 | - | - | - |
| 0.4746 | 980 | 0.078 | - | - | - |
| 0.4794 | 990 | 0.0934 | - | - | - |
| 0.4843 | 1000 | 0.0732 | - | - | - |
| 0.4891 | 1010 | 0.0888 | - | - | - |
| 0.4939 | 1020 | 0.3093 | - | - | - |
| 0.4988 | 1030 | 0.0506 | - | - | - |
| 0.5036 | 1040 | 0.0016 | - | - | - |
| 0.5085 | 1050 | 0.0022 | - | - | - |
| 0.5133 | 1060 | 0.1189 | - | - | - |
| 0.5182 | 1070 | 0.1335 | - | - | - |
| 0.5230 | 1080 | 0.0043 | - | - | - |
| 0.5278 | 1090 | 0.0099 | - | - | - |
| 0.5327 | 1100 | 0.073 | - | - | - |
| 0.5375 | 1110 | 0.001 | - | - | - |
| 0.5424 | 1120 | 0.0079 | - | - | - |
| 0.5472 | 1130 | 0.0274 | - | - | - |
| 0.5521 | 1140 | 0.0009 | - | - | - |
| 0.5569 | 1150 | 0.0132 | - | - | - |
| 0.5617 | 1160 | 0.0673 | - | - | - |
| 0.5666 | 1170 | 0.0024 | - | - | - |
| 0.5714 | 1180 | 0.0586 | - | - | - |
| 0.5763 | 1190 | 0.1613 | - | - | - |
| 0.5811 | 1200 | 0.0193 | - | - | - |
| 0.5860 | 1210 | 0.01 | - | - | - |
| 0.5908 | 1220 | 0.0268 | - | - | - |
| 0.5956 | 1230 | 0.0857 | - | - | - |
| 0.6005 | 1240 | 0.0131 | - | - | - |
| 0.6053 | 1250 | 0.1802 | - | - | - |
| 0.6102 | 1260 | 0.0209 | - | - | - |
| 0.6150 | 1270 | 0.0022 | - | - | - |
| 0.6199 | 1280 | 0.1448 | - | - | - |
| 0.6247 | 1290 | 0.0977 | - | - | - |
| 0.6295 | 1300 | 0.0018 | - | - | - |
| 0.6344 | 1310 | 0.0153 | - | - | - |
| 0.6392 | 1320 | 0.0456 | - | - | - |
| 0.6441 | 1330 | 0.011 | - | - | - |
| 0.6489 | 1340 | 0.0405 | - | - | - |
| 0.6538 | 1350 | 0.1873 | - | - | - |
| 0.6586 | 1360 | 0.0497 | - | - | - |
| 0.6634 | 1370 | 0.0007 | - | - | - |
| 0.6683 | 1380 | 0.076 | - | - | - |
| 0.6731 | 1390 | 0.0007 | - | - | - |
| 0.6780 | 1400 | 0.0126 | - | - | - |
| 0.6828 | 1410 | 0.0559 | - | - | - |
| 0.6877 | 1420 | 0.2497 | - | - | - |
| 0.6925 | 1430 | 0.011 | - | - | - |
| 0.6973 | 1440 | 0.0069 | - | - | - |
| 0.7022 | 1450 | 0.1541 | - | - | - |
| 0.7070 | 1460 | 0.0018 | - | - | - |
| 0.7119 | 1470 | 0.0186 | - | - | - |
| 0.7167 | 1480 | 0.0162 | - | - | - |
| 0.7215 | 1490 | 0.0129 | - | - | - |
| 0.7264 | 1500 | 0.0068 | - | - | - |
| 0.7312 | 1510 | 0.0015 | - | - | - |
| 0.7361 | 1520 | 0.0029 | - | - | - |
| 0.7409 | 1530 | 0.0008 | - | - | - |
| 0.7458 | 1540 | 0.0044 | - | - | - |
| 0.7506 | 1550 | 0.002 | - | - | - |
| 0.7554 | 1560 | 0.0062 | - | - | - |
| 0.7603 | 1570 | 0.0326 | - | - | - |
| 0.7651 | 1580 | 0.0936 | - | - | - |
| 0.7700 | 1590 | 0.0015 | - | - | - |
| 0.7748 | 1600 | 0.1819 | - | - | - |
| 0.7797 | 1610 | 0.0948 | - | - | - |
| 0.7845 | 1620 | 0.0659 | - | - | - |
| 0.7893 | 1630 | 0.0023 | - | - | - |
| 0.7942 | 1640 | 0.0014 | - | - | - |
| 0.7990 | 1650 | 0.0252 | - | - | - |
| 0.8039 | 1660 | 0.0067 | - | - | - |
| 0.8087 | 1670 | 0.032 | - | - | - |
| 0.8136 | 1680 | 0.001 | - | - | - |
| 0.8184 | 1690 | 0.0114 | - | - | - |
| 0.8232 | 1700 | 0.0096 | - | - | - |
| 0.8281 | 1710 | 0.0002 | - | - | - |
| 0.8329 | 1720 | 0.0894 | - | - | - |
| 0.8378 | 1730 | 0.006 | - | - | - |
| 0.8426 | 1740 | 0.0017 | - | - | - |
| 0.8475 | 1750 | 0.0215 | - | - | - |
| 0.8523 | 1760 | 0.0311 | - | - | - |
| 0.8571 | 1770 | 0.1813 | - | - | - |
| 0.8620 | 1780 | 0.0052 | - | - | - |
| 0.8668 | 1790 | 0.0016 | - | - | - |
| 0.8717 | 1800 | 0.0266 | - | - | - |
| 0.8765 | 1810 | 0.0048 | - | - | - |
| 0.8814 | 1820 | 0.0019 | - | - | - |
| 0.8862 | 1830 | 0.0689 | - | - | - |
| 0.8910 | 1840 | 0.0066 | - | - | - |
| 0.8959 | 1850 | 0.0189 | - | - | - |
| 0.9007 | 1860 | 0.0039 | - | - | - |
| 0.9056 | 1870 | 0.0109 | - | - | - |
| 0.9104 | 1880 | 0.0033 | - | - | - |
| 0.9153 | 1890 | 0.0196 | - | - | - |
| 0.9201 | 1900 | 0.0004 | - | - | - |
| 0.9249 | 1910 | 0.0061 | - | - | - |
| 0.9298 | 1920 | 0.0013 | - | - | - |
| 0.9346 | 1930 | 0.0262 | - | - | - |
| 0.9395 | 1940 | 0.0633 | - | - | - |
| 0.9443 | 1950 | 0.0068 | - | - | - |
| 0.9492 | 1960 | 0.0118 | - | - | - |
| 0.9540 | 1970 | 0.0056 | - | - | - |
| 0.9588 | 1980 | 0.0037 | - | - | - |
| 0.9637 | 1990 | 0.0238 | - | - | - |
| 0.9685 | 2000 | 0.002 | - | - | - |
| 0.9734 | 2010 | 0.002 | - | - | - |
| 0.9782 | 2020 | 0.1351 | - | - | - |
| 0.9831 | 2030 | 0.0013 | - | - | - |
| 0.9879 | 2040 | 0.0002 | - | - | - |
| 0.9927 | 2050 | 0.0647 | - | - | - |
| 0.9976 | 2060 | 0.0085 | - | - | - |
| 1.0 | 2065 | - | 0.0219 | 0.7058 | 0.7040 |
| 1.0024 | 2070 | 0.0087 | - | - | - |
| 1.0073 | 2080 | 0.0668 | - | - | - |
| 1.0121 | 2090 | 0.0118 | - | - | - |
| 1.0169 | 2100 | 0.0008 | - | - | - |
| 1.0218 | 2110 | 0.0051 | - | - | - |
| 1.0266 | 2120 | 0.0019 | - | - | - |
| 1.0315 | 2130 | 0.0074 | - | - | - |
| 1.0363 | 2140 | 0.088 | - | - | - |
| 1.0412 | 2150 | 0.0009 | - | - | - |
| 1.0460 | 2160 | 0.001 | - | - | - |
| 1.0508 | 2170 | 0.0006 | - | - | - |
| 1.0557 | 2180 | 0.0006 | - | - | - |
| 1.0605 | 2190 | 0.2754 | - | - | - |
| 1.0654 | 2200 | 0.0127 | - | - | - |
| 1.0702 | 2210 | 0.0017 | - | - | - |
| 1.0751 | 2220 | 0.0242 | - | - | - |
| 1.0799 | 2230 | 0.0393 | - | - | - |
| 1.0847 | 2240 | 0.0053 | - | - | - |
| 1.0896 | 2250 | 0.0133 | - | - | - |
| 1.0944 | 2260 | 0.0191 | - | - | - |
| 1.0993 | 2270 | 0.0446 | - | - | - |
| 1.1041 | 2280 | 0.0015 | - | - | - |
| 1.1090 | 2290 | 0.0012 | - | - | - |
| 1.1138 | 2300 | 0.0028 | - | - | - |
| 1.1186 | 2310 | 0.0244 | - | - | - |
| 1.1235 | 2320 | 0.0026 | - | - | - |
| 1.1283 | 2330 | 0.0199 | - | - | - |
| 1.1332 | 2340 | 0.0176 | - | - | - |
| 1.1380 | 2350 | 0.0029 | - | - | - |
| 1.1429 | 2360 | 0.0684 | - | - | - |
| 1.1477 | 2370 | 0.0006 | - | - | - |
| 1.1525 | 2380 | 0.0017 | - | - | - |
| 1.1574 | 2390 | 0.0189 | - | - | - |
| 1.1622 | 2400 | 0.0094 | - | - | - |
| 1.1671 | 2410 | 0.0016 | - | - | - |
| 1.1719 | 2420 | 0.0013 | - | - | - |
| 1.1768 | 2430 | 0.0579 | - | - | - |
| 1.1816 | 2440 | 0.0002 | - | - | - |
| 1.1864 | 2450 | 0.0168 | - | - | - |
| 1.1913 | 2460 | 0.044 | - | - | - |
| 1.1961 | 2470 | 0.1015 | - | - | - |
| 1.2010 | 2480 | 0.0012 | - | - | - |
| 1.2058 | 2490 | 0.0044 | - | - | - |
| 1.2107 | 2500 | 0.0003 | - | - | - |
| 1.2155 | 2510 | 0.0042 | - | - | - |
| 1.2203 | 2520 | 0.0038 | - | - | - |
| 1.2252 | 2530 | 0.002 | - | - | - |
| 1.2300 | 2540 | 0.001 | - | - | - |
| 1.2349 | 2550 | 0.0035 | - | - | - |
| 1.2397 | 2560 | 0.0001 | - | - | - |
| 1.2446 | 2570 | 0.0008 | - | - | - |
| 1.2494 | 2580 | 0.0029 | - | - | - |
| 1.2542 | 2590 | 0.0091 | - | - | - |
| 1.2591 | 2600 | 0.0011 | - | - | - |
| 1.2639 | 2610 | 0.0001 | - | - | - |
| 1.2688 | 2620 | 0.0004 | - | - | - |
| 1.2736 | 2630 | 0.0008 | - | - | - |
| 1.2785 | 2640 | 0.0057 | - | - | - |
| 1.2833 | 2650 | 0.0007 | - | - | - |
| 1.2881 | 2660 | 0.0001 | - | - | - |
| 1.2930 | 2670 | 0.0002 | - | - | - |
| 1.2978 | 2680 | 0.0001 | - | - | - |
| 1.3027 | 2690 | 0.0003 | - | - | - |
| 1.3075 | 2700 | 0.001 | - | - | - |
| 1.3123 | 2710 | 0.008 | - | - | - |
| 1.3172 | 2720 | 0.0003 | - | - | - |
| 1.3220 | 2730 | 0.0002 | - | - | - |
| 1.3269 | 2740 | 0.0005 | - | - | - |
| 1.3317 | 2750 | 0.0054 | - | - | - |
| 1.3366 | 2760 | 0.0008 | - | - | - |
| 1.3414 | 2770 | 0.0005 | - | - | - |
| 1.3462 | 2780 | 0.0002 | - | - | - |
| 1.3511 | 2790 | 0.0067 | - | - | - |
| 1.3559 | 2800 | 0.0004 | - | - | - |
| 1.3608 | 2810 | 0.0007 | - | - | - |
| 1.3656 | 2820 | 0.0003 | - | - | - |
| 1.3705 | 2830 | 0.0011 | - | - | - |
| 1.3753 | 2840 | 0.0003 | - | - | - |
| 1.3801 | 2850 | 0.002 | - | - | - |
| 1.3850 | 2860 | 0.026 | - | - | - |
| 1.3898 | 2870 | 0.0008 | - | - | - |
| 1.3947 | 2880 | 0.0003 | - | - | - |
| 1.3995 | 2890 | 0.0158 | - | - | - |
| 1.4044 | 2900 | 0.0035 | - | - | - |
| 1.4092 | 2910 | 0.0001 | - | - | - |
| 1.4140 | 2920 | 0.0002 | - | - | - |
| 1.4189 | 2930 | 0.0003 | - | - | - |
| 1.4237 | 2940 | 0.0005 | - | - | - |
| 1.4286 | 2950 | 0.0004 | - | - | - |
| 1.4334 | 2960 | 0.0003 | - | - | - |
| 1.4383 | 2970 | 0.0002 | - | - | - |
| 1.4431 | 2980 | 0.0003 | - | - | - |
| 1.4479 | 2990 | 0.0005 | - | - | - |
| 1.4528 | 3000 | 0.0005 | - | - | - |
| 1.4576 | 3010 | 0.0005 | - | - | - |
| 1.4625 | 3020 | 0.0001 | - | - | - |
| 1.4673 | 3030 | 0.0002 | - | - | - |
| 1.4722 | 3040 | 0.0001 | - | - | - |
| 1.4770 | 3050 | 0.0123 | - | - | - |
| 1.4818 | 3060 | 0.0021 | - | - | - |
| 1.4867 | 3070 | 0.0001 | - | - | - |
| 1.4915 | 3080 | 0.0029 | - | - | - |
| 1.4964 | 3090 | 0.0004 | - | - | - |
| 1.5012 | 3100 | 0.0008 | - | - | - |
| 1.5061 | 3110 | 0.0012 | - | - | - |
| 1.5109 | 3120 | 0.0003 | - | - | - |
| 1.5157 | 3130 | 0.0005 | - | - | - |
| 1.5206 | 3140 | 0.0003 | - | - | - |
| 1.5254 | 3150 | 0.0002 | - | - | - |
| 1.5303 | 3160 | 0.0008 | - | - | - |
| 1.5351 | 3170 | 0.0022 | - | - | - |
| 1.5400 | 3180 | 0.0001 | - | - | - |
| 1.5448 | 3190 | 0.0062 | - | - | - |
| 1.5496 | 3200 | 0.0011 | - | - | - |
| 1.5545 | 3210 | 0.0009 | - | - | - |
| 1.5593 | 3220 | 0.0005 | - | - | - |
| 1.5642 | 3230 | 0.0002 | - | - | - |
| 1.5690 | 3240 | 0.0001 | - | - | - |
| 1.5738 | 3250 | 0.0007 | - | - | - |
| 1.5787 | 3260 | 0.0005 | - | - | - |
| 1.5835 | 3270 | 0.0002 | - | - | - |
| 1.5884 | 3280 | 0.0002 | - | - | - |
| 1.5932 | 3290 | 0.0223 | - | - | - |
| 1.5981 | 3300 | 0.0007 | - | - | - |
| 1.6029 | 3310 | 0.0001 | - | - | - |
| 1.6077 | 3320 | 0.0015 | - | - | - |
| 1.6126 | 3330 | 0.0 | - | - | - |
| 1.6174 | 3340 | 0.0003 | - | - | - |
| 1.6223 | 3350 | 0.0023 | - | - | - |
| 1.6271 | 3360 | 0.0005 | - | - | - |
| 1.6320 | 3370 | 0.0003 | - | - | - |
| 1.6368 | 3380 | 0.0004 | - | - | - |
| 1.6416 | 3390 | 0.0004 | - | - | - |
| 1.6465 | 3400 | 0.0013 | - | - | - |
| 1.6513 | 3410 | 0.0043 | - | - | - |
| 1.6562 | 3420 | 0.0002 | - | - | - |
| 1.6610 | 3430 | 0.0002 | - | - | - |
| 1.6659 | 3440 | 0.0002 | - | - | - |
| 1.6707 | 3450 | 0.0004 | - | - | - |
| 1.6755 | 3460 | 0.0001 | - | - | - |
| 1.6804 | 3470 | 0.0002 | - | - | - |
| 1.6852 | 3480 | 0.0086 | - | - | - |
| 1.6901 | 3490 | 0.0014 | - | - | - |
| 1.6949 | 3500 | 0.0004 | - | - | - |
| 1.6998 | 3510 | 0.0001 | - | - | - |
| 1.7046 | 3520 | 0.0012 | - | - | - |
| 1.7094 | 3530 | 0.0004 | - | - | - |
| 1.7143 | 3540 | 0.0002 | - | - | - |
| 1.7191 | 3550 | 0.0001 | - | - | - |
| 1.7240 | 3560 | 0.0001 | - | - | - |
| 1.7288 | 3570 | 0.0015 | - | - | - |
| 1.7337 | 3580 | 0.0001 | - | - | - |
| 1.7385 | 3590 | 0.0004 | - | - | - |
| 1.7433 | 3600 | 0.0003 | - | - | - |
| 1.7482 | 3610 | 0.0002 | - | - | - |
| 1.7530 | 3620 | 0.0007 | - | - | - |
| 1.7579 | 3630 | 0.0011 | - | - | - |
| 1.7627 | 3640 | 0.0007 | - | - | - |
| 1.7676 | 3650 | 0.0004 | - | - | - |
| 1.7724 | 3660 | 0.0183 | - | - | - |
| 1.7772 | 3670 | 0.0043 | - | - | - |
| 1.7821 | 3680 | 0.0008 | - | - | - |
| 1.7869 | 3690 | 0.0001 | - | - | - |
| 1.7918 | 3700 | 0.0003 | - | - | - |
| 1.7966 | 3710 | 0.0004 | - | - | - |
| 1.8015 | 3720 | 0.0005 | - | - | - |
| 1.8063 | 3730 | 0.0003 | - | - | - |
| 1.8111 | 3740 | 0.0002 | - | - | - |
| 1.8160 | 3750 | 0.0009 | - | - | - |
| 1.8208 | 3760 | 0.0015 | - | - | - |
| 1.8257 | 3770 | 0.0002 | - | - | - |
| 1.8305 | 3780 | 0.0004 | - | - | - |
| 1.8354 | 3790 | 0.0001 | - | - | - |
| 1.8402 | 3800 | 0.0002 | - | - | - |
| 1.8450 | 3810 | 0.0039 | - | - | - |
| 1.8499 | 3820 | 0.0002 | - | - | - |
| 1.8547 | 3830 | 0.0015 | - | - | - |
| 1.8596 | 3840 | 0.0003 | - | - | - |
| 1.8644 | 3850 | 0.0001 | - | - | - |
| 1.8692 | 3860 | 0.0001 | - | - | - |
| 1.8741 | 3870 | 0.0048 | - | - | - |
| 1.8789 | 3880 | 0.0002 | - | - | - |
| 1.8838 | 3890 | 0.0003 | - | - | - |
| 1.8886 | 3900 | 0.0003 | - | - | - |
| 1.8935 | 3910 | 0.0003 | - | - | - |
| 1.8983 | 3920 | 0.0006 | - | - | - |
| 1.9031 | 3930 | 0.0003 | - | - | - |
| 1.9080 | 3940 | 0.0002 | - | - | - |
| 1.9128 | 3950 | 0.0003 | - | - | - |
| 1.9177 | 3960 | 0.0013 | - | - | - |
| 1.9225 | 3970 | 0.0002 | - | - | - |
| 1.9274 | 3980 | 0.0006 | - | - | - |
| 1.9322 | 3990 | 0.002 | - | - | - |
| 1.9370 | 4000 | 0.0005 | - | - | - |
| 1.9419 | 4010 | 0.0003 | - | - | - |
| 1.9467 | 4020 | 0.0003 | - | - | - |
| 1.9516 | 4030 | 0.0018 | - | - | - |
| 1.9564 | 4040 | 0.0003 | - | - | - |
| 1.9613 | 4050 | 0.0002 | - | - | - |
| 1.9661 | 4060 | 0.0062 | - | - | - |
| 1.9709 | 4070 | 0.0001 | - | - | - |
| 1.9758 | 4080 | 0.0068 | - | - | - |
| 1.9806 | 4090 | 0.0006 | - | - | - |
| 1.9855 | 4100 | 0.0002 | - | - | - |
| 1.9903 | 4110 | 0.0003 | - | - | - |
| 1.9952 | 4120 | 0.0002 | - | - | - |
| **2.0** | **4130** | **0.0028** | **0.0194** | **0.7247** | **0.721** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.4.0
- Accelerate: 0.29.3
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision ade1467d6266ae07e6f74aae34d56bf3b8acf3f7 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("anhtuansh/gte-multilingual-base-Matryoshka-2e-9k")
# Run inference
sentences = [
'quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .',
'mục_đích của thủ_tục khai_báo là gì ?',
'đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
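Because the model was trained with Matryoshka dimensions of 768 and 512 (see Training Details below), the embeddings can likely be truncated to 512 dimensions for cheaper storage and search at a small cost in retrieval quality (cosine NDCG@10 drops from 0.7247 to 0.721 in the evaluation below). A minimal sketch using the `truncate_dim` argument; the query string is taken from the example above:

```python
from sentence_transformers import SentenceTransformer

# Load with truncated embeddings; 512 is the smaller Matryoshka
# dimension this model was trained with (trust_remote_code as above)
model = SentenceTransformer(
    "anhtuansh/gte-multilingual-base-Matryoshka-2e-9k",
    truncate_dim=512,
    trust_remote_code=True,
)

embeddings = model.encode(["mục_đích của thủ_tục khai_báo là gì ?"])
print(embeddings.shape)
# (1, 512)
```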
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768` and `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 |
|:--------------------|:-----------|:----------|
| cosine_accuracy@1 | 0.5926 | 0.5893 |
| cosine_accuracy@3 | 0.7516 | 0.7527 |
| cosine_accuracy@5 | 0.8072 | 0.8083 |
| cosine_accuracy@10 | 0.8529 | 0.8464 |
| cosine_precision@1 | 0.5926 | 0.5893 |
| cosine_precision@3 | 0.2505 | 0.2509 |
| cosine_precision@5 | 0.1614 | 0.1617 |
| cosine_precision@10 | 0.0853 | 0.0846 |
| cosine_recall@1 | 0.5926 | 0.5893 |
| cosine_recall@3 | 0.7516 | 0.7527 |
| cosine_recall@5 | 0.8072 | 0.8083 |
| cosine_recall@10 | 0.8529 | 0.8464 |
| **cosine_ndcg@10** | **0.7247** | **0.721** |
| cosine_mrr@10 | 0.6833 | 0.6803 |
| cosine_map@100 | 0.6882 | 0.6856 |
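These figures come from `InformationRetrievalEvaluator`. A hedged sketch of how such an evaluation can be run — the toy query/corpus below is hypothetical and for illustration only; the reported numbers were computed on the held-out evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data: queries and corpus map ids to text;
# relevant_docs maps each query id to the ids of its relevant documents
queries = {"q1": "mục_đích của thủ_tục khai_báo là gì ?"}
corpus = {"d1": "quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra ..."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("anhtuansh/gte-multilingual-base-Matryoshka-2e-9k", trust_remote_code=True)
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)  # dict with accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(results)
```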
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 8,259 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 303.9 tokens</li><li>max: 3209 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 56.17 tokens</li><li>max: 578 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>điểm a , mục 2 phần ii thông_tư số 04 / 1999 / tt - bca ( c13 ) ngày 29 / 4 / 1999 của bộ công_an hướng_dẫn một_số quy_định của nghị_định số 05 / 1999 / nđ - cp ngày 03 / 01 / 1999 của chính_phủ về chứng_minh nhân_dân quy_định các trường_hợp phải đổi cmnd như sau : - quá thời_hạn sử_dụng 15 năm kể từ ngày cấp ; - cmnd rách , nát , không rõ ảnh hoặc một trong các thông_tin đã ghi trên cmnd ; - thay_đổi họ , tên , chữ đệm , ngày , tháng , năm sinh . những thay_đổi này phải có quyết_định của cơ_quan có thẩm_quyền ; - những người đã được cấp giấy cmnd nhưng chuyển nơi đăng_ký hộ_khẩu thường_trú ngoài phạm_vi tỉnh , thành_phố trực_thuộc trung_ương . trường_hợp chuyển nơi thường_trú trong phạm_vi tỉnh , thành_phố mà công_dân có yêu_cầu thì được đổi lại cmnd ; - thay_đổi đặc_điểm nhận_dạng là những trường_hợp đã qua phẫu_thuật thẩm_mỹ , chỉnh_hình hoặc vì lý_do khác đã làm thay_đổi hình_ảnh hoặc đặc_điểm nhận_dạng của họ .</code> | <code>công_dân phải làm thủ_tục đổi chứng_minh nhân_dân khi nào ?</code> |
| <code>việc thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề được tiến_hành như sau : <br> bước 1 . vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương tiếp_nhận đề_nghị khen_thưởng của các đơn_vị thực thuộc . <br> bước 2 . thẩm_định hồ_sơ , xin ý_kiến các cơ_quan liên_quan , báo_cáo hội_đồng thi_đua khen_thưởng cùng cấp , tổng_hợp trình bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố quyết_định khen_thưởng . <br> bước 3 . khi có quyết_định của bộ_trưởng , thủ_trưởng đơn_vị , chủ_tịch ubnd tỉnh , thành_phố trực_thuộc trung_ương ; vụ , phòng , ban thi_đua – khen_thưởng các bộ , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương thông_báo quyết_định , viết bằng , đóng_dấu và cấp_phát cho đơn_vị trình khen . <br> bước 4 . các trường_hợp không được khen_thưởng ( không đúng đối_tượng , không đủ tiêu_chuẩn , không đủ hồ_sơ...</code> | <code>đề_nghị cho biết trình_tự thực_hiện thủ_tục tặng cờ thi_đua cấp bộ , ban , ngành , đoàn_thể trung_ương , tỉnh , thành_phố trực_thuộc trung_ương theo đợt hoặc chuyên_đề</code> |
| <code>thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ được quy_định tại điều 21 luật nvqs năm 2015 , cụ_thể như sau : “ điều 21 . thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh sĩ1 . thời_hạn phục_vụ tại_ngũ trong thời_bình của hạ_sĩ_quan , binh_sĩ là 24 tháng . 2 . bộ_trưởng bộ quốc_phòng được quyết_định kéo_dài thời_hạn phục_vụ tại_ngũ của hạ_sĩ_quan , binh_sĩ nhưng không quá 06 tháng trong trường_hợp sau đây : a ) để bảo_đảm_nhiệm_vụ sẵn_sàng chiến_đấu ; b ) đang thực_hiện nhiệm_vụ phòng , chống thiên_tai , dịch_bệnh , cứu_hộ , cứu nạn . 3 . thời_hạn phục_vụ của hạ_sĩ_quan , binh_sĩ trong tình_trạng chiến_tranh hoặc tình_trạng khẩn_cấp về quốc_phòng được thực_hiện theo lệnh tổng_động_viên hoặc động_viên cục_bộ . ”</code> | <code>quy_định thời_gian phục_vụ tại_ngũ của hạ_sĩ_quan binh_sĩ như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### json
* Dataset: json
* Size: 918 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 918 samples:
| | positive | anchor |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 307.44 tokens</li><li>max: 3463 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 55.94 tokens</li><li>max: 383 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>theo quy_định tại khoản 9 điều 1 nghị_định số 161 / 2018 / nđ - cpngày 29 / 11 / 2018 của chính_phủ sửa_đổi , bổ_sung một_số quy_định về tuyển_dụng công_chức , viên_chức , nâng ngạch công_chức , thăng_hạng viên_chức và thực_hiện chế_độ hợp_đồng một_số loại công_việc trong cơ_quan hành_chính nhà_nước , đơn_vị sự_nghiệp công_lập thì đối_tượng và điểm ưu_tiên trong thi_tuyển hoặc xét tuyển công_chức : <br> + anh_hùng lực_lượng vũ_trang , anh_hùng lao_động , thương_binh , người hưởng chính_sách như thương_binh , thương_binh loại b : được cộng 7,5 điểm vào kết_quả điểm thi tại vòng 2 ; <br> + người dân_tộc_thiểu_số , sĩ_quan quân_đội , sĩ_quan công_an , quân_nhân chuyên_nghiệp , người làm công_tác cơ_yếu chuyển ngành , con liệt_sĩ , con thương_binh , con bệnh_binh , con của người hưởng chính_sách như thương_binh , con của thương_binh loại b , con của người hoạt_động cách_mạng trước tổng_khởi_nghĩa ( từ ngày 19 / 8 / 1945 trở về trước ) , con_đẻ của người hoạt_động kháng_chiến bị nhiễm chất_độc_hó...</code> | <code>đề_nghị cho tôi được biết đối_tượng được hưởng ưu_tiên trong tuyển_dụng công_chức ?</code> |
| <code>1 . khi phát_hiện tổ_chức , cá_nhân kê_khai hồ_sơ , thông_báo không_trung_thực hoặc vi_phạm_quy_định tại điều 8 nghị_định số 23 / 2019 / nđ - cp , cơ_quan tiếp_nhận thông_báo yêu_cầu tạm dừng hoạt_động triển_lãm bằng văn_bản ( mẫu_số 03 tại phụ_lục ban_hành kèm theo nghị_định số 23 / 2019 / nđ - cp ) . 2 . tổ_chức , cá_nhân phải dừng hoạt_động triển_lãm ngay khi nhận được văn_bản của cơ_quan có thẩm_quyền ; kịp_thời_khắc_phục hậu_quả , đề_xuất phương_án tiếp_tục tổ_chức triển_lãm gửi cơ_quan có thẩm_quyền xem_xét , quyết_định . 3 . kết_quả xem_xét , quyết_định của cơ_quan có thẩm_quyền phải được thể_hiện bằng văn_bản và gửi cho tổ_chức , cá_nhân biết để thực_hiện . thời_hạn gửi văn_bản cho tổ_chức , cá_nhân là 03 ngày , kể từ ngày cơ_quan có thẩm_quyền nhận được văn_bản đề_xuất phương_án tiếp_tục tổ_chức triển_lãm của tổ_chức , cá_nhân .</code> | <code>những lý_do nào khiến hoạt_động triển_lãm bị tạm dừng ?</code> |
| <code>theo quy_định tại khoản 1 điều 33 luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ và điểm a khoản 4 điều 3 thông_tư số 16 / 2018 / tt - bca ngày 15 / 5 / 2018 của bộ công_an quy_định chi_tiết thi_hành một_số điều của luật quản_lý , sử_dụng vũ_khí , vật_liệu nổ và công_cụ hỗ_trợ thì thủ_tục đề_nghị cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng được thực_hiện như sau : a ) hồ_sơ đề_nghị bao_gồm : văn_bản đề_nghị nêu rõ số_lượng , chủng_loại , nguồn_gốc xuất_xứ của vũ_khí cần vận_chuyển ; nơi đi , nơi đến , thời_gian và tuyến đường vận_chuyển ; họ và tên , địa_chỉ của người chịu trách_nhiệm vận_chuyển , người điều_khiển phương_tiện ; biển kiểm_soát của phương_tiện ; giấy giới_thiệu kèm theo bản_sao thẻ căn_cước công_dân , chứng_minh nhân_dân , hộ_chiếu hoặc chứng_minh công_an nhân_dân của người đến liên_hệ ; b ) hồ_sơ lập thành 01 bộ và nộp tại cục cảnh_sát qlhc về ttxh ; c ) trong thời_hạn 05 n...</code> | <code>thủ_tục cấp giấy_phép vận_chuyển vũ_khí quân_dụng đối_với cơ_quan , tổ_chức ở trung_ương không thuộc phạm_vi quản_lý của bộ quốc_phòng thực_hiện như thế_nào ?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
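In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`. A minimal sketch, assuming the base model is loaded fresh (`trust_remote_code=True` because the gte backbone ships custom modeling code):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# In-batch-negatives ranking loss, applied at each Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512],
    matryoshka_weights=[1, 1],
)
```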
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
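Taken together, these settings correspond to a trainer setup roughly like the sketch below. `train_ds` and `eval_ds` stand in for the dataset splits described above, and `model` and `loss` are the objects from the loss sketch earlier; `save_strategy="epoch"` is an assumption, since `load_best_model_at_end=True` requires the save and evaluation strategies to match:

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="gte-multilingual-base-Matryoshka-2e-9k",
    num_train_epochs=2,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss=loss,
)
trainer.train()
```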
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:---------------:|:----------------------:|:----------------------:|
| 0 | 0 | - | - | 0.5887 | 0.5836 |
| 0.0048 | 10 | 0.1624 | - | - | - |
| 0.0097 | 20 | 0.1758 | - | - | - |
| 0.0145 | 30 | 0.2834 | - | - | - |
| 0.0194 | 40 | 0.2292 | - | - | - |
| 0.0242 | 50 | 0.1794 | - | - | - |
| 0.0291 | 60 | 0.4504 | - | - | - |
| 0.0339 | 70 | 0.2513 | - | - | - |
| 0.0387 | 80 | 0.1179 | - | - | - |
| 0.0436 | 90 | 0.1019 | - | - | - |
| 0.0484 | 100 | 0.1249 | - | - | - |
| 0.0533 | 110 | 0.0231 | - | - | - |
| 0.0581 | 120 | 0.1119 | - | - | - |
| 0.0630 | 130 | 0.2114 | - | - | - |
| 0.0678 | 140 | 0.0779 | - | - | - |
| 0.0726 | 150 | 0.0785 | - | - | - |
| 0.0775 | 160 | 0.2304 | - | - | - |
| 0.0823 | 170 | 0.089 | - | - | - |
| 0.0872 | 180 | 0.055 | - | - | - |
| 0.0920 | 190 | 0.2606 | - | - | - |
| 0.0969 | 200 | 0.0277 | - | - | - |
| 0.1017 | 210 | 0.0455 | - | - | - |
| 0.1065 | 220 | 0.2025 | - | - | - |
| 0.1114 | 230 | 0.187 | - | - | - |
| 0.1162 | 240 | 0.036 | - | - | - |
| 0.1211 | 250 | 0.0707 | - | - | - |
| 0.1259 | 260 | 0.0741 | - | - | - |
| 0.1308 | 270 | 0.0408 | - | - | - |
| 0.1356 | 280 | 0.068 | - | - | - |
| 0.1404 | 290 | 0.0423 | - | - | - |
| 0.1453 | 300 | 0.1999 | - | - | - |
| 0.1501 | 310 | 0.0113 | - | - | - |
| 0.1550 | 320 | 0.1331 | - | - | - |
| 0.1598 | 330 | 0.3229 | - | - | - |
| 0.1646 | 340 | 0.055 | - | - | - |
| 0.1695 | 350 | 0.0381 | - | - | - |
| 0.1743 | 360 | 0.0737 | - | - | - |
| 0.1792 | 370 | 0.0232 | - | - | - |
| 0.1840 | 380 | 0.025 | - | - | - |
| 0.1889 | 390 | 0.0672 | - | - | - |
| 0.1937 | 400 | 0.1074 | - | - | - |
| 0.1985 | 410 | 0.0353 | - | - | - |
| 0.2034 | 420 | 0.0354 | - | - | - |
| 0.2082 | 430 | 0.0621 | - | - | - |
| 0.2131 | 440 | 0.0182 | - | - | - |
| 0.2179 | 450 | 0.0181 | - | - | - |
| 0.2228 | 460 | 0.0838 | - | - | - |
| 0.2276 | 470 | 0.253 | - | - | - |
| 0.2324 | 480 | 0.0571 | - | - | - |
| 0.2373 | 490 | 0.0024 | - | - | - |
| 0.2421 | 500 | 0.007 | - | - | - |
| 0.2470 | 510 | 0.009 | - | - | - |
| 0.2518 | 520 | 0.0454 | - | - | - |
| 0.2567 | 530 | 0.087 | - | - | - |
| 0.2615 | 540 | 0.0319 | - | - | - |
| 0.2663 | 550 | 0.0124 | - | - | - |
| 0.2712 | 560 | 0.0865 | - | - | - |
| 0.2760 | 570 | 0.0534 | - | - | - |
| 0.2809 | 580 | 0.09 | - | - | - |
| 0.2857 | 590 | 0.038 | - | - | - |
| 0.2906 | 600 | 0.1974 | - | - | - |
| 0.2954 | 610 | 0.0022 | - | - | - |
| 0.3002 | 620 | 0.0099 | - | - | - |
| 0.3051 | 630 | 0.0334 | - | - | - |
| 0.3099 | 640 | 0.1995 | - | - | - |
| 0.3148 | 650 | 0.0045 | - | - | - |
| 0.3196 | 660 | 0.0678 | - | - | - |
| 0.3245 | 670 | 0.0579 | - | - | - |
| 0.3293 | 680 | 0.0028 | - | - | - |
| 0.3341 | 690 | 0.0642 | - | - | - |
| 0.3390 | 700 | 0.0052 | - | - | - |
| 0.3438 | 710 | 0.0273 | - | - | - |
| 0.3487 | 720 | 0.0027 | - | - | - |
| 0.3535 | 730 | 0.0592 | - | - | - |
| 0.3584 | 740 | 0.0009 | - | - | - |
| 0.3632 | 750 | 0.0526 | - | - | - |
| 0.3680 | 760 | 0.0124 | - | - | - |
| 0.3729 | 770 | 0.0053 | - | - | - |
| 0.3777 | 780 | 0.0227 | - | - | - |
| 0.3826 | 790 | 0.3092 | - | - | - |
| 0.3874 | 800 | 0.1404 | - | - | - |
| 0.3923 | 810 | 0.1714 | - | - | - |
| 0.3971 | 820 | 0.0396 | - | - | - |
| 0.4019 | 830 | 0.0544 | - | - | - |
| 0.4068 | 840 | 0.0703 | - | - | - |
| 0.4116 | 850 | 0.0765 | - | - | - |
| 0.4165 | 860 | 0.0069 | - | - | - |
| 0.4213 | 870 | 0.0094 | - | - | - |
| 0.4262 | 880 | 0.0231 | - | - | - |
| 0.4310 | 890 | 0.0129 | - | - | - |
| 0.4358 | 900 | 0.0091 | - | - | - |
| 0.4407 | 910 | 0.0006 | - | - | - |
| 0.4455 | 920 | 0.0194 | - | - | - |
| 0.4504 | 930 | 0.009 | - | - | - |
| 0.4552 | 940 | 0.0123 | - | - | - |
| 0.4600 | 950 | 0.0325 | - | - | - |
| 0.4649 | 960 | 0.0004 | - | - | - |
| 0.4697 | 970 | 0.004 | - | - | - |
| 0.4746 | 980 | 0.078 | - | - | - |
| 0.4794 | 990 | 0.0934 | - | - | - |
| 0.4843 | 1000 | 0.0732 | - | - | - |
| 0.4891 | 1010 | 0.0888 | - | - | - |
| 0.4939 | 1020 | 0.3093 | - | - | - |
| 0.4988 | 1030 | 0.0506 | - | - | - |
| 0.5036 | 1040 | 0.0016 | - | - | - |
| 0.5085 | 1050 | 0.0022 | - | - | - |
| 0.5133 | 1060 | 0.1189 | - | - | - |
| 0.5182 | 1070 | 0.1335 | - | - | - |
| 0.5230 | 1080 | 0.0043 | - | - | - |
| 0.5278 | 1090 | 0.0099 | - | - | - |
| 0.5327 | 1100 | 0.073 | - | - | - |
| 0.5375 | 1110 | 0.001 | - | - | - |
| 0.5424 | 1120 | 0.0079 | - | - | - |
| 0.5472 | 1130 | 0.0274 | - | - | - |
| 0.5521 | 1140 | 0.0009 | - | - | - |
| 0.5569 | 1150 | 0.0132 | - | - | - |
| 0.5617 | 1160 | 0.0673 | - | - | - |
| 0.5666 | 1170 | 0.0024 | - | - | - |
| 0.5714 | 1180 | 0.0586 | - | - | - |
| 0.5763 | 1190 | 0.1613 | - | - | - |
| 0.5811 | 1200 | 0.0193 | - | - | - |
| 0.5860 | 1210 | 0.01 | - | - | - |
| 0.5908 | 1220 | 0.0268 | - | - | - |
| 0.5956 | 1230 | 0.0857 | - | - | - |
| 0.6005 | 1240 | 0.0131 | - | - | - |
| 0.6053 | 1250 | 0.1802 | - | - | - |
| 0.6102 | 1260 | 0.0209 | - | - | - |
| 0.6150 | 1270 | 0.0022 | - | - | - |
| 0.6199 | 1280 | 0.1448 | - | - | - |
| 0.6247 | 1290 | 0.0977 | - | - | - |
| 0.6295 | 1300 | 0.0018 | - | - | - |
| 0.6344 | 1310 | 0.0153 | - | - | - |
| 0.6392 | 1320 | 0.0456 | - | - | - |
| 0.6441 | 1330 | 0.011 | - | - | - |
| 0.6489 | 1340 | 0.0405 | - | - | - |
| 0.6538 | 1350 | 0.1873 | - | - | - |
| 0.6586 | 1360 | 0.0497 | - | - | - |
| 0.6634 | 1370 | 0.0007 | - | - | - |
| 0.6683 | 1380 | 0.076 | - | - | - |
| 0.6731 | 1390 | 0.0007 | - | - | - |
| 0.6780 | 1400 | 0.0126 | - | - | - |
| 0.6828 | 1410 | 0.0559 | - | - | - |
| 0.6877 | 1420 | 0.2497 | - | - | - |
| 0.6925 | 1430 | 0.011 | - | - | - |
| 0.6973 | 1440 | 0.0069 | - | - | - |
| 0.7022 | 1450 | 0.1541 | - | - | - |
| 0.7070 | 1460 | 0.0018 | - | - | - |
| 0.7119 | 1470 | 0.0186 | - | - | - |
| 0.7167 | 1480 | 0.0162 | - | - | - |
| 0.7215 | 1490 | 0.0129 | - | - | - |
| 0.7264 | 1500 | 0.0068 | - | - | - |
| 0.7312 | 1510 | 0.0015 | - | - | - |
| 0.7361 | 1520 | 0.0029 | - | - | - |
| 0.7409 | 1530 | 0.0008 | - | - | - |
| 0.7458 | 1540 | 0.0044 | - | - | - |
| 0.7506 | 1550 | 0.002 | - | - | - |
| 0.7554 | 1560 | 0.0062 | - | - | - |
| 0.7603 | 1570 | 0.0326 | - | - | - |
| 0.7651 | 1580 | 0.0936 | - | - | - |
| 0.7700 | 1590 | 0.0015 | - | - | - |
| 0.7748 | 1600 | 0.1819 | - | - | - |
| 0.7797 | 1610 | 0.0948 | - | - | - |
| 0.7845 | 1620 | 0.0659 | - | - | - |
| 0.7893 | 1630 | 0.0023 | - | - | - |
| 0.7942 | 1640 | 0.0014 | - | - | - |
| 0.7990 | 1650 | 0.0252 | - | - | - |
| 0.8039 | 1660 | 0.0067 | - | - | - |
| 0.8087 | 1670 | 0.032 | - | - | - |
| 0.8136 | 1680 | 0.001 | - | - | - |
| 0.8184 | 1690 | 0.0114 | - | - | - |
| 0.8232 | 1700 | 0.0096 | - | - | - |
| 0.8281 | 1710 | 0.0002 | - | - | - |
| 0.8329 | 1720 | 0.0894 | - | - | - |
| 0.8378 | 1730 | 0.006 | - | - | - |
| 0.8426 | 1740 | 0.0017 | - | - | - |
| 0.8475 | 1750 | 0.0215 | - | - | - |
| 0.8523 | 1760 | 0.0311 | - | - | - |
| 0.8571 | 1770 | 0.1813 | - | - | - |
| 0.8620 | 1780 | 0.0052 | - | - | - |
| 0.8668 | 1790 | 0.0016 | - | - | - |
| 0.8717 | 1800 | 0.0266 | - | - | - |
| 0.8765 | 1810 | 0.0048 | - | - | - |
| 0.8814 | 1820 | 0.0019 | - | - | - |
| 0.8862 | 1830 | 0.0689 | - | - | - |
| 0.8910 | 1840 | 0.0066 | - | - | - |
| 0.8959 | 1850 | 0.0189 | - | - | - |
| 0.9007 | 1860 | 0.0039 | - | - | - |
| 0.9056 | 1870 | 0.0109 | - | - | - |
| 0.9104 | 1880 | 0.0033 | - | - | - |
| 0.9153 | 1890 | 0.0196 | - | - | - |
| 0.9201 | 1900 | 0.0004 | - | - | - |
| 0.9249 | 1910 | 0.0061 | - | - | - |
| 0.9298 | 1920 | 0.0013 | - | - | - |
| 0.9346 | 1930 | 0.0262 | - | - | - |
| 0.9395 | 1940 | 0.0633 | - | - | - |
| 0.9443 | 1950 | 0.0068 | - | - | - |
| 0.9492 | 1960 | 0.0118 | - | - | - |
| 0.9540 | 1970 | 0.0056 | - | - | - |
| 0.9588 | 1980 | 0.0037 | - | - | - |
| 0.9637 | 1990 | 0.0238 | - | - | - |
| 0.9685 | 2000 | 0.002 | - | - | - |
| 0.9734 | 2010 | 0.002 | - | - | - |
| 0.9782 | 2020 | 0.1351 | - | - | - |
| 0.9831 | 2030 | 0.0013 | - | - | - |
| 0.9879 | 2040 | 0.0002 | - | - | - |
| 0.9927 | 2050 | 0.0647 | - | - | - |
| 0.9976 | 2060 | 0.0085 | - | - | - |
| 1.0 | 2065 | - | 0.0219 | 0.7058 | 0.7040 |
| 1.0024 | 2070 | 0.0087 | - | - | - |
| 1.0073 | 2080 | 0.0668 | - | - | - |
| 1.0121 | 2090 | 0.0118 | - | - | - |
| 1.0169 | 2100 | 0.0008 | - | - | - |
| 1.0218 | 2110 | 0.0051 | - | - | - |
| 1.0266 | 2120 | 0.0019 | - | - | - |
| 1.0315 | 2130 | 0.0074 | - | - | - |
| 1.0363 | 2140 | 0.088 | - | - | - |
| 1.0412 | 2150 | 0.0009 | - | - | - |
| 1.0460 | 2160 | 0.001 | - | - | - |
| 1.0508 | 2170 | 0.0006 | - | - | - |
| 1.0557 | 2180 | 0.0006 | - | - | - |
| 1.0605 | 2190 | 0.2754 | - | - | - |
| 1.0654 | 2200 | 0.0127 | - | - | - |
| 1.0702 | 2210 | 0.0017 | - | - | - |
| 1.0751 | 2220 | 0.0242 | - | - | - |
| 1.0799 | 2230 | 0.0393 | - | - | - |
| 1.0847 | 2240 | 0.0053 | - | - | - |
| 1.0896 | 2250 | 0.0133 | - | - | - |
| 1.0944 | 2260 | 0.0191 | - | - | - |
| 1.0993 | 2270 | 0.0446 | - | - | - |
| 1.1041 | 2280 | 0.0015 | - | - | - |
| 1.1090 | 2290 | 0.0012 | - | - | - |
| 1.1138 | 2300 | 0.0028 | - | - | - |
| 1.1186 | 2310 | 0.0244 | - | - | - |
| 1.1235 | 2320 | 0.0026 | - | - | - |
| 1.1283 | 2330 | 0.0199 | - | - | - |
| 1.1332 | 2340 | 0.0176 | - | - | - |
| 1.1380 | 2350 | 0.0029 | - | - | - |
| 1.1429 | 2360 | 0.0684 | - | - | - |
| 1.1477 | 2370 | 0.0006 | - | - | - |
| 1.1525 | 2380 | 0.0017 | - | - | - |
| 1.1574 | 2390 | 0.0189 | - | - | - |
| 1.1622 | 2400 | 0.0094 | - | - | - |
| 1.1671 | 2410 | 0.0016 | - | - | - |
| 1.1719 | 2420 | 0.0013 | - | - | - |
| 1.1768 | 2430 | 0.0579 | - | - | - |
| 1.1816 | 2440 | 0.0002 | - | - | - |
| 1.1864 | 2450 | 0.0168 | - | - | - |
| 1.1913 | 2460 | 0.044 | - | - | - |
| 1.1961 | 2470 | 0.1015 | - | - | - |
| 1.2010 | 2480 | 0.0012 | - | - | - |
| 1.2058 | 2490 | 0.0044 | - | - | - |
| 1.2107 | 2500 | 0.0003 | - | - | - |
| 1.2155 | 2510 | 0.0042 | - | - | - |
| 1.2203 | 2520 | 0.0038 | - | - | - |
| 1.2252 | 2530 | 0.002 | - | - | - |
| 1.2300 | 2540 | 0.001 | - | - | - |
| 1.2349 | 2550 | 0.0035 | - | - | - |
| 1.2397 | 2560 | 0.0001 | - | - | - |
| 1.2446 | 2570 | 0.0008 | - | - | - |
| 1.2494 | 2580 | 0.0029 | - | - | - |
| 1.2542 | 2590 | 0.0091 | - | - | - |
| 1.2591 | 2600 | 0.0011 | - | - | - |
| 1.2639 | 2610 | 0.0001 | - | - | - |
| 1.2688 | 2620 | 0.0004 | - | - | - |
| 1.2736 | 2630 | 0.0008 | - | - | - |
| 1.2785 | 2640 | 0.0057 | - | - | - |
| 1.2833 | 2650 | 0.0007 | - | - | - |
| 1.2881 | 2660 | 0.0001 | - | - | - |
| 1.2930 | 2670 | 0.0002 | - | - | - |
| 1.2978 | 2680 | 0.0001 | - | - | - |
| 1.3027 | 2690 | 0.0003 | - | - | - |
| 1.3075 | 2700 | 0.001 | - | - | - |
| 1.3123 | 2710 | 0.008 | - | - | - |
| 1.3172 | 2720 | 0.0003 | - | - | - |
| 1.3220 | 2730 | 0.0002 | - | - | - |
| 1.3269 | 2740 | 0.0005 | - | - | - |
| 1.3317 | 2750 | 0.0054 | - | - | - |
| 1.3366 | 2760 | 0.0008 | - | - | - |
| 1.3414 | 2770 | 0.0005 | - | - | - |
| 1.3462 | 2780 | 0.0002 | - | - | - |
| 1.3511 | 2790 | 0.0067 | - | - | - |
| 1.3559 | 2800 | 0.0004 | - | - | - |
| 1.3608 | 2810 | 0.0007 | - | - | - |
| 1.3656 | 2820 | 0.0003 | - | - | - |
| 1.3705 | 2830 | 0.0011 | - | - | - |
| 1.3753 | 2840 | 0.0003 | - | - | - |
| 1.3801 | 2850 | 0.002 | - | - | - |
| 1.3850 | 2860 | 0.026 | - | - | - |
| 1.3898 | 2870 | 0.0008 | - | - | - |
| 1.3947 | 2880 | 0.0003 | - | - | - |
| 1.3995 | 2890 | 0.0158 | - | - | - |
| 1.4044 | 2900 | 0.0035 | - | - | - |
| 1.4092 | 2910 | 0.0001 | - | - | - |
| 1.4140 | 2920 | 0.0002 | - | - | - |
| 1.4189 | 2930 | 0.0003 | - | - | - |
| 1.4237 | 2940 | 0.0005 | - | - | - |
| 1.4286 | 2950 | 0.0004 | - | - | - |
| 1.4334 | 2960 | 0.0003 | - | - | - |
| 1.4383 | 2970 | 0.0002 | - | - | - |
| 1.4431 | 2980 | 0.0003 | - | - | - |
| 1.4479 | 2990 | 0.0005 | - | - | - |
| 1.4528 | 3000 | 0.0005 | - | - | - |
| 1.4576 | 3010 | 0.0005 | - | - | - |
| 1.4625 | 3020 | 0.0001 | - | - | - |
| 1.4673 | 3030 | 0.0002 | - | - | - |
| 1.4722 | 3040 | 0.0001 | - | - | - |
| 1.4770 | 3050 | 0.0123 | - | - | - |
| 1.4818 | 3060 | 0.0021 | - | - | - |
| 1.4867 | 3070 | 0.0001 | - | - | - |
| 1.4915 | 3080 | 0.0029 | - | - | - |
| 1.4964 | 3090 | 0.0004 | - | - | - |
| 1.5012 | 3100 | 0.0008 | - | - | - |
| 1.5061 | 3110 | 0.0012 | - | - | - |
| 1.5109 | 3120 | 0.0003 | - | - | - |
| 1.5157 | 3130 | 0.0005 | - | - | - |
| 1.5206 | 3140 | 0.0003 | - | - | - |
| 1.5254 | 3150 | 0.0002 | - | - | - |
| 1.5303 | 3160 | 0.0008 | - | - | - |
| 1.5351 | 3170 | 0.0022 | - | - | - |
| 1.5400 | 3180 | 0.0001 | - | - | - |
| 1.5448 | 3190 | 0.0062 | - | - | - |
| 1.5496 | 3200 | 0.0011 | - | - | - |
| 1.5545 | 3210 | 0.0009 | - | - | - |
| 1.5593 | 3220 | 0.0005 | - | - | - |
| 1.5642 | 3230 | 0.0002 | - | - | - |
| 1.5690 | 3240 | 0.0001 | - | - | - |
| 1.5738 | 3250 | 0.0007 | - | - | - |
| 1.5787 | 3260 | 0.0005 | - | - | - |
| 1.5835 | 3270 | 0.0002 | - | - | - |
| 1.5884 | 3280 | 0.0002 | - | - | - |
| 1.5932 | 3290 | 0.0223 | - | - | - |
| 1.5981 | 3300 | 0.0007 | - | - | - |
| 1.6029 | 3310 | 0.0001 | - | - | - |
| 1.6077 | 3320 | 0.0015 | - | - | - |
| 1.6126 | 3330 | 0.0 | - | - | - |
| 1.6174 | 3340 | 0.0003 | - | - | - |
| 1.6223 | 3350 | 0.0023 | - | - | - |
| 1.6271 | 3360 | 0.0005 | - | - | - |
| 1.6320 | 3370 | 0.0003 | - | - | - |
| 1.6368 | 3380 | 0.0004 | - | - | - |
| 1.6416 | 3390 | 0.0004 | - | - | - |
| 1.6465 | 3400 | 0.0013 | - | - | - |
| 1.6513 | 3410 | 0.0043 | - | - | - |
| 1.6562 | 3420 | 0.0002 | - | - | - |
| 1.6610 | 3430 | 0.0002 | - | - | - |
| 1.6659 | 3440 | 0.0002 | - | - | - |
| 1.6707 | 3450 | 0.0004 | - | - | - |
| 1.6755 | 3460 | 0.0001 | - | - | - |
| 1.6804 | 3470 | 0.0002 | - | - | - |
| 1.6852 | 3480 | 0.0086 | - | - | - |
| 1.6901 | 3490 | 0.0014 | - | - | - |
| 1.6949 | 3500 | 0.0004 | - | - | - |
| 1.6998 | 3510 | 0.0001 | - | - | - |
| 1.7046 | 3520 | 0.0012 | - | - | - |
| 1.7094 | 3530 | 0.0004 | - | - | - |
| 1.7143 | 3540 | 0.0002 | - | - | - |
| 1.7191 | 3550 | 0.0001 | - | - | - |
| 1.7240 | 3560 | 0.0001 | - | - | - |
| 1.7288 | 3570 | 0.0015 | - | - | - |
| 1.7337 | 3580 | 0.0001 | - | - | - |
| 1.7385 | 3590 | 0.0004 | - | - | - |
| 1.7433 | 3600 | 0.0003 | - | - | - |
| 1.7482 | 3610 | 0.0002 | - | - | - |
| 1.7530 | 3620 | 0.0007 | - | - | - |
| 1.7579 | 3630 | 0.0011 | - | - | - |
| 1.7627 | 3640 | 0.0007 | - | - | - |
| 1.7676 | 3650 | 0.0004 | - | - | - |
| 1.7724 | 3660 | 0.0183 | - | - | - |
| 1.7772 | 3670 | 0.0043 | - | - | - |
| 1.7821 | 3680 | 0.0008 | - | - | - |
| 1.7869 | 3690 | 0.0001 | - | - | - |
| 1.7918 | 3700 | 0.0003 | - | - | - |
| 1.7966 | 3710 | 0.0004 | - | - | - |
| 1.8015 | 3720 | 0.0005 | - | - | - |
| 1.8063 | 3730 | 0.0003 | - | - | - |
| 1.8111 | 3740 | 0.0002 | - | - | - |
| 1.8160 | 3750 | 0.0009 | - | - | - |
| 1.8208 | 3760 | 0.0015 | - | - | - |
| 1.8257 | 3770 | 0.0002 | - | - | - |
| 1.8305 | 3780 | 0.0004 | - | - | - |
| 1.8354 | 3790 | 0.0001 | - | - | - |
| 1.8402 | 3800 | 0.0002 | - | - | - |
| 1.8450 | 3810 | 0.0039 | - | - | - |
| 1.8499 | 3820 | 0.0002 | - | - | - |
| 1.8547 | 3830 | 0.0015 | - | - | - |
| 1.8596 | 3840 | 0.0003 | - | - | - |
| 1.8644 | 3850 | 0.0001 | - | - | - |
| 1.8692 | 3860 | 0.0001 | - | - | - |
| 1.8741 | 3870 | 0.0048 | - | - | - |
| 1.8789 | 3880 | 0.0002 | - | - | - |
| 1.8838 | 3890 | 0.0003 | - | - | - |
| 1.8886 | 3900 | 0.0003 | - | - | - |
| 1.8935 | 3910 | 0.0003 | - | - | - |
| 1.8983 | 3920 | 0.0006 | - | - | - |
| 1.9031 | 3930 | 0.0003 | - | - | - |
| 1.9080 | 3940 | 0.0002 | - | - | - |
| 1.9128 | 3950 | 0.0003 | - | - | - |
| 1.9177 | 3960 | 0.0013 | - | - | - |
| 1.9225 | 3970 | 0.0002 | - | - | - |
| 1.9274 | 3980 | 0.0006 | - | - | - |
| 1.9322 | 3990 | 0.002 | - | - | - |
| 1.9370 | 4000 | 0.0005 | - | - | - |
| 1.9419 | 4010 | 0.0003 | - | - | - |
| 1.9467 | 4020 | 0.0003 | - | - | - |
| 1.9516 | 4030 | 0.0018 | - | - | - |
| 1.9564 | 4040 | 0.0003 | - | - | - |
| 1.9613 | 4050 | 0.0002 | - | - | - |
| 1.9661 | 4060 | 0.0062 | - | - | - |
| 1.9709 | 4070 | 0.0001 | - | - | - |
| 1.9758 | 4080 | 0.0068 | - | - | - |
| 1.9806 | 4090 | 0.0006 | - | - | - |
| 1.9855 | 4100 | 0.0002 | - | - | - |
| 1.9903 | 4110 | 0.0003 | - | - | - |
| 1.9952 | 4120 | 0.0002 | - | - | - |
| **2.0** | **4130** | **0.0028** | **0.0194** | **0.7247** | **0.721** |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.4.0
- Accelerate: 0.29.3
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Alibaba-NLP/gte-multilingual-base", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:8259", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "theo quy_định tại điều 35 nghị_định số 201 / 2013 / nđ - cp thì thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , khai_thác nước dưới đất như sau : 1 . tiếp_nhận và kiểm_tra hồ_sơ : trong thời_hạn mười ( 10 ) ngày làm_việc , kể từ ngày nhận hồ_sơ , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm xem_xét , kiểm_tra hồ_sơ . trường_hợp hồ_sơ không hợp_lệ , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để bổ_sung , hoàn_thiện hồ_sơ theo quy_định . trường_hợp hồ_sơ sau khi đã bổ_sung mà vẫn không đáp_ứng yêu_cầu theo quy_định thì cơ_quan tiếp_nhận hồ_sơ trả lại hồ_sơ và thông_báo rõ lý_do cho tổ_chức , cá_nhân đề_nghị cấp phép . 2 . thẩm_định_đề_án , báo_cáo thăm_dò , khai_thác , sử_dụng tài_nguyên nước , xả nước_thải vào nguồn nước trong hồ_sơ đề_nghị cấp phép ( sau đây gọi chung là đề_án , báo_cáo ) : a ) trong thời_hạn ba_mươi ( 30 ) ngày làm_việc , kể từ ngày nhận đủ hồ_sơ hợp_lệ theo quy_định tại khoản 1 điều này , cơ_quan tiếp_nhận hồ_sơ có trách_nhiệm thẩm_định_đề_án , báo_cáo ; nếu cần_thiết thì kiểm_tra thực_tế hiện_trường , lập hội_đồng thẩm_định_đề_án , báo_cáo . trường_hợp đủ điều_kiện cấp phép , cơ_quan tiếp_nhận hồ_sơ trình cơ_quan có thẩm_quyền cấp giấy_phép ; trường_hợp không đủ điều_kiện để cấp phép thì trả lại hồ_sơ cho tổ_chức , cá_nhân đề_nghị cấp phép và thông_báo lý_do không cấp phép ; b ) trường_hợp phải bổ_sung , chỉnh_sửa để hoàn_thiện đề_án , báo_cáo thì cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép nêu rõ những nội_dung cần bổ_sung , hoàn_thiện đề_án , báo_cáo . thời_gian bổ_sung , hoàn_thiện hoặc lập lại đề_án , báo_cáo không tính vào thời_gian thẩm_định_đề_án , báo_cáo . thời_gian thẩm_định sau khi đề_án , báo_cáo được bổ_sung hoàn_chỉnh là hai mươi ( 20 ) ngày làm_việc ; c ) trường_hợp phải lập lại đề_án , báo_cáo , cơ_quan tiếp_nhận hồ_sơ gửi văn_bản thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép nêu rõ những nội_dung đề_án , báo_cáo chưa đạt yêu_cầu , phải làm lại và trả lại hồ_sơ đề_nghị cấp phép . 3 . 
trả kết_quả giải_quyết hồ_sơ_cấp phéptrong thời_hạn năm ( 05 ) ngày làm_việc , kể từ ngày nhận được giấy_phép của cơ_quan có thẩm_quyền , cơ_quan tiếp_nhận hồ_sơ thông_báo cho tổ_chức , cá_nhân đề_nghị cấp phép để thực_hiện nghĩa_vụ tài_chính và nhận giấy_phép .", "sentences": ["ai có thẩm_quyền giải_quyết tố_cáo hành_vi vi_phạm_pháp_luật trong thực_hiện nhiệm_vụ , công_vụ của cán_bộ , công_chức , viên_chức ?", "thời_hạn giải_quyết thủ_tục hành_chính về cấp giấy_phép thăm_dò , giấy_phép khai_thác nước dưới đất ?", "tôi có_thể đăng_ký ngành , nghề kinh_doanh không có trong hệ_thống ngành kinh_tế việt_nam không ?"]}, {"source_sentence": "khoản 2 điều 2 thông_tư 30 quy_định , đoàn khám bệnh , chữa bệnh_nhân_đạo là một nhóm nhân_viên y_tế trong nước , nước_ngoài do cá_nhân , tổ_chức trong nước , nước_ngoài tổ_chức để khám bệnh , chữa bệnh_nhân_đạo cho nhân_dân .", "sentences": ["tàu_thủy lưu_trú du_lịch có tiêu_chí xếp_hạng bắt_buộc không ?", "tôi muốn xin cấp lại sổ thuyền_viên thì cần những hồ_sơ gì ?", "đoàn khám bệnh , chữa bệnh_nhân_đạo là gì ?"]}, {"source_sentence": "thành_phần hồ_sơ thực_hiện tthc sửa_đổi , bổ_sung / cấp lại giấy chứng_nhận lưu_hành tự_do ( cfs ) đối_với hàng_hóa xuất_khẩu thuộc phạm_vi quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ( quy_định tại quyết_định số 1312 / qđ - bnn - qlcl ngày 22 / 4 / 2019 về việc công_bố thủ_tục hành_chính được thay_thế , tthc bị bãi_bỏ lĩnh_vực quản_lý chất_lượng nông_lâm_sản và thủy_sản thuộc phạm_vi chức_năng quản_lý của bộ nông_nghiệp và phát_triển nông_thôn ) : - văn_bản đề_nghị sửa_đổi , bổ_sung / cấp lại cfs của thương_nhân : 01 bản_chính . - các giấy_tờ liên_quan đến việc sửa_đổi , bổ_sung / cấp lại cfs.", "sentences": ["kính gửi cục quản_lý chất_lượng nông_lâm_sản và thuỷ_sản , công_ty tôi đã được quý cục cấp giấy chứng_nhận lưu_hành tự_do cfs , nay tôi muốn sửa_đổi một_số thông_tin trên giấy cfs , vậy hồ_sơ đề_nghị sửa_đổi cần những gì ?", "thời_gian để được cấp quyết_định chỉ_định tổ_chức đánh_giá sự phù_hợp lĩnh_vực sản_phẩm , hàng_hóa_vật_liệu xây_dựng đối_với trường_hợp thay_đổi , bổ_sung phạm_vi , lĩnh_vực được chỉ_định", "hồ_sơ đăng_ký dự_tuyển lao_động của người lao_động gồm những gì ?"]}, {"source_sentence": "hồ_sơ đề_nghị gia_hạn giấy_phép cung_cấp dịch_vụ trò_chơi điện_tử g1 trên mạng bao_gồm : - đơn đề_nghị gia_hạn giấy_phép bao_gồm : tên , địa_chỉ của doanh_nghiệp ; số , nơi cấp , ngày cấp giấy chứng_nhận đăng_ký doanh_nghiệp ; số , ngày cấp , ngày hiệu_lực của giấy_phép đã được cấp ; lý_do đề_nghị gia_hạn giấy_phép ; cam_kết của doanh_nghiệp về các nội_dung kê_khai trong đơn đề_nghị .", "sentences": ["thành_phần hồ_sơ", "nhà đầu_tư phải nộp bao_nhiêu bộ hồ_sơ đề_nghị thủ_tướng chính_phủ cho phép đầu_tư gián_tiếp ra nước_ngoài đối_với các trường_hợp đầu_tư khác quy_định tại điều 9 nghị_định 135 / 2015 / nđ - cp ?", "phần vốn của thành_viên chưa góp vốn trong thời_hạn 90 ngày , kể từ ngày được cấp giấy chứng_nhận đăng_ký doanh_nghiệp trong công_ty tnhh hai thành_viên trở lên được xử_lý như thế_nào ? trường_hợp có người nhận mua lại phần vốn chưa góp này thì công_ty có phải đăng_ký thay_đổi thành_viên với cơ_quan đăng_ký kinh_doanh không ? hồ_sơ , thủ_tục được quy_định như thế_nào ?"]}, {"source_sentence": "quản_lý nhập_khẩu hàng_hóa bị điều_tra , phục_vụ công_tác điều_tra , áp_dụng biện_pháp phòng_vệ thương_mại .", "sentences": ["mục_đích của thủ_tục khai_báo là gì ?", "đơn_vị chúng_tôi đã được cấp chứng_chỉ năng_lực hoạt_động xây_dựng nhưng hiện_nay chúng_tôi thay_đổi người đại_diện pháp_luật của đơn_vị . 
vậy chúng_tôi có phải làm thủ_tục nào để thay_đổi người đại_diện theo pháp_luật của chúng_tôi trên chứng_chỉ ?", "trường_hợp nào thì được cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ , thủ_tục cấp lại giấy xác_nhận đăng_ký công_cụ hỗ_trợ như thế_nào ?"]}], "model-index": [{"name": "SentenceTransformer based on Alibaba-NLP/gte-multilingual-base", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5925925925925926, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7516339869281046, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8071895424836601, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8529411764705882, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5925925925925926, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.25054466230936817, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.161437908496732, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0852941176470588, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5925925925925926, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7516339869281046, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8071895424836601, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8529411764705882, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7246556077543799, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6832935643393162, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6881841341907915, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.5893246187363834, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.7527233115468409, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.8082788671023965, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.8464052287581699, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.5893246187363834, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.25090777051561364, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.16165577342047927, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.084640522875817, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.5893246187363834, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.7527233115468409, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.8082788671023965, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.8464052287581699, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.7209821829203035, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.6803091607013181, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.6856356097551453, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,211 |
MultiBertGunjanPatrick/multiberts-seed-1-1700k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-10-04T05:01:38+00:00
| 103 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-1
---
# MultiBERTs Seed 1 Checkpoint 1700k (uncased)
This is the seed-1 MultiBERTs (pretrained BERT) model at intermediate checkpoint 1700k, pretrained on English with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load tokenizer and model from the Hub, using the full repository id
# (including the namespace this checkpoint is published under)
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-1700k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-1700k')

text = "Replace me by any text you'd like."
# Tokenize to PyTorch tensors, then run the encoder to get hidden states
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
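You can also query the MLM head directly through the `fill-mask` pipeline — a hedged sketch (the id below again assumes this repository's namespace; since this is an intermediate checkpoint intended for fine-tuning, raw predictions are illustrative only):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-1-1700k')
unmasker("Hello, I'm a [MASK] model.")
```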
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
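A minimal sketch of this 80% / 10% / 10% rule, in the spirit of the masking collators in the Transformers library — the helper below is hypothetical and, for brevity, does not exclude special tokens the way a real implementation would:

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Apply BERT-style MLM masking to a batch of token ids (sketch)."""
    labels = input_ids.clone()
    # Select 15% of tokens as prediction targets
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked] = -100  # loss is computed only on masked positions

    # 80% of targets: replace with [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = tokenizer.mask_token_id

    # 10% of targets: replace with a random token
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[randomized] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[randomized]

    # Remaining 10% of targets: keep the original token
    return input_ids, labels
```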
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| null |
Non_BioNLP
|
|
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,212 |
MMMahender/Summary_model
|
MMMahender
|
summarization
|
[
"summarization",
"arxiv:1910.09700",
"region:us"
] | 2023-08-11T10:58:03Z |
2023-08-11T10:59:57+00:00
| 0 | 0 |
---
pipeline_tag: summarization
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 42,213 |
spacemanidol/flan-t5-small-2-6-cnndm
|
spacemanidol
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-02-19T13:52:39Z |
2023-03-06T17:04:31+00:00
| 14 | 0 |
---
datasets:
- cnn_dailymail
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: small-2-6
results:
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail 3.0.0
type: cnn_dailymail
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- type: rouge
value: 36.8241
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-2-6
This model is a fine-tuned version of [cnn/small-2-6/](https://huggingface.co/cnn/small-2-6/) on the cnn_dailymail 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8186
- Rouge1: 36.8241
- Rouge2: 15.5366
- Rougel: 25.7939
- Rougelsum: 33.9987
- Gen Len: 77.7747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto the Trainer API):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
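As a hedged reconstruction (the original training script is not included here), these settings map onto the `transformers` Trainer API roughly as follows:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above; not the original script.
args = Seq2SeqTrainingArguments(
    output_dir="small-2-6",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # 4 * 16 = 64 total train batch size
    lr_scheduler_type="constant",
    num_train_epochs=3.0,
    seed=42,
)
```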
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-2-6
This model is a fine-tuned version of [cnn/small-2-6/](https://huggingface.co/cnn/small-2-6/) on the cnn_dailymail 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8186
- Rouge1: 36.8241
- Rouge2: 15.5366
- Rougel: 25.7939
- Rougelsum: 33.9987
- Gen Len: 77.7747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.12.1
|
{"datasets": ["cnn_dailymail"], "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "small-2-6", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail 3.0.0", "type": "cnn_dailymail", "config": "3.0.0", "split": "validation", "args": "3.0.0"}, "metrics": [{"type": "rouge", "value": 36.8241, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 42,214 |
sapienzanlp/relik-reader-deberta-v3-base-aida
|
sapienzanlp
|
feature-extraction
|
[
"transformers",
"pytorch",
"relik-reader",
"feature-extraction",
"custom_code",
"en",
"arxiv:2408.00103",
"region:us"
] | 2024-06-04T17:32:45Z |
2024-08-05T09:55:41+00:00
| 252 | 0 |
---
language:
- en
---
<div align="center">
<img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150">
<img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50">
</div>
<div align="center">
<h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a>
<a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a>
<a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-2408.00103-b31b1b.svg"></a>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a>
<a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a>
<a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a>
</div>
A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**.
**This repository contains the weights for the ReLiK Reader component.**
## 🛠️ Installation
Installation from PyPI
```bash
pip install relik
```
<details>
<summary>Other installation options</summary>
#### Install with optional dependencies
Install with all the optional dependencies.
```bash
pip install relik[all]
```
Install with optional dependencies for training and evaluation.
```bash
pip install relik[train]
```
Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss)
FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package.
For CPU:
```bash
pip install relik[faiss]
```
For GPU:
```bash
conda create -n relik python=3.10
conda activate relik
# install pytorch
conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# GPU
conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0
# or GPU with NVIDIA RAFT
conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0
pip install relik
```
Install with optional dependencies for serving the models with
[FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html).
```bash
pip install relik[serve]
```
#### Installation from source
```bash
git clone https://github.com/SapienzaNLP/relik.git
cd relik
pip install -e .[all]
```
</details>
## 🚀 Quick Start
[//]: # (Write a short description of the model and how to use it with the `from_pretrained` method.)
ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**.
It is composed of two main components: a retriever and a reader.
The retriever is responsible for retrieving relevant documents from a large collection,
while the reader is responsible for extracting entities and relations from the retrieved documents.
ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline.
Here is an example of how to use ReLiK for **Entity Linking**:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput
relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {"text": "Michael Jordan", "id": 4484083},
                    {"text": "National Basketball Association", "id": 5209815},
                    {"text": "Walter Jordan", "id": 2340190},
                    {"text": "Jordan", "id": 3486773},
                    {"text": "50 Greatest Players in NBA History", "id": 1742909},
                    ...
                ]
            ]
        ]
    ),
)
```
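Relation Extraction follows the same pattern. A sketch, assuming one of the relation-extraction checkpoints from the collection linked below (the exact model id is an assumption):

```python
from relik import Relik
from relik.inference.data.objects import RelikOutput

# Model id is an assumption; see the collection for the published RE checkpoints.
relik = Relik.from_pretrained("sapienzanlp/relik-relation-extraction-nyt-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
print(relik_out.triples)
```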
## 📊 Performance
We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). The following table shows the results (InKB Micro F1) of ReLiK Large and Base:
| Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) |
|------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------|
| GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 |
| EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 |
| [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
| ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 |
Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128),
N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model.
GENRE uses mention dictionaries.
The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090,
except for EntQA which does not fit in 24GB of RAM and for which an A100 is used.
## 🤖 Models
Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19).
## 💽 Cite this work
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@inproceedings{orlando-etal-2024-relik,
title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget",
author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
}
```
| null |
Non_BioNLP
|
<div align="center">
<img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150">
<img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50">
</div>
<div align="center">
<h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a>
<a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a>
<a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-2408.00103-b31b1b.svg"></a>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a>
<a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a>
<a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a>
</div>
A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**.
**This repository contains the weights for the ReLiK Reader component.**
## 🛠️ Installation
Installation from PyPI
```bash
pip install relik
```
<details>
<summary>Other installation options</summary>
#### Install with optional dependencies
Install with all the optional dependencies.
```bash
pip install relik[all]
```
Install with optional dependencies for training and evaluation.
```bash
pip install relik[train]
```
Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss)
FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package.
For CPU:
```bash
pip install relik[faiss]
```
For GPU:
```bash
conda create -n relik python=3.10
conda activate relik
# install pytorch
conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# GPU
conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0
# or GPU with NVIDIA RAFT
conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0
pip install relik
```
Install with optional dependencies for serving the models with
[FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html).
```bash
pip install relik[serve]
```
#### Installation from source
```bash
git clone https://github.com/SapienzaNLP/relik.git
cd relik
pip install -e .[all]
```
</details>
## 🚀 Quick Start
[//]: # (Write a short description of the model and how to use it with the `from_pretrained` method.)
ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**.
It is composed of two main components: a retriever and a reader.
The retriever is responsible for retrieving relevant documents from a large collection,
while the reader is responsible for extracting entities and relations from the retrieved documents.
ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline.
Here is an example of how to use ReLiK for **Entity Linking**:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput
relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {"text": "Michael Jordan", "id": 4484083},
                    {"text": "National Basketball Association", "id": 5209815},
                    {"text": "Walter Jordan", "id": 2340190},
                    {"text": "Jordan", "id": 3486773},
                    {"text": "50 Greatest Players in NBA History", "id": 1742909},
                    ...
                ]
            ]
        ]
    ),
)
```
## 📊 Performance
We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). The following table shows the results (InKB Micro F1) of ReLiK Large and Base:
| Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) |
|------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------|
| GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 |
| EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 |
| [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
| ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 |
Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128),
N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model.
GENRE uses mention dictionaries.
The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090,
except for EntQA which does not fit in 24GB of RAM and for which an A100 is used.
## 🤖 Models
Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19).
## 💽 Cite this work
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@inproceedings{orlando-etal-2024-relik,
title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget",
author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
}
```
|
{"language": ["en"]}
|
task
|
[
"RELATION_EXTRACTION"
] | 42,215 |
SandLogicTechnologies/Llama-3.2-1B-Instruct-Medical-GGUF
|
SandLogicTechnologies
|
text-generation
|
[
"gguf",
"medical",
"SandLogic",
"Meta",
"Conversational",
"text-generation",
"en",
"dataset:bigbio/med_qa",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-27T17:36:22Z |
2024-09-27T17:59:13+00:00
| 64 | 0 |
---
base_model:
- meta-llama/Llama-3.2-1B-Instruct
datasets:
- bigbio/med_qa
language:
- en
license: llama3.2
pipeline_tag: text-generation
tags:
- medical
- SandLogic
- Meta
- Conversational
---
# SandLogic Technology - Quantized Llama-3.2-1B-Instruct-Medical-GGUF
## Model Description
We have quantized the Llama-3.2-1B-Instruct-Medical-GGUF model into two variants:
1. Q5_KM
2. Q4_KM
These quantized models offer improved efficiency while maintaining performance in medical-related tasks.
Discover our full range of quantized language models by visiting our [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub. To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com).
## Original Model Information
- **Base Model**: [Meta Llama 3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Developer**: Meta (base model)
- **Model Type**: Multilingual large language model (LLM)
- **Architecture**: Auto-regressive language model with optimized transformer architecture
- **Parameters**: 1 billion
- **Training Approach**: Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)
## Fine-tuning Details
- **Dataset**: [bigbio/med_qa](https://huggingface.co/datasets/bigbio/med_qa)
- **Languages**: English, simplified Chinese, and traditional Chinese
- **Dataset Size**:
- English: 12,723 questions
- Simplified Chinese: 34,251 questions
- Traditional Chinese: 14,123 questions
- **Data Type**: Free-form multiple-choice OpenQA for medical problems, collected from professional medical board exams
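The dataset can be loaded directly from the Hub; a minimal sketch (the config name is an assumption — check the dataset card for the exact configs):

```python
from datasets import load_dataset

# Config name is an assumption; recent `datasets` versions may also require trust_remote_code=True.
med_qa = load_dataset("bigbio/med_qa", name="med_qa_en_source")
print(med_qa["train"][0])
```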
## Model Capabilities
This model is optimized for medical-related dialogue and tasks, including:
- Answering medical questions
- Summarizing medical information
- Assisting with medical problem-solving
## Intended Use in Medical Domain
1. **Medical Education**: Assisting medical students in exam preparation and learning
2. **Clinical Decision Support**: Providing quick references for healthcare professionals
3. **Patient Education**: Explaining medical concepts in simple terms for patients
4. **Medical Literature Review**: Summarizing and extracting key information from medical texts
5. **Differential Diagnosis**: Assisting in generating potential diagnoses based on symptoms
6. **Medical Coding**: Aiding in the accurate coding of medical procedures and diagnoses
7. **Drug Information**: Providing information on medications, their uses, and potential interactions
8. **Medical Translation**: Assisting with medical translations across supported languages
## Quantized Variants
1. **Q5_KM**: 5-bit quantization using the KM method
2. **Q4_KM**: 4-bit quantization using the KM method
These quantized models aim to reduce model size and improve inference speed while maintaining performance as close to the original model as possible.
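For reference, GGUF variants like these are typically produced with llama.cpp's quantize tool. A sketch (the input file name is hypothetical, and older builds name the binary `quantize`):

```bash
# Paths are hypothetical; requires a local llama.cpp build
./llama-quantize Llama-3.2-1B-Medical-f16.gguf Llama-3.2-1B-Medical_Q4_KM.gguf Q4_K_M
./llama-quantize Llama-3.2-1B-Medical-f16.gguf Llama-3.2-1B-Medical_Q5_KM.gguf Q5_K_M
```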
## Usage
```bash
pip install llama-cpp-python
```
Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.
### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:
```python
from llama_cpp import Llama
llm = Llama(
model_path="./models/Llama-3.2-1B-Medical_Q4_KM.gguf",
verbose=False,
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm.create_chat_completion(
messages =[
{
"role": "system",
"content": """ You are a helpful, respectful and honest medical assistant. Yu are developed by SandLogic Technologies
Always answer as helpfully as possible, while being safe.
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.
Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don’t know the answer to a question, please don’t share false information."""
,
},
{"role": "user", "content": "I have been experiencing a persistent cough for the last two weeks, along with a mild fever and fatigue. What could be the possible causes of these symptoms?"},
]
)
print(output["choices"][0]['message']['content'])
```
## Download
You can download `Llama` models in `gguf` format directly from Hugging Face using the `from_pretrained` method. This feature requires the `huggingface-hub` package.
To install it, run: `pip install huggingface-hub`
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
repo_id="SandLogicTechnologies/Llama-3.2-1B-Instruct-Medical-GGUF",
filename="*Llama-3.2-1B-Medical_Q5_KM.gguf",
verbose=False
)
```
By default, `from_pretrained` will download the model to the Hugging Face cache directory. You can manage installed model files using the `huggingface-cli` tool.
## Ethical Considerations and Limitations
- This model is not a substitute for professional medical advice, diagnosis, or treatment
- Users should be aware of potential biases in the training data
- The model's knowledge cutoff date may limit its awareness of recent medical developments
## Acknowledgements
We thank Meta for developing the original Llama-3.2-1B-Instruct model and the creators of the bigbio/med_qa dataset.
Special thanks to Georgi Gerganov and the entire llama.cpp development team for their outstanding contributions.
## Contact
For any inquiries or support, please contact us at [email protected] or visit our [support page](https://www.sandlogic.com/LingoForge/support).
## Explore More
For any inquiries or support, please contact us at [email protected] or visit our [support page](https://www.sandlogic.com/LingoForge/support).
| null |
BioNLP
|
# SandLogic Technology - Quantized Llama-3.2-1B-Instruct-Medical-GGUF
## Model Description
We have quantized the Llama-3.2-1B-Instruct-Medical-GGUF model into two variants:
1. Q5_KM
2. Q4_KM
These quantized models offer improved efficiency while maintaining performance in medical-related tasks.
Discover our full range of quantized language models by visiting our [SandLogic Lexicon](https://github.com/sandlogic/SandLogic-Lexicon) GitHub. To learn more about our company and services, check out our website at [SandLogic](https://www.sandlogic.com).
## Original Model Information
- **Base Model**: [Meta Llama 3.2 1B Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Developer**: Meta (base model)
- **Model Type**: Multilingual large language model (LLM)
- **Architecture**: Auto-regressive language model with optimized transformer architecture
- **Parameters**: 1 billion
- **Training Approach**: Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF)
## Fine-tuning Details
- **Dataset**: [bigbio/med_qa](https://huggingface.co/datasets/bigbio/med_qa)
- **Languages**: English, simplified Chinese, and traditional Chinese
- **Dataset Size**:
- English: 12,723 questions
- Simplified Chinese: 34,251 questions
- Traditional Chinese: 14,123 questions
- **Data Type**: Free-form multiple-choice OpenQA for medical problems, collected from professional medical board exams
## Model Capabilities
This model is optimized for medical-related dialogue and tasks, including:
- Answering medical questions
- Summarizing medical information
- Assisting with medical problem-solving
## Intended Use in Medical Domain
1. **Medical Education**: Assisting medical students in exam preparation and learning
2. **Clinical Decision Support**: Providing quick references for healthcare professionals
3. **Patient Education**: Explaining medical concepts in simple terms for patients
4. **Medical Literature Review**: Summarizing and extracting key information from medical texts
5. **Differential Diagnosis**: Assisting in generating potential diagnoses based on symptoms
6. **Medical Coding**: Aiding in the accurate coding of medical procedures and diagnoses
7. **Drug Information**: Providing information on medications, their uses, and potential interactions
8. **Medical Translation**: Assisting with medical translations across supported languages
## Quantized Variants
1. **Q5_KM**: 5-bit quantization using the KM method
2. **Q4_KM**: 4-bit quantization using the KM method
These quantized models aim to reduce model size and improve inference speed while maintaining performance as close to the original model as possible.
## Usage
```bash
pip install llama-cpp-python
```
Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.
### Basic Text Completion
Here's an example demonstrating how to use the high-level API for basic text completion:
```python
from llama_cpp import Llama
llm = Llama(
model_path="./models/Llama-3.2-1B-Medical_Q4_KM.gguf",
verbose=False,
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm.create_chat_completion(
messages =[
{
"role": "system",
"content": """ You are a helpful, respectful and honest medical assistant. Yu are developed by SandLogic Technologies
Always answer as helpfully as possible, while being safe.
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.
Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct.
If you don’t know the answer to a question, please don’t share false information."""
,
},
{"role": "user", "content": "I have been experiencing a persistent cough for the last two weeks, along with a mild fever and fatigue. What could be the possible causes of these symptoms?"},
]
)
print(output["choices"][0]['message']['content'])
```
## Download
You can download `Llama` models in `gguf` format directly from Hugging Face using the `from_pretrained` method. This feature requires the `huggingface-hub` package.
To install it, run: `pip install huggingface-hub`
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
repo_id="SandLogicTechnologies/Llama-3.2-1B-Instruct-Medical-GGUF",
filename="*Llama-3.2-1B-Medical_Q5_KM.gguf",
verbose=False
)
```
By default, `from_pretrained` will download the model to the Hugging Face cache directory. You can manage installed model files using the `huggingface-cli` tool.
## Ethical Considerations and Limitations
- This model is not a substitute for professional medical advice, diagnosis, or treatment
- Users should be aware of potential biases in the training data
- The model's knowledge cutoff date may limit its awareness of recent medical developments
## Acknowledgements
We thank Meta for developing the original Llama-3.2-1B-Instruct model and the creators of the bigbio/med_qa dataset.
Special thanks to Georgi Gerganov and the entire llama.cpp development team for their outstanding contributions.
## Contact
For any inquiries or support, please contact us at [email protected] or visit our [support page](https://www.sandlogic.com/LingoForge/support).
## Explore More
For any inquiries or support, please contact us at [email protected] or visit our [support page](https://www.sandlogic.com/LingoForge/support).
|
{"base_model": ["meta-llama/Llama-3.2-1B-Instruct"], "datasets": ["bigbio/med_qa"], "language": ["en"], "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["medical", "SandLogic", "Meta", "Conversational"]}
|
task
|
[
"TRANSLATION"
] | 42,216 |
sometimesanotion/Lamarck-14B-v0.6
|
sometimesanotion
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2412.17498",
"base_model:Krystalan/DRT-o1-14B",
"base_model:merge:Krystalan/DRT-o1-14B",
"base_model:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"base_model:merge:huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2",
"base_model:sometimesanotion/Abliterate-Qwenvergence",
"base_model:merge:sometimesanotion/Abliterate-Qwenvergence",
"base_model:sometimesanotion/Lamarck-14B-v0.3",
"base_model:merge:sometimesanotion/Lamarck-14B-v0.3",
"base_model:sometimesanotion/Qwen2.5-14B-Vimarckoso-v3",
"base_model:merge:sometimesanotion/Qwen2.5-14B-Vimarckoso-v3",
"base_model:sometimesanotion/Qwenvergence-14B-v3-Prose",
"base_model:merge:sometimesanotion/Qwenvergence-14B-v3-Prose",
"base_model:underwoods/medius-erebus-magnum-14b",
"base_model:merge:underwoods/medius-erebus-magnum-14b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-04T22:17:38Z |
2025-01-22T20:52:48+00:00
| 368 | 14 |
---
base_model:
- sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
- sometimesanotion/Lamarck-14B-v0.3
- sometimesanotion/Qwenvergence-14B-v3-Prose
- Krystalan/DRT-o1-14B
- underwoods/medius-erebus-magnum-14b
- sometimesanotion/Abliterate-Qwenvergence
- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- mergekit
- merge
new_version: sometimesanotion/Lamarck-14B-v0.7
---

---
> [!TIP]
> **Update:** Lamarck has, for the moment, taken the [#1 average score](https://shorturl.at/STz7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) for text-generation assistant language models under 32 billion parameters. This validates the complex merge techniques which combine the strengths of other finetunes in the community into one model.
Lamarck 14B v0.6: A generalist merge focused on multi-step reasoning, prose, and multi-language ability. It is based on components that have punched above their weight in the 14 billion parameter class. Here you can see a comparison between Lamarck and other top-performing merges and finetunes:

A notable contribution to the middle to upper layers of Lamarck v0.6 comes from [Krystalan/DRT-o1-14B](https://huggingface.co/Krystalan/DRT-o1-14B). It has a fascinating research paper: [DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought](https://huggingface.co/papers/2412.17498).
Lamarck 0.6 uses a custom toolchain to create the merges which target specific layers:
- **Extracted LoRA adapters from special-purpose merges**
- **Separate branches for breadcrumbs and DELLA merges**
- **Highly targeted weight/density gradients for every 2-4 layers**
- **Finalization through SLERP merges recombining the separate branches**
This approach selectively merges the strongest aspects of its ancestors. Lamarck v0.6 is my most complex merge to date. The LoRA extractions alone pushed my hardware enough to be the building's sole source of heat for several winter days! By comparison, the SLERP merge below which finalized it was a simple step.
```yaml
---
name: lamarck-14b-v0.6-005-model_stock
merge_method: model_stock
base_model: sometimesanotion/Qwenvergence-14B-Base-v2
tokenizer_source: sometimesanotion/Abliterate-Qwenvergence
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
models:
- model: arcee-ai/Virtuoso-Small-qv64
- model: Krystalan/DRT-o1-14B-qv128
- model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3-qv64
- model: sometimesanotion/Qwenvergence-14B-v3-Prose-qv256
- model: sometimesanotion/Abliterate-Qwenvergence
---
name: lamarck-14b-converge-breadcrumbs
merge_method: breadcrumbs
base_model: sometimesanotion/lamarck-14b-v0.6-005-model_stock
tokenizer_source: base
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
density: 0.95
weight: 1.00
gamma: 0.018
# Here there be dragons!
---
name: lamarck-14b-converge-della-linear
merge_method: della_linear
base_model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
density: 0.95
weight: 1.00
epsilon: 0.018
lambda: 1.20
smoothing_factor: 0.07
# Yep, dragons.
---
name: Lamarck-14B-v0.6-rc4
merge_method: slerp
base_model: sometimesanotion/lamarck-14b-converge-della-linear
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  t:
    - value: 0.30
# Not so dragon-ish.
slices:
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 0, 8 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 0, 8 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 8, 16 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 8, 16 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 16, 24 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 16, 24 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 24, 32 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 24, 32 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 32, 40 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 32, 40 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 40, 48 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 40, 48 ]
```
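Multi-document configs like the one above, whose stages reference each other by `name:`, are typically run end-to-end with mergekit's mega-merge entry point. A sketch (the config file name is hypothetical, and your mergekit version must ship `mergekit-mega`):

```bash
pip install mergekit
mergekit-mega lamarck-v0.6.yaml ./Lamarck-14B-v0.6 --cuda
```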
Lamarck's performance comes from an ancestry that goes back through careful merges to select finetuning work, upcycled and combined. Kudos to @arcee-ai, @CultriX, @sthenno-com, @Krystalan, @underwoods, @VAGOSolutions, and @rombodawg whose models had the most influence. [Vimarckoso v3](https://huggingface.co/sometimesanotion/Qwen2.5-14B-Vimarckoso-v3) has the model card which documents its extended lineage.
| null |
Non_BioNLP
|

---
> [!TIP]
> **Update:** Lamarck has, for the moment, taken the [#1 average score](https://shorturl.at/STz7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) for text-generation assistant language models under 32 billion parameters. This validates the complex merge techniques which combine the strengths of other finetunes in the community into one model.
Lamarck 14B v0.6: A generalist merge focused on multi-step reasoning, prose, and multi-language ability. It is based on components that have punched above their weight in the 14 billion parameter class. Here you can see a comparison between Lamarck and other top-performing merges and finetunes:

A notable contribution to the middle to upper layers of Lamarck v0.6 comes from [Krystalan/DRT-o1-14B](https://huggingface.co/Krystalan/DRT-o1-14B). It has a fascinating research paper: [DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought](https://huggingface.co/papers/2412.17498).
Lamarck 0.6 uses a custom toolchain to create the merges which target specific layers:
- **Extracted LoRA adapters from special-purpose merges**
- **Separate branches for breadcrumbs and DELLA merges**
- **Highly targeted weight/density gradients for every 2-4 layers**
- **Finalization through SLERP merges recombining the separate branches**
This approach selectively merges the strongest aspects of its ancestors. Lamarck v0.6 is my most complex merge to date. The LoRA extractions alone pushed my hardware enough to be the building's sole source of heat for several winter days! By comparison, the SLERP merge below which finalized it was a simple step.
```yaml
---
name: lamarck-14b-v0.6-005-model_stock
merge_method: model_stock
base_model: sometimesanotion/Qwenvergence-14B-Base-v2
tokenizer_source: sometimesanotion/Abliterate-Qwenvergence
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
models:
- model: arcee-ai/Virtuoso-Small-qv64
- model: Krystalan/DRT-o1-14B-qv128
- model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3-qv64
- model: sometimesanotion/Qwenvergence-14B-v3-Prose-qv256
- model: sometimesanotion/Abliterate-Qwenvergence
---
name: lamarck-14b-converge-breadcrumbs
merge_method: breadcrumbs
base_model: sometimesanotion/lamarck-14b-v0.6-005-model_stock
tokenizer_source: base
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
density: 0.95
weight: 1.00
gamma: 0.018
# Here there be dragons!
---
name: lamarck-14b-converge-della-linear
merge_method: della_linear
base_model: sometimesanotion/Qwen2.5-14B-Vimarckoso-v3
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
density: 0.95
weight: 1.00
epsilon: 0.018
lambda: 1.20
smoothing_factor: 0.07
# Yep, dragons.
---
name: Lamarck-14B-v0.6-rc4
merge_method: slerp
base_model: sometimesanotion/lamarck-14b-converge-della-linear
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  t:
    - value: 0.30
# Not so dragon-ish.
slices:
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 0, 8 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 0, 8 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 8, 16 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 8, 16 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 16, 24 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 16, 24 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 24, 32 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 24, 32 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 32, 40 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 32, 40 ]
- sources:
- model: sometimesanotion/lamarck-14b-converge-della-linear
layer_range: [ 40, 48 ]
- model: sometimesanotion/lamarck-14b-converge-breadcrumbs
layer_range: [ 40, 48 ]
```
Lamarck's performance comes from an ancestry that goes back through careful merges to select finetuning work, upcycled and combined. Kudos to @arcee-ai, @CultriX, @sthenno-com, @Krystalan, @underwoods, @VAGOSolutions, and @rombodawg whose models had the most influence. [Vimarckoso v3](https://huggingface.co/sometimesanotion/Qwen2.5-14B-Vimarckoso-v3) has the model card which documents its extended lineage.
|
{"base_model": ["sometimesanotion/Qwen2.5-14B-Vimarckoso-v3", "sometimesanotion/Lamarck-14B-v0.3", "sometimesanotion/Qwenvergence-14B-v3-Prose", "Krystalan/DRT-o1-14B", "underwoods/medius-erebus-magnum-14b", "sometimesanotion/Abliterate-Qwenvergence", "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-generation", "tags": ["mergekit", "merge"], "new_version": "sometimesanotion/Lamarck-14B-v0.7"}
|
task
|
[
"TRANSLATION"
] | 42,217 |
RichardErkhov/nicholasKluge_-_TeenyTinyLlama-460m-Chat-4bits
|
RichardErkhov
| null |
[
"safetensors",
"llama",
"arxiv:2401.16640",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-08-27T08:09:24Z |
2024-08-27T08:09:56+00:00
| 6 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TeenyTinyLlama-460m-Chat - bnb 4bits
- Model creator: https://huggingface.co/nicholasKluge/
- Original model: https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-Chat/
Original model description:
---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset-v2
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
widget:
- text: "<s><instruction>Cite algumas bandas de rock famosas da década de 1960.</instruction>"
example_title: Exemplo
- text: "<s><instruction>Quantos planetas existem no sistema solar?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual é o futuro do ser humano?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual o sentido da vida?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Como imprimir hello world em python?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Invente uma história sobre um encanador com poderes mágicos.</instruction>"
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 2530
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
---
# TeenyTinyLlama-460m-Chat
TeenyTinyLlama is a pair of small foundational models trained in Brazilian Portuguese.
This repository contains a version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) (`TeenyTinyLlama-460m-Chat`) fine-tuned on the [Instruct-Aira Dataset version 2.0](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset-v2).
## Details
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e3, learning_rate = 1e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Carbon emissions** stats are logged in this [file](emissions.csv).
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model.
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use is following the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are Brazilian Portuguese language only and are not suitable for translation or generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Usage
The following special tokens are used to mark the user side of the interaction and the model's response:
`<instruction>`What is a language model?`</instruction>`A language model is a probability distribution over a vocabulary.`</s>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model = AutoModelForCausalLM.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model.eval()
model.to(device)
question = input("Entre seu prompt aqui: ")
inputs = tokenizer("<instruction>" + question + "</instruction>", return_tensors="pt").to(device)
# num_return_sequences > 1 requires sampling; parameters follow the card's suggested inference settings
responses = model.generate(**inputs, num_return_sequences=2, do_sample=True,
                           temperature=0.2, top_k=30, top_p=0.3,
                           repetition_penalty=1.2, max_new_tokens=200)
print(f"Pergunta: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 Qual a capital do Brasil?
>>>Response 1: 🤖 A capital do Brasil é Brasília.
>>>Response 2: 🤖 A capital do Brasil é Brasília.
```
The chat template for this model is:
```jinja
{{bos_token}}
{% for message in messages %}
{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{% endif %}
{% if message['role'] == 'user' %}
{{ '<instruction>' + message['content'].strip() + '</instruction>'}}
{% elif message['role'] == 'assistant' %}
{{ message['content'].strip() + eos_token}}
{% else %}
{{ raise_exception('Only user and assistant roles are supported!') }}
{% endif %}
{% endfor %}
```
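With this template registered in the tokenizer, the same prompt format can be produced through the standard API (a minimal sketch, reusing the `tokenizer` from the snippet above):

```python
messages = [{"role": "user", "content": "Qual a capital do Brasil?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
# -> '<s><instruction>Qual a capital do Brasil?</instruction>'
```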
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibited behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications, and to have humans moderate the outputs in applications where the models interact with an audience, ensuring users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m-parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess whether there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if trained further, past the Chinchilla-optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the translations of the LM-Evaluation-Harness tasks provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness). The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160m    | 33.33          | 12.81          | 16.13     | 16.66    | 50.36          | 41.09      | 22.82         | 27.60       |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese with a size range similar to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Elsevier},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TeenyTinyLlama-460m-Chat - bnb 4bits
- Model creator: https://huggingface.co/nicholasKluge/
- Original model: https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-Chat/
Original model description:
---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset-v2
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
widget:
- text: "<s><instruction>Cite algumas bandas de rock famosas da década de 1960.</instruction>"
example_title: Exemplo
- text: "<s><instruction>Quantos planetas existem no sistema solar?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual é o futuro do ser humano?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Qual o sentido da vida?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Como imprimir hello world em python?</instruction>"
example_title: Exemplo
- text: "<s><instruction>Invente uma história sobre um encanador com poderes mágicos.</instruction>"
example_title: Exemplo
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 2530
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
---
# TeenyTinyLlama-460m-Chat
TeenyTinyLlama is a pair of small foundational models trained in Brazilian Portuguese.
This repository contains a version of [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) (`TeenyTinyLlama-460m-Chat`) fine-tuned on the [Instruct-Aira Dataset version 2.0](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset-v2).
## Details
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e3, learning_rate = 1e-5, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Carbon emissions** stats are logged in this [file](emissions.csv).
This repository has the [source code](https://github.com/Nkluge-correa/TeenyTinyLlama) used to train this model.
## Intended Uses
The primary intended use of TeenyTinyLlama is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt TeenyTinyLlama for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use pre-trained TeenyTinyLlama as a basis for your fine-tuned model, please conduct your own risk and bias assessment.
## Out-of-scope Use
TeenyTinyLlama is not intended for deployment. It is not a product and should not be used for human-facing interactions.
TeenyTinyLlama models are intended for Brazilian Portuguese only and are not suitable for translation or for generating text in other languages.
TeenyTinyLlama has not been fine-tuned for downstream contexts in which language models are commonly deployed.
## Usage
The following special tokens are used to mark the user side of the interaction and the model's response:
`<instruction>`What is a language model?`</instruction>`A language model is a probability distribution over a vocabulary.`</s>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model = AutoModelForCausalLM.from_pretrained('nicholasKluge/TeenyTinyLlama-460m-Chat')
model.eval()
model.to(device)
question = input("Entre seu prompt aqui: ")
inputs = tokenizer("<instruction>" + question + "</instruction>", return_tensors="pt").to(device)
# `do_sample=True` is required when requesting more than one sequence without beam search;
# the sampling values below follow the inference settings suggested for this model
responses = model.generate(**inputs, do_sample=True, num_return_sequences=2,
    temperature=0.2, top_k=30, top_p=0.3, repetition_penalty=1.2, max_new_tokens=200)
print(f"Pergunta: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Resposta {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 Qual a capital do Brasil?
>>>Response 1: 🤖 A capital do Brasil é Brasília.
>>>Response 2: 🤖 A capital do Brasil é Brasília.
```
The chat template for this model is:
```bash
{{bos_token}}
{% for message in messages %}
{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
{% endif %}
{% if message['role'] == 'user' %}
{{ '<instruction>' + message['content'].strip() + '</instruction>'}}
{% elif message['role'] == 'assistant' %}
{{ message['content'].strip() + eos_token}}
{% else %}
{{ raise_exception('Only user and assistant roles are supported!') }}
{% endif %}
{% endfor %}
```
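Since this template ships with the tokenizer, a conversation can also be formatted with `apply_chat_template` instead of concatenating the special tokens by hand. A minimal sketch, reusing the `tokenizer`, `model`, and `device` from the snippet above (the question is a placeholder):
```python
messages = [{"role": "user", "content": "Qual a capital do Brasil?"}]

# renders "<s><instruction>...</instruction>" and tokenizes it in one step
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

output = model.generate(input_ids, do_sample=True, temperature=0.2, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```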
## Limitations
Like almost all other language models trained on large text datasets scraped from the web, the TTL pair exhibits behaviors that make them unsuitable as an out-of-the-box solution for many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.
- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given.
Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications, and to have humans moderate the outputs in applications where the models interact with an audience, ensuring users are always aware they are interacting with a language model.
## Evaluations
During our training runs, both models showed consistent convergence. At no point did our evaluation curves show signs of overfitting or saturation. In the case of our 460m-parameter model, we intentionally trained past the optimal point by approximately 75,000 steps to assess whether there were any signs of saturation, but our evaluations consistently gave better results. We hypothesize that our models are under-trained but can improve if trained further, past the Chinchilla-optimal range.
| Processed Tokens | Perplexity | Energy Consumption (kWh) | Emissions (KgCO2eq) |
|------------------|------------|---------------------------|----------------------|
| 8.1M | 20.49 | 9.40 | 3.34 |
| 1.6B | 16.90 | 18.82 | 6.70 |
| 2.4B | 15.43 | 28.59 | 10.16 |
| 3.2B | 14.64 | 38.20 | 13.57 |
| 4.0B | 14.08 | 48.04 | 17.07 |
| 4.9B | 13.61 | 57.74 | 20.52 |
| 5.7B | 13.25 | 67.32 | 23.92 |
| 6.5B | 12.87 | 76.84 | 27.30 |
| 7.3B | 12.57 | 86.40 | 30.70 |
| 8.1B | 12.27 | 96.19 | 34.18 |
| 9.0B | 11.96 | 106.06 | 37.70 |
| 9.8B | 11.77 | 115.69 | 41.31 |
## Benchmarks
Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the translations of the LM-Evaluation-Harness tasks provided by [Laiviet](https://github.com/laiviet/lm-evaluation-harness). The results of models marked with an "*" were extracted from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | **Average** |
|------------------|-----------|---------------|-----------|----------------|-------------|
| Pythia-410m | 24.83* | 41.29* | 25.99* | 40.95* | 33.26 |
| **TTL-460m** | 29.40 | 33.00 | 28.55 | 41.10 | 33.01 |
| Bloom-560m | 24.74* | 37.15* | 24.22* | 42.44* | 32.13 |
| Xglm-564M | 25.56 | 34.64* | 25.18* | 42.53 | 31.97 |
| OPT-350m | 23.55* | 36.73* | 26.02* | 40.83* | 31.78 |
| **TTL-160m** | 26.15 | 29.29 | 28.11 | 41.12 | 31.16 |
| Pythia-160m | 24.06* | 31.39* | 24.86* | 44.34* | 31.16 |
| OPT-125m | 22.87* | 31.47* | 26.02* | 42.87* | 30.80 |
| GPorTuguese-2 | 22.48 | 29.62 | 27.36 | 41.44 | 30.22 |
| Gpt2-small | 21.48* | 31.60* | 25.79* | 40.65* | 29.97 |
| Multilingual GPT | 23.81 | 26.37* | 25.17* | 39.62 | 28.73 |
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **OAB Exams** | **Average** |
|----------------|----------------|----------------|-----------|----------|----------------|------------|---------------|-------------|
| Qwen-1.8B | 64.83 | 19.53 | 26.15 | 30.23 | 43.97 | 33.33 | 27.20 | 35.03 |
| TinyLlama-1.1B | 58.93 | 13.57 | 22.81 | 22.25 | 43.97 | 36.92 | 23.64 | 31.72 |
| **TTL-460m** | 53.93 | 12.66 | 22.81 | 19.87 | 49.01 | 33.59 | 27.06 | 31.27 |
| XGLM-564m | 49.61 | 22.91 | 19.61 | 19.38 | 43.97 | 33.99 | 23.42 | 30.41 |
| Bloom-1b7 | 53.60 | 4.81 | 21.42 | 18.96 | 43.97 | 34.89 | 23.05 | 28.67 |
| **TTL-160m** | 53.36 | 2.58 | 21.84 | 18.75 | 43.97 | 36.88 | 22.60 | 28.56 |
| OPT-125m | 39.77 | 2.00 | 21.84 | 17.42 | 43.97 | 47.04 | 22.78 | 27.83 |
| Pythia-160m    | 33.33          | 12.81          | 16.13     | 16.66    | 50.36          | 41.09      | 22.82         | 27.60       |
| OLMo-1b | 34.12 | 9.28 | 18.92 | 20.29 | 43.97 | 41.33 | 22.96 | 27.26 |
| Bloom-560m | 33.33 | 8.48 | 18.92 | 19.03 | 43.97 | 37.07 | 23.05 | 26.26 |
| Pythia-410m | 33.33 | 4.80 | 19.47 | 19.45 | 43.97 | 33.33 | 23.01 | 25.33 |
| OPT-350m | 33.33 | 3.65 | 20.72 | 17.35 | 44.71 | 33.33 | 23.01 | 25.15 |
| GPT-2 small | 33.26 | 0.00 | 10.43 | 11.20 | 43.52 | 33.68 | 13.12 | 20.74 |
| GPorTuguese | 33.33 | 3.85 | 14.74 | 3.01 | 28.81 | 33.33 | 21.23 | 19.75 |
| Samba-1.1B | 33.33 | 1.30 | 8.07 | 10.22 | 17.72 | 35.79 | 15.03 | 17.35 |
## Fine-Tuning Comparisons
To further evaluate the downstream capabilities of our models, we employed a basic fine-tuning procedure for our TTL pair on a subset of tasks from the Poeta benchmark. For comparison, we applied the same procedure to both [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) models, given that they are also LLMs trained from scratch in Brazilian Portuguese with a size range similar to our models. We used these comparisons to assess whether our pre-training runs produced LLMs capable of good results ("good" here means "close to BERTimbau") when utilized for downstream applications.
| Models | IMDB | FaQuAD-NLI | HateBr | Assin2 | AgNews | Average |
|-----------------|-----------|------------|-----------|-----------|-----------|---------|
| BERTimbau-large | **93.58** | 92.26 | 91.57 | **88.97** | 94.11 | 92.10 |
| BERTimbau-small | 92.22 | **93.07** | 91.28 | 87.45 | 94.19 | 91.64 |
| **TTL-460m** | 91.64 | 91.18 | **92.28** | 86.43 | **94.42** | 91.19 |
| **TTL-160m** | 91.14 | 90.00 | 90.71 | 85.78 | 94.05 | 90.34 |
All results shown are the highest accuracy scores achieved on the respective task test sets after fine-tuning the models on the training sets. All fine-tuning runs used the same hyperparameters, and the code implementation can be found in the [model cards](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-HateBR) of our fine-tuned models.
## Cite as 🤗
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
@misc{correa24ttllama,
doi = {10.1016/j.mlwa.2024.100558},
url = {https://www.sciencedirect.com/science/article/pii/S2666827024000343},
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={Machine Learning With Applications},
publisher = {Elsevier},
year={2024}
}
```
## Funding
This repository was built as part of the RAIES ([Rede de Inteligência Artificial Ética e Segura](https://www.raies.org/)) initiative, a project supported by FAPERGS - ([Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul](https://fapergs.rs.gov.br/inicial)), Brazil.
## License
TeenyTinyLlama-460m-Chat is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
{}
|
task
|
[
"TRANSLATION"
] | 42,218 |
caspro/mt5-base_Nepali_News_Summarization_0
|
caspro
| null |
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/mt5-base",
"base_model:adapter:google/mt5-base",
"license:apache-2.0",
"region:us"
] | 2024-07-13T05:42:20Z |
2024-07-20T11:29:47+00:00
| 15 | 0 |
---
base_model: google/mt5-base
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-base_Nepali_News_Summarization_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_Nepali_News_Summarization_0
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3685
- Rouge-1 R: 0.3321
- Rouge-1 P: 0.3218
- Rouge-1 F: 0.3186
- Rouge-2 R: 0.1761
- Rouge-2 P: 0.1703
- Rouge-2 F: 0.1677
- Rouge-l R: 0.3234
- Rouge-l P: 0.3133
- Rouge-l F: 0.3102
- Gen Len: 15.7133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 R | Rouge-1 P | Rouge-1 F | Rouge-2 R | Rouge-2 P | Rouge-2 F | Rouge-l R | Rouge-l P | Rouge-l F | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------:|
| 1.8844 | 1.0 | 10191 | 1.4867 | 0.31 | 0.3133 | 0.3024 | 0.1576 | 0.1605 | 0.1531 | 0.3015 | 0.3048 | 0.2942 | 15.2667 |
| 1.7381 | 2.0 | 20382 | 1.4401 | 0.3203 | 0.3104 | 0.3068 | 0.1675 | 0.162 | 0.1592 | 0.3121 | 0.3026 | 0.299 | 15.699 |
| 1.6401 | 3.0 | 30573 | 1.3685 | 0.3321 | 0.3218 | 0.3186 | 0.1761 | 0.1703 | 0.1677 | 0.3234 | 0.3133 | 0.3102 | 15.7133 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
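Since this repository stores a PEFT adapter rather than full model weights (see the framework versions above), inference attaches the adapter to the `google/mt5-base` base model. A minimal sketch, assuming the adapter sits at the root of this repo; the article text is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
base = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = PeftModel.from_pretrained(base, "caspro/mt5-base_Nepali_News_Summarization_0")

article = "..."  # a Nepali news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```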
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_Nepali_News_Summarization_0
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3685
- Rouge-1 R: 0.3321
- Rouge-1 P: 0.3218
- Rouge-1 F: 0.3186
- Rouge-2 R: 0.1761
- Rouge-2 P: 0.1703
- Rouge-2 F: 0.1677
- Rouge-l R: 0.3234
- Rouge-l P: 0.3133
- Rouge-l F: 0.3102
- Gen Len: 15.7133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 R | Rouge-1 P | Rouge-1 F | Rouge-2 R | Rouge-2 P | Rouge-2 F | Rouge-l R | Rouge-l P | Rouge-l F | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------:|
| 1.8844 | 1.0 | 10191 | 1.4867 | 0.31 | 0.3133 | 0.3024 | 0.1576 | 0.1605 | 0.1531 | 0.3015 | 0.3048 | 0.2942 | 15.2667 |
| 1.7381 | 2.0 | 20382 | 1.4401 | 0.3203 | 0.3104 | 0.3068 | 0.1675 | 0.162 | 0.1592 | 0.3121 | 0.3026 | 0.299 | 15.699 |
| 1.6401 | 3.0 | 30573 | 1.3685 | 0.3321 | 0.3218 | 0.3186 | 0.1761 | 0.1703 | 0.1677 | 0.3234 | 0.3133 | 0.3102 | 15.7133 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
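Since this repository stores a PEFT adapter rather than full model weights (see the framework versions above), inference attaches the adapter to the `google/mt5-base` base model. A minimal sketch, assuming the adapter sits at the root of this repo; the article text is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
base = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = PeftModel.from_pretrained(base, "caspro/mt5-base_Nepali_News_Summarization_0")

article = "..."  # a Nepali news article goes here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```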
|
{"base_model": "google/mt5-base", "library_name": "peft", "license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "mt5-base_Nepali_News_Summarization_0", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,219 |
sshleifer/distill-pegasus-xsum-16-8
|
sshleifer
|
summarization
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"en",
"arxiv:1912.08777",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2020-10-08T03:05:56+00:00
| 146 | 1 |
---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
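The card itself includes no usage snippet; a hedged sketch with the `summarization` pipeline (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distill-pegasus-xsum-16-8")
text = ("PG&E scheduled the blackouts in response to forecasts for high winds "
        "amid dry conditions, aiming to reduce the risk of wildfires.")
print(summarizer(text, max_length=32)[0]["summary_text"])
```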
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We trained a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sampled important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap-sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanup also changed, please refer to the change in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
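The card itself includes no usage snippet; a hedged sketch with the `summarization` pipeline (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distill-pegasus-xsum-16-8")
text = ("PG&E scheduled the blackouts in response to forecasts for high winds "
        "amid dry conditions, aiming to reduce the risk of wildfires.")
print(summarizer(text, max_length=32)[0]["summary_text"])
```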
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We trained a Pegasus model with sampled gap-sentence ratios on both C4 and HugeNews, and stochastically sampled important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap-sentence ratio between 15% and 45%.
- important sentences are sampled with 20% uniform noise added to their importance scores.
- the SentencePiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some formatting cleanup also changed, please refer to the change in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
task
|
[
"SUMMARIZATION"
] | 42,220 |
chineidu/bert-finetuned-mrpc
|
chineidu
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-08T18:03:50Z |
2023-10-08T18:09:48+00:00
| 5 | 0 |
---
base_model: bert-base-uncased
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- type: accuracy
value: 0.8088235294117647
name: Accuracy
- type: f1
value: 0.8682432432432433
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.8088
- F1: 0.8682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4804 | 0.7770 | 0.8384 |
| 0.5792 | 2.0 | 918 | 0.5523 | 0.7966 | 0.8668 |
| 0.4419 | 3.0 | 1377 | 0.5401 | 0.8088 | 0.8682 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
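MRPC is a sentence-pair (paraphrase) task, so inference feeds both sentences to the model at once. A minimal sketch, assuming the default `LABEL_0`/`LABEL_1` mapping where `LABEL_1` means "equivalent"; the sentences are placeholders:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chineidu/bert-finetuned-mrpc")
result = classifier({"text": "The company posted record profits this quarter.",
                     "text_pair": "Quarterly profits at the firm hit an all-time high."})
print(result)  # e.g. {'label': 'LABEL_1', 'score': 0.98}
```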
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.8088
- F1: 0.8682
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4804 | 0.7770 | 0.8384 |
| 0.5792 | 2.0 | 918 | 0.5523 | 0.7966 | 0.8668 |
| 0.4419 | 3.0 | 1377 | 0.5401 | 0.8088 | 0.8682 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
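MRPC is a sentence-pair (paraphrase) task, so inference feeds both sentences to the model at once. A minimal sketch, assuming the default `LABEL_0`/`LABEL_1` mapping where `LABEL_1` means "equivalent"; the sentences are placeholders:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chineidu/bert-finetuned-mrpc")
result = classifier({"text": "The company posted record profits this quarter.",
                     "text_pair": "Quarterly profits at the firm hit an all-time high."})
print(result)  # e.g. {'label': 'LABEL_1', 'score': 0.98}
```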
|
{"base_model": "bert-base-uncased", "datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "mrpc", "split": "validation", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8088235294117647, "name": "Accuracy"}, {"type": "f1", "value": 0.8682432432432433, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,221 |
tsobolev/mt5-small-finetuned-amazon-en-es
|
tsobolev
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-04T13:21:38Z |
2023-09-04T13:57:53+00:00
| 24 | 0 |
---
base_model: google/mt5-small
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2197
- Rouge1: 23.7209
- Rouge2: 12.4611
- Rougel: 22.745
- Rougelsum: 22.8085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2981 | 1.0 | 589 | 3.2505 | 21.3203 | 11.2834 | 20.3564 | 20.3812 |
| 3.2017 | 2.0 | 1178 | 3.2563 | 21.5285 | 10.7337 | 20.7505 | 20.5952 |
| 3.1031 | 3.0 | 1767 | 3.2636 | 23.6856 | 12.5802 | 22.6568 | 22.5633 |
| 3.0303 | 4.0 | 2356 | 3.2242 | 23.4492 | 12.3205 | 22.3518 | 22.3521 |
| 2.9808 | 5.0 | 2945 | 3.2281 | 24.2603 | 12.6434 | 23.1527 | 23.2443 |
| 2.9519 | 6.0 | 3534 | 3.2146 | 24.5639 | 13.5995 | 23.8336 | 23.8418 |
| 2.9125 | 7.0 | 4123 | 3.2226 | 23.9892 | 12.9918 | 23.3101 | 23.3465 |
| 2.9058 | 8.0 | 4712 | 3.2197 | 23.7209 | 12.4611 | 22.745 | 22.8085 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
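A hedged usage sketch with the `summarization` pipeline; the review text is a placeholder (the model name suggests it was tuned on English/Spanish Amazon review summaries):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="tsobolev/mt5-small-finetuned-amazon-en-es")
review = ("I bought this for my daughter's birthday and she absolutely loves it. "
          "The build quality is great, though the battery life could be better.")
print(summarizer(review, max_length=30)[0]["summary_text"])
```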
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2197
- Rouge1: 23.7209
- Rouge2: 12.4611
- Rougel: 22.745
- Rougelsum: 22.8085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2981 | 1.0 | 589 | 3.2505 | 21.3203 | 11.2834 | 20.3564 | 20.3812 |
| 3.2017 | 2.0 | 1178 | 3.2563 | 21.5285 | 10.7337 | 20.7505 | 20.5952 |
| 3.1031 | 3.0 | 1767 | 3.2636 | 23.6856 | 12.5802 | 22.6568 | 22.5633 |
| 3.0303 | 4.0 | 2356 | 3.2242 | 23.4492 | 12.3205 | 22.3518 | 22.3521 |
| 2.9808 | 5.0 | 2945 | 3.2281 | 24.2603 | 12.6434 | 23.1527 | 23.2443 |
| 2.9519 | 6.0 | 3534 | 3.2146 | 24.5639 | 13.5995 | 23.8336 | 23.8418 |
| 2.9125 | 7.0 | 4123 | 3.2226 | 23.9892 | 12.9918 | 23.3101 | 23.3465 |
| 2.9058 | 8.0 | 4712 | 3.2197 | 23.7209 | 12.4611 | 22.745 | 22.8085 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
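A hedged usage sketch with the `summarization` pipeline; the review text is a placeholder (the model name suggests it was tuned on English/Spanish Amazon review summaries):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="tsobolev/mt5-small-finetuned-amazon-en-es")
review = ("I bought this for my daughter's birthday and she absolutely loves it. "
          "The build quality is great, though the battery life could be better.")
print(summarizer(review, max_length=30)[0]["summary_text"])
```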
|
{"base_model": "google/mt5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "mt5-small-finetuned-amazon-en-es", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 42,222 |
joelorellana/autotrain-rwlv_summarizer-55443129210
|
joelorellana
|
summarization
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:joel89/autotrain-data-rwlv_summarizer",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-04T15:34:31Z |
2023-05-04T15:38:19+00:00
| 22 | 0 |
---
datasets:
- joel89/autotrain-data-rwlv_summarizer
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.007272812398046086
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55443129210
- CO2 Emissions (in grams): 0.0073
## Validation Metrics
- Loss: 1.625
- Rouge1: 47.446
- Rouge2: 25.858
- RougeL: 43.937
- RougeLsum: 43.961
- Gen Len: 15.395
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/joel89/autotrain-rwlv_summarizer-55443129210
```
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 55443129210
- CO2 Emissions (in grams): 0.0073
## Validation Metrics
- Loss: 1.625
- Rouge1: 47.446
- Rouge2: 25.858
- RougeL: 43.937
- RougeLsum: 43.961
- Gen Len: 15.395
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/joel89/autotrain-rwlv_summarizer-55443129210
```
|
{"datasets": ["joel89/autotrain-data-rwlv_summarizer"], "language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.007272812398046086}}
|
task
|
[
"SUMMARIZATION"
] | 42,223 |
matevladimir/bart-large-mnli
|
matevladimir
|
zero-shot-classification
|
[
"Transformers PHP",
"onnx",
"bart",
"zero-shot-classification",
"dataset:multi_nli",
"arxiv:1910.13461",
"arxiv:1909.00161",
"license:mit",
"region:us"
] | 2024-06-23T00:03:26Z |
2024-06-23T00:06:56+00:00
| 12 | 0 |
---
datasets:
- multi_nli
library_name: Transformers PHP
license: mit
pipeline_tag: zero-shot-classification
tags:
- onnx
thumbnail: https://huggingface.co/front/thumbnails/facebook.png
---
https://huggingface.co/facebook/bart-large-mnli with ONNX weights to be compatible with Transformers PHP
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset.
Additional information about this model:
- The [bart-large](https://huggingface.co/facebook/bart-large) model page
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
](https://arxiv.org/abs/1910.13461)
- [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart)
## NLI-based Zero Shot Text Classification
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and RoBERTa. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero-shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
```
If more than one candidate label can be correct, pass `multi_label=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
# 0.9383890628814697,
# 0.0057061901316046715,
# 0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')

sequence = "one day I will see the world"
label = "travel"
premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
| null |
Non_BioNLP
|
https://huggingface.co/facebook/bart-large-mnli with ONNX weights to be compatible with Transformers PHP
# bart-large-mnli
This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset.
Additional information about this model:
- The [bart-large](https://huggingface.co/facebook/bart-large) model page
- [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461)
- [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart)
## NLI-based Zero Shot Text Classification
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and RoBERTa. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero-shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
#{'labels': ['travel', 'dancing', 'cooking'],
# 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
# 'sequence': 'one day I will see the world'}
```
If more than one candidate label can be correct, pass `multi_label=True` to calculate each class independently:
```python
candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
classifier(sequence_to_classify, candidate_labels, multi_label=True)
#{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
# 'scores': [0.9945111274719238,
#  0.9383890628814697,
#  0.0057061901316046715,
#  0.0018193122232332826],
# 'sequence': 'one day I will see the world'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')

sequence = "one day I will see the world"
label = "travel"
premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
{"datasets": ["multi_nli"], "library_name": "Transformers PHP", "license": "mit", "pipeline_tag": "zero-shot-classification", "tags": ["onnx"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png"}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | 42,224 |
gokuls/hBERTv1_data_aug_mrpc
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-01T10:43:24Z |
2023-03-02T10:14:18+00:00
| 37 | 0 |
---
datasets:
- glue
language:
- en
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: hBERTv1_data_aug_mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 1.0
name: Accuracy
- type: f1
value: 1.0
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_data_aug_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
- Combined Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.1151 | 1.0 | 980 | 0.0045 | 0.9975 | 0.9982 | 0.9979 |
| 0.0108 | 2.0 | 1960 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0063 | 3.0 | 2940 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0054 | 4.0 | 3920 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.004 | 5.0 | 4900 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0053 | 6.0 | 5880 | 0.0002 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 7.0 | 6860 | 0.0003 | 1.0 | 1.0 | 1.0 |
| 0.0116 | 8.0 | 7840 | 0.0150 | 0.9975 | 0.9982 | 0.9979 |
| 0.0093 | 9.0 | 8820 | 0.0015 | 1.0 | 1.0 | 1.0 |
| 0.0123 | 10.0 | 9800 | 0.0164 | 0.9975 | 0.9982 | 0.9979 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_data_aug_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1](https://huggingface.co/gokuls/bert_12_layer_model_v1) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
- Combined Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.1151 | 1.0 | 980 | 0.0045 | 0.9975 | 0.9982 | 0.9979 |
| 0.0108 | 2.0 | 1960 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0063 | 3.0 | 2940 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0054 | 4.0 | 3920 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.004 | 5.0 | 4900 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0053 | 6.0 | 5880 | 0.0002 | 1.0 | 1.0 | 1.0 |
| 0.0046 | 7.0 | 6860 | 0.0003 | 1.0 | 1.0 | 1.0 |
| 0.0116 | 8.0 | 7840 | 0.0150 | 0.9975 | 0.9982 | 0.9979 |
| 0.0093 | 9.0 | 8820 | 0.0015 | 1.0 | 1.0 | 1.0 |
| 0.0123 | 10.0 | 9800 | 0.0164 | 0.9975 | 0.9982 | 0.9979 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "language": ["en"], "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv1_data_aug_mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}, {"type": "f1", "value": 1.0, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,225 |
hopkins/eng-deu-trial2
|
hopkins
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-05T14:59:24Z |
2023-08-06T13:14:51+00:00
| 21 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-deu-trial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial2
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset (an inference sketch follows the results below).
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
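The card itself ships no usage snippet; assuming this checkpoint keeps the standard mBART-50 many-to-many interface of its base model, English→German inference would look roughly like this sketch:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/eng-deu-trial2"  # this repository
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # English source
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"],  # force German output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```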
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial2
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-deu-trial2", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 42,227 |
ve88ifz2/bge-base-en-v1.5-klej-dyk-v0.2
|
ve88ifz2
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dataset_size:1K<n<10K",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-19T13:57:43Z |
2024-06-19T14:00:30+00:00
| 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:1K<n<10K
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: USS Conyngham (DD-58)
sentences:
- '"w jakich patrolach uczestniczył USS ""Conyngham"" (DD-58)?"'
- Jest ona najstarszą skoczkinią w kadrze norweskiej.
- kto uczył malarstwa olimpijczyka Bronisława Czecha?
- source_sentence: Danae (obraz Tycjana)
sentences:
- jakie różnice występują pomiędzy kolejnymi wersjami obrazu Tycjana Danae?
- z czego wykonana jest rzeźba Robotnik i kołchoźnica?
- z jakiego powodu zwołano synod w Whitby?
- source_sentence: dlaczego zapominamy?
sentences:
- Zamek w Haapsalu
- kto był tłumaczem języka angielskiego u Mao Zedonga?
- Najstarszy z trzech synów Hong Xiuquana; jego matką była Lai Lianying.
- source_sentence: kim był Steve Yzerman?
sentences:
- która hala ma najmniejszą widownię w NHL?
- za co krytykowany był papieski wykład ratyzboński?
- ' W 1867 oddano do użytku Kolej Warszawsko-Terespolską (całą linię).'
- source_sentence: Herkules na rozstajach
sentences:
- jak zinterpretować wymowę obrazu Herkules na rozstajach?
- Dowódcą grupy był Wiaczesław Razumowicz ps. „Chmara”.
- z jakiego powodu zwołano synod w Whitby?
model-index:
- name: bge-base-en-v1.5-klej-dyk
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.17307692307692307
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46153846153846156
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6225961538461539
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7355769230769231
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.17307692307692307
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15384615384615385
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12451923076923076
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0735576923076923
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17307692307692307
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.46153846153846156
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6225961538461539
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7355769230769231
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4433646681639308
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.35053323412698395
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3573926265146405
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.16826923076923078
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4519230769230769
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6009615384615384
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7091346153846154
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.16826923076923078
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15064102564102563
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1201923076923077
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07091346153846154
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16826923076923078
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4519230769230769
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6009615384615384
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7091346153846154
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.42955891948336516
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3405992445054941
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3484580834493777
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.19230769230769232
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4543269230769231
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5913461538461539
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6899038461538461
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.19230769230769232
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15144230769230768
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11826923076923078
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0689903846153846
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19230769230769232
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4543269230769231
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5913461538461539
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6899038461538461
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4311008111471328
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3488247863247859
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3560982492053804
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.16346153846153846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.41586538461538464
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5168269230769231
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5985576923076923
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.16346153846153846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.13862179487179488
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10336538461538461
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.059855769230769226
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16346153846153846
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.41586538461538464
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5168269230769231
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5985576923076923
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.37641559536404157
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3052140567765567
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3151839890893904
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.1658653846153846
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.35096153846153844
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.43990384615384615
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5288461538461539
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1658653846153846
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.11698717948717949
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08798076923076924
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.052884615384615384
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1658653846153846
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.35096153846153844
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.43990384615384615
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5288461538461539
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.33823482580826353
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.27800194597069605
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2876731521968676
name: Cosine Map@100
---
# bge-base-en-v1.5-klej-dyk
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
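Because the final `Normalize()` module projects every embedding to unit length, cosine similarity and dot-product scoring are equivalent for this model.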
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ve88ifz2/bge-base-en-v1.5-klej-dyk")
# Run inference
sentences = [
'Herkules na rozstajach',
'jak zinterpretować wymowę obrazu Herkules na rozstajach?',
'Dowódcą grupy był Wiaczesław Razumowicz ps. „Chmara”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1731 |
| cosine_accuracy@3 | 0.4615 |
| cosine_accuracy@5 | 0.6226 |
| cosine_accuracy@10 | 0.7356 |
| cosine_precision@1 | 0.1731 |
| cosine_precision@3 | 0.1538 |
| cosine_precision@5 | 0.1245 |
| cosine_precision@10 | 0.0736 |
| cosine_recall@1 | 0.1731 |
| cosine_recall@3 | 0.4615 |
| cosine_recall@5 | 0.6226 |
| cosine_recall@10 | 0.7356 |
| cosine_ndcg@10 | 0.4434 |
| cosine_mrr@10 | 0.3505 |
| **cosine_map@100** | **0.3574** |
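Each `dim_*` block in this section is the same evaluation rerun with embeddings truncated to that Matryoshka dimension. A minimal sketch of how such a run is set up, with toy data standing in for the actual klej-dyk evaluation split (which is not bundled with this card):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ve88ifz2/bge-base-en-v1.5-klej-dyk")

# Toy query/corpus pairs standing in for the real evaluation split.
queries = {"q1": "jak zinterpretować wymowę obrazu Herkules na rozstajach?"}
corpus = {"d1": "Herkules na rozstajach", "d2": "Zamek w Haapsalu"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
    truncate_dim=768,  # evaluate at the matching Matryoshka dimension
)
print(evaluator(model))
```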
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1683 |
| cosine_accuracy@3 | 0.4519 |
| cosine_accuracy@5 | 0.601 |
| cosine_accuracy@10 | 0.7091 |
| cosine_precision@1 | 0.1683 |
| cosine_precision@3 | 0.1506 |
| cosine_precision@5 | 0.1202 |
| cosine_precision@10 | 0.0709 |
| cosine_recall@1 | 0.1683 |
| cosine_recall@3 | 0.4519 |
| cosine_recall@5 | 0.601 |
| cosine_recall@10 | 0.7091 |
| cosine_ndcg@10 | 0.4296 |
| cosine_mrr@10 | 0.3406 |
| **cosine_map@100** | **0.3485** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1923 |
| cosine_accuracy@3 | 0.4543 |
| cosine_accuracy@5 | 0.5913 |
| cosine_accuracy@10 | 0.6899 |
| cosine_precision@1 | 0.1923 |
| cosine_precision@3 | 0.1514 |
| cosine_precision@5 | 0.1183 |
| cosine_precision@10 | 0.069 |
| cosine_recall@1 | 0.1923 |
| cosine_recall@3 | 0.4543 |
| cosine_recall@5 | 0.5913 |
| cosine_recall@10 | 0.6899 |
| cosine_ndcg@10 | 0.4311 |
| cosine_mrr@10 | 0.3488 |
| **cosine_map@100** | **0.3561** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1635 |
| cosine_accuracy@3 | 0.4159 |
| cosine_accuracy@5 | 0.5168 |
| cosine_accuracy@10 | 0.5986 |
| cosine_precision@1 | 0.1635 |
| cosine_precision@3 | 0.1386 |
| cosine_precision@5 | 0.1034 |
| cosine_precision@10 | 0.0599 |
| cosine_recall@1 | 0.1635 |
| cosine_recall@3 | 0.4159 |
| cosine_recall@5 | 0.5168 |
| cosine_recall@10 | 0.5986 |
| cosine_ndcg@10 | 0.3764 |
| cosine_mrr@10 | 0.3052 |
| **cosine_map@100** | **0.3152** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1659 |
| cosine_accuracy@3 | 0.351 |
| cosine_accuracy@5 | 0.4399 |
| cosine_accuracy@10 | 0.5288 |
| cosine_precision@1 | 0.1659 |
| cosine_precision@3 | 0.117 |
| cosine_precision@5 | 0.088 |
| cosine_precision@10 | 0.0529 |
| cosine_recall@1 | 0.1659 |
| cosine_recall@3 | 0.351 |
| cosine_recall@5 | 0.4399 |
| cosine_recall@10 | 0.5288 |
| cosine_ndcg@10 | 0.3382 |
| cosine_mrr@10 | 0.278 |
| **cosine_map@100** | **0.2877** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,738 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 90.01 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.82 tokens</li><li>max: 76 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
| <code>Londyńska premiera w Ambassadors Theatre na londyńskim West Endzie miała miejsce 25 listopada 1952 roku, a przedstawione grane jest do dziś (od 1974 r.) w sąsiednim St Martin's Theatre. W Polsce była wystawiana m.in. w Teatrze Nowym w Zabrzu.</code> | <code>w którym londyńskim muzeum wystawiana była instalacja My Bed?</code> |
| <code>Theridion grallator osiąga długość 5 mm. U niektórych postaci na żółtym odwłoku występuje wzór przypominający uśmiechniętą lub śmiejącą się twarz klowna.</code> | <code>które pająki noszą na grzbiecie wzór przypominający uśmiechniętego klauna?</code> |
| <code>W 1998 w wyniku sporów o wytyczenie granicy między dwoma państwami wybuchła wojna erytrejsko-etiopska. Zakończyła się porozumieniem zawartym w Algierze 12 grudnia 2000. Od tego czasu strefa graniczna jest patrolowana przez siły pokojowe ONZ.</code> | <code>jakie były skutki wojny erytrejsko-etiopskiej?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
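A minimal sketch of constructing this loss with the sentence-transformers v3 API, wrapping the ranking loss in `MatryoshkaLoss` with the dimensions listed in the JSON above:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # the base model being fine-tuned

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on every dimension at each step
)
```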
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
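The non-default values above map directly onto `SentenceTransformerTrainingArguments`; a sketch follows, with a hypothetical output directory:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5-klej-dyk",  # hypothetical output path
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```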
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0684 | 1 | 7.2706 | - | - | - | - | - |
| 0.1368 | 2 | 8.2776 | - | - | - | - | - |
| 0.2051 | 3 | 7.1399 | - | - | - | - | - |
| 0.2735 | 4 | 6.6905 | - | - | - | - | - |
| 0.3419 | 5 | 6.735 | - | - | - | - | - |
| 0.4103 | 6 | 7.0537 | - | - | - | - | - |
| 0.4786 | 7 | 6.871 | - | - | - | - | - |
| 0.5470 | 8 | 6.7277 | - | - | - | - | - |
| 0.6154 | 9 | 5.9853 | - | - | - | - | - |
| 0.6838 | 10 | 6.0518 | - | - | - | - | - |
| 0.7521 | 11 | 5.8291 | - | - | - | - | - |
| 0.8205 | 12 | 5.0064 | - | - | - | - | - |
| 0.8889 | 13 | 4.8572 | - | - | - | - | - |
| 0.9573 | 14 | 5.1899 | 0.2812 | 0.3335 | 0.3486 | 0.2115 | 0.3639 |
| 1.0256 | 15 | 4.2996 | - | - | - | - | - |
| 1.0940 | 16 | 4.1475 | - | - | - | - | - |
| 1.1624 | 17 | 4.6174 | - | - | - | - | - |
| 1.2308 | 18 | 4.394 | - | - | - | - | - |
| 1.2991 | 19 | 4.0255 | - | - | - | - | - |
| 1.3675 | 20 | 3.9722 | - | - | - | - | - |
| 1.4359 | 21 | 3.9509 | - | - | - | - | - |
| 1.5043 | 22 | 3.7674 | - | - | - | - | - |
| 1.5726 | 23 | 3.7572 | - | - | - | - | - |
| 1.6410 | 24 | 3.9463 | - | - | - | - | - |
| 1.7094 | 25 | 3.7151 | - | - | - | - | - |
| 1.7778 | 26 | 3.7771 | - | - | - | - | - |
| 1.8462 | 27 | 3.5228 | - | - | - | - | - |
| 1.9145 | 28 | 2.7906 | - | - | - | - | - |
| 1.9829 | 29 | 3.4555 | 0.3164 | 0.3529 | 0.3641 | 0.2636 | 0.3681 |
| 2.0513 | 30 | 2.737 | - | - | - | - | - |
| 2.1197 | 31 | 3.1976 | - | - | - | - | - |
| 2.1880 | 32 | 3.1363 | - | - | - | - | - |
| 2.2564 | 33 | 2.9706 | - | - | - | - | - |
| 2.3248 | 34 | 2.9629 | - | - | - | - | - |
| 2.3932 | 35 | 2.7226 | - | - | - | - | - |
| 2.4615 | 36 | 2.4378 | - | - | - | - | - |
| 2.5299 | 37 | 2.7201 | - | - | - | - | - |
| 2.5983 | 38 | 2.6802 | - | - | - | - | - |
| 2.6667 | 39 | 3.1613 | - | - | - | - | - |
| 2.7350 | 40 | 2.9344 | - | - | - | - | - |
| 2.8034 | 41 | 2.5254 | - | - | - | - | - |
| 2.8718 | 42 | 2.5617 | - | - | - | - | - |
| 2.9402 | 43 | 2.459 | 0.3197 | 0.3571 | 0.3640 | 0.2739 | 0.3733 |
| 3.0085 | 44 | 2.3785 | - | - | - | - | - |
| 3.0769 | 45 | 1.9408 | - | - | - | - | - |
| 3.1453 | 46 | 2.7095 | - | - | - | - | - |
| 3.2137 | 47 | 2.4774 | - | - | - | - | - |
| 3.2821 | 48 | 2.2178 | - | - | - | - | - |
| 3.3504 | 49 | 2.0884 | - | - | - | - | - |
| 3.4188 | 50 | 2.1044 | - | - | - | - | - |
| 3.4872 | 51 | 2.1504 | - | - | - | - | - |
| 3.5556 | 52 | 2.1177 | - | - | - | - | - |
| 3.6239 | 53 | 2.2283 | - | - | - | - | - |
| 3.6923 | 54 | 2.3964 | - | - | - | - | - |
| 3.7607 | 55 | 2.0972 | - | - | - | - | - |
| 3.8291 | 56 | 2.0961 | - | - | - | - | - |
| 3.8974 | 57 | 1.783 | - | - | - | - | - |
| **3.9658** | **58** | **2.1031** | **0.3246** | **0.3533** | **0.3603** | **0.2829** | **0.3687** |
| 4.0342 | 59 | 1.6699 | - | - | - | - | - |
| 4.1026 | 60 | 1.6675 | - | - | - | - | - |
| 4.1709 | 61 | 2.1672 | - | - | - | - | - |
| 4.2393 | 62 | 1.8881 | - | - | - | - | - |
| 4.3077 | 63 | 1.701 | - | - | - | - | - |
| 4.3761 | 64 | 1.9154 | - | - | - | - | - |
| 4.4444 | 65 | 1.4549 | - | - | - | - | - |
| 4.5128 | 66 | 1.5444 | - | - | - | - | - |
| 4.5812 | 67 | 1.8352 | - | - | - | - | - |
| 4.6496 | 68 | 1.7908 | - | - | - | - | - |
| 4.7179 | 69 | 1.6876 | - | - | - | - | - |
| 4.7863 | 70 | 1.7366 | - | - | - | - | - |
| 4.8547 | 71 | 1.8689 | - | - | - | - | - |
| 4.9231 | 72 | 1.4676 | - | - | - | - | - |
| 4.9915 | 73 | 1.5045 | 0.3170 | 0.3538 | 0.3606 | 0.2829 | 0.3675 |
| 5.0598 | 74 | 1.2155 | - | - | - | - | - |
| 5.1282 | 75 | 1.4365 | - | - | - | - | - |
| 5.1966 | 76 | 1.7451 | - | - | - | - | - |
| 5.2650 | 77 | 1.4537 | - | - | - | - | - |
| 5.3333 | 78 | 1.3813 | - | - | - | - | - |
| 5.4017 | 79 | 1.4035 | - | - | - | - | - |
| 5.4701 | 80 | 1.3912 | - | - | - | - | - |
| 5.5385 | 81 | 1.3286 | - | - | - | - | - |
| 5.6068 | 82 | 1.5153 | - | - | - | - | - |
| 5.6752 | 83 | 1.6745 | - | - | - | - | - |
| 5.7436 | 84 | 1.4323 | - | - | - | - | - |
| 5.8120 | 85 | 1.5299 | - | - | - | - | - |
| 5.8803 | 86 | 1.488 | - | - | - | - | - |
| 5.9487 | 87 | 1.5195 | 0.3206 | 0.3556 | 0.3530 | 0.2878 | 0.3605 |
| 6.0171 | 88 | 1.2999 | - | - | - | - | - |
| 6.0855 | 89 | 1.1511 | - | - | - | - | - |
| 6.1538 | 90 | 1.552 | - | - | - | - | - |
| 6.2222 | 91 | 1.35 | - | - | - | - | - |
| 6.2906 | 92 | 1.218 | - | - | - | - | - |
| 6.3590 | 93 | 1.1712 | - | - | - | - | - |
| 6.4274 | 94 | 1.3381 | - | - | - | - | - |
| 6.4957 | 95 | 1.1716 | - | - | - | - | - |
| 6.5641 | 96 | 1.2117 | - | - | - | - | - |
| 6.6325 | 97 | 1.5349 | - | - | - | - | - |
| 6.7009 | 98 | 1.4564 | - | - | - | - | - |
| 6.7692 | 99 | 1.3541 | - | - | - | - | - |
| 6.8376 | 100 | 1.2468 | - | - | - | - | - |
| 6.9060 | 101 | 1.1519 | - | - | - | - | - |
| 6.9744 | 102 | 1.2421 | 0.3150 | 0.3555 | 0.3501 | 0.2858 | 0.3575 |
| 7.0427 | 103 | 1.0096 | - | - | - | - | - |
| 7.1111 | 104 | 1.1405 | - | - | - | - | - |
| 7.1795 | 105 | 1.2958 | - | - | - | - | - |
| 7.2479 | 106 | 1.35 | - | - | - | - | - |
| 7.3162 | 107 | 1.1291 | - | - | - | - | - |
| 7.3846 | 108 | 0.9968 | - | - | - | - | - |
| 7.4530 | 109 | 1.0454 | - | - | - | - | - |
| 7.5214 | 110 | 1.102 | - | - | - | - | - |
| 7.5897 | 111 | 1.1328 | - | - | - | - | - |
| 7.6581 | 112 | 1.5988 | - | - | - | - | - |
| 7.7265 | 113 | 1.2992 | - | - | - | - | - |
| 7.7949 | 114 | 1.2572 | - | - | - | - | - |
| 7.8632 | 115 | 1.1414 | - | - | - | - | - |
| 7.9316 | 116 | 1.1432 | - | - | - | - | - |
| 8.0 | 117 | 1.1181 | 0.3154 | 0.3545 | 0.3509 | 0.2884 | 0.3578 |
| 8.0684 | 118 | 0.9365 | - | - | - | - | - |
| 8.1368 | 119 | 1.3286 | - | - | - | - | - |
| 8.2051 | 120 | 1.3711 | - | - | - | - | - |
| 8.2735 | 121 | 1.2001 | - | - | - | - | - |
| 8.3419 | 122 | 1.165 | - | - | - | - | - |
| 8.4103 | 123 | 1.0575 | - | - | - | - | - |
| 8.4786 | 124 | 1.105 | - | - | - | - | - |
| 8.5470 | 125 | 1.077 | - | - | - | - | - |
| 8.6154 | 126 | 1.2217 | - | - | - | - | - |
| 8.6838 | 127 | 1.3254 | - | - | - | - | - |
| 8.7521 | 128 | 1.2165 | - | - | - | - | - |
| 8.8205 | 129 | 1.3021 | - | - | - | - | - |
| 8.8889 | 130 | 1.0927 | - | - | - | - | - |
| 8.9573 | 131 | 1.3961 | 0.3150 | 0.3540 | 0.3490 | 0.2882 | 0.3588 |
| 9.0256 | 132 | 1.0779 | - | - | - | - | - |
| 9.0940 | 133 | 0.901 | - | - | - | - | - |
| 9.1624 | 134 | 1.313 | - | - | - | - | - |
| 9.2308 | 135 | 1.1409 | - | - | - | - | - |
| 9.2991 | 136 | 1.1635 | - | - | - | - | - |
| 9.3675 | 137 | 1.0244 | - | - | - | - | - |
| 9.4359 | 138 | 1.0576 | - | - | - | - | - |
| 9.5043 | 139 | 1.0101 | - | - | - | - | - |
| 9.5726 | 140 | 1.1516 | 0.3152 | 0.3561 | 0.3485 | 0.2877 | 0.3574 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.12.2
- Sentence Transformers: 3.0.0
- Transformers: 4.41.2
- PyTorch: 2.3.1
- Accelerate: 0.27.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# bge-base-en-v1.5-klej-dyk
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ve88ifz2/bge-base-en-v1.5-klej-dyk")
# Run inference
sentences = [
'Herkules na rozstajach',
'jak zinterpretować wymowę obrazu Herkules na rozstajach?',
'Dowódcą grupy był Wiaczesław Razumowicz ps. „Chmara”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1731 |
| cosine_accuracy@3 | 0.4615 |
| cosine_accuracy@5 | 0.6226 |
| cosine_accuracy@10 | 0.7356 |
| cosine_precision@1 | 0.1731 |
| cosine_precision@3 | 0.1538 |
| cosine_precision@5 | 0.1245 |
| cosine_precision@10 | 0.0736 |
| cosine_recall@1 | 0.1731 |
| cosine_recall@3 | 0.4615 |
| cosine_recall@5 | 0.6226 |
| cosine_recall@10 | 0.7356 |
| cosine_ndcg@10 | 0.4434 |
| cosine_mrr@10 | 0.3505 |
| **cosine_map@100** | **0.3574** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1683 |
| cosine_accuracy@3 | 0.4519 |
| cosine_accuracy@5 | 0.601 |
| cosine_accuracy@10 | 0.7091 |
| cosine_precision@1 | 0.1683 |
| cosine_precision@3 | 0.1506 |
| cosine_precision@5 | 0.1202 |
| cosine_precision@10 | 0.0709 |
| cosine_recall@1 | 0.1683 |
| cosine_recall@3 | 0.4519 |
| cosine_recall@5 | 0.601 |
| cosine_recall@10 | 0.7091 |
| cosine_ndcg@10 | 0.4296 |
| cosine_mrr@10 | 0.3406 |
| **cosine_map@100** | **0.3485** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1923 |
| cosine_accuracy@3 | 0.4543 |
| cosine_accuracy@5 | 0.5913 |
| cosine_accuracy@10 | 0.6899 |
| cosine_precision@1 | 0.1923 |
| cosine_precision@3 | 0.1514 |
| cosine_precision@5 | 0.1183 |
| cosine_precision@10 | 0.069 |
| cosine_recall@1 | 0.1923 |
| cosine_recall@3 | 0.4543 |
| cosine_recall@5 | 0.5913 |
| cosine_recall@10 | 0.6899 |
| cosine_ndcg@10 | 0.4311 |
| cosine_mrr@10 | 0.3488 |
| **cosine_map@100** | **0.3561** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1635 |
| cosine_accuracy@3 | 0.4159 |
| cosine_accuracy@5 | 0.5168 |
| cosine_accuracy@10 | 0.5986 |
| cosine_precision@1 | 0.1635 |
| cosine_precision@3 | 0.1386 |
| cosine_precision@5 | 0.1034 |
| cosine_precision@10 | 0.0599 |
| cosine_recall@1 | 0.1635 |
| cosine_recall@3 | 0.4159 |
| cosine_recall@5 | 0.5168 |
| cosine_recall@10 | 0.5986 |
| cosine_ndcg@10 | 0.3764 |
| cosine_mrr@10 | 0.3052 |
| **cosine_map@100** | **0.3152** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1659 |
| cosine_accuracy@3 | 0.351 |
| cosine_accuracy@5 | 0.4399 |
| cosine_accuracy@10 | 0.5288 |
| cosine_precision@1 | 0.1659 |
| cosine_precision@3 | 0.117 |
| cosine_precision@5 | 0.088 |
| cosine_precision@10 | 0.0529 |
| cosine_recall@1 | 0.1659 |
| cosine_recall@3 | 0.351 |
| cosine_recall@5 | 0.4399 |
| cosine_recall@10 | 0.5288 |
| cosine_ndcg@10 | 0.3382 |
| cosine_mrr@10 | 0.278 |
| **cosine_map@100** | **0.2877** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,738 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 90.01 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.82 tokens</li><li>max: 76 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
| <code>Londyńska premiera w Ambassadors Theatre na londyńskim West Endzie miała miejsce 25 listopada 1952 roku, a przedstawione grane jest do dziś (od 1974 r.) w sąsiednim St Martin's Theatre. W Polsce była wystawiana m.in. w Teatrze Nowym w Zabrzu.</code> | <code>w którym londyńskim muzeum wystawiana była instalacja My Bed?</code> |
| <code>Theridion grallator osiąga długość 5 mm. U niektórych postaci na żółtym odwłoku występuje wzór przypominający uśmiechniętą lub śmiejącą się twarz klowna.</code> | <code>które pająki noszą na grzbiecie wzór przypominający uśmiechniętego klauna?</code> |
| <code>W 1998 w wyniku sporów o wytyczenie granicy między dwoma państwami wybuchła wojna erytrejsko-etiopska. Zakończyła się porozumieniem zawartym w Algierze 12 grudnia 2000. Od tego czasu strefa graniczna jest patrolowana przez siły pokojowe ONZ.</code> | <code>jakie były skutki wojny erytrejsko-etiopskiej?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.0684 | 1 | 7.2706 | - | - | - | - | - |
| 0.1368 | 2 | 8.2776 | - | - | - | - | - |
| 0.2051 | 3 | 7.1399 | - | - | - | - | - |
| 0.2735 | 4 | 6.6905 | - | - | - | - | - |
| 0.3419 | 5 | 6.735 | - | - | - | - | - |
| 0.4103 | 6 | 7.0537 | - | - | - | - | - |
| 0.4786 | 7 | 6.871 | - | - | - | - | - |
| 0.5470 | 8 | 6.7277 | - | - | - | - | - |
| 0.6154 | 9 | 5.9853 | - | - | - | - | - |
| 0.6838 | 10 | 6.0518 | - | - | - | - | - |
| 0.7521 | 11 | 5.8291 | - | - | - | - | - |
| 0.8205 | 12 | 5.0064 | - | - | - | - | - |
| 0.8889 | 13 | 4.8572 | - | - | - | - | - |
| 0.9573 | 14 | 5.1899 | 0.2812 | 0.3335 | 0.3486 | 0.2115 | 0.3639 |
| 1.0256 | 15 | 4.2996 | - | - | - | - | - |
| 1.0940 | 16 | 4.1475 | - | - | - | - | - |
| 1.1624 | 17 | 4.6174 | - | - | - | - | - |
| 1.2308 | 18 | 4.394 | - | - | - | - | - |
| 1.2991 | 19 | 4.0255 | - | - | - | - | - |
| 1.3675 | 20 | 3.9722 | - | - | - | - | - |
| 1.4359 | 21 | 3.9509 | - | - | - | - | - |
| 1.5043 | 22 | 3.7674 | - | - | - | - | - |
| 1.5726 | 23 | 3.7572 | - | - | - | - | - |
| 1.6410 | 24 | 3.9463 | - | - | - | - | - |
| 1.7094 | 25 | 3.7151 | - | - | - | - | - |
| 1.7778 | 26 | 3.7771 | - | - | - | - | - |
| 1.8462 | 27 | 3.5228 | - | - | - | - | - |
| 1.9145 | 28 | 2.7906 | - | - | - | - | - |
| 1.9829 | 29 | 3.4555 | 0.3164 | 0.3529 | 0.3641 | 0.2636 | 0.3681 |
| 2.0513 | 30 | 2.737 | - | - | - | - | - |
| 2.1197 | 31 | 3.1976 | - | - | - | - | - |
| 2.1880 | 32 | 3.1363 | - | - | - | - | - |
| 2.2564 | 33 | 2.9706 | - | - | - | - | - |
| 2.3248 | 34 | 2.9629 | - | - | - | - | - |
| 2.3932 | 35 | 2.7226 | - | - | - | - | - |
| 2.4615 | 36 | 2.4378 | - | - | - | - | - |
| 2.5299 | 37 | 2.7201 | - | - | - | - | - |
| 2.5983 | 38 | 2.6802 | - | - | - | - | - |
| 2.6667 | 39 | 3.1613 | - | - | - | - | - |
| 2.7350 | 40 | 2.9344 | - | - | - | - | - |
| 2.8034 | 41 | 2.5254 | - | - | - | - | - |
| 2.8718 | 42 | 2.5617 | - | - | - | - | - |
| 2.9402 | 43 | 2.459 | 0.3197 | 0.3571 | 0.3640 | 0.2739 | 0.3733 |
| 3.0085 | 44 | 2.3785 | - | - | - | - | - |
| 3.0769 | 45 | 1.9408 | - | - | - | - | - |
| 3.1453 | 46 | 2.7095 | - | - | - | - | - |
| 3.2137 | 47 | 2.4774 | - | - | - | - | - |
| 3.2821 | 48 | 2.2178 | - | - | - | - | - |
| 3.3504 | 49 | 2.0884 | - | - | - | - | - |
| 3.4188 | 50 | 2.1044 | - | - | - | - | - |
| 3.4872 | 51 | 2.1504 | - | - | - | - | - |
| 3.5556 | 52 | 2.1177 | - | - | - | - | - |
| 3.6239 | 53 | 2.2283 | - | - | - | - | - |
| 3.6923 | 54 | 2.3964 | - | - | - | - | - |
| 3.7607 | 55 | 2.0972 | - | - | - | - | - |
| 3.8291 | 56 | 2.0961 | - | - | - | - | - |
| 3.8974 | 57 | 1.783 | - | - | - | - | - |
| **3.9658** | **58** | **2.1031** | **0.3246** | **0.3533** | **0.3603** | **0.2829** | **0.3687** |
| 4.0342 | 59 | 1.6699 | - | - | - | - | - |
| 4.1026 | 60 | 1.6675 | - | - | - | - | - |
| 4.1709 | 61 | 2.1672 | - | - | - | - | - |
| 4.2393 | 62 | 1.8881 | - | - | - | - | - |
| 4.3077 | 63 | 1.701 | - | - | - | - | - |
| 4.3761 | 64 | 1.9154 | - | - | - | - | - |
| 4.4444 | 65 | 1.4549 | - | - | - | - | - |
| 4.5128 | 66 | 1.5444 | - | - | - | - | - |
| 4.5812 | 67 | 1.8352 | - | - | - | - | - |
| 4.6496 | 68 | 1.7908 | - | - | - | - | - |
| 4.7179 | 69 | 1.6876 | - | - | - | - | - |
| 4.7863 | 70 | 1.7366 | - | - | - | - | - |
| 4.8547 | 71 | 1.8689 | - | - | - | - | - |
| 4.9231 | 72 | 1.4676 | - | - | - | - | - |
| 4.9915 | 73 | 1.5045 | 0.3170 | 0.3538 | 0.3606 | 0.2829 | 0.3675 |
| 5.0598 | 74 | 1.2155 | - | - | - | - | - |
| 5.1282 | 75 | 1.4365 | - | - | - | - | - |
| 5.1966 | 76 | 1.7451 | - | - | - | - | - |
| 5.2650 | 77 | 1.4537 | - | - | - | - | - |
| 5.3333 | 78 | 1.3813 | - | - | - | - | - |
| 5.4017 | 79 | 1.4035 | - | - | - | - | - |
| 5.4701 | 80 | 1.3912 | - | - | - | - | - |
| 5.5385 | 81 | 1.3286 | - | - | - | - | - |
| 5.6068 | 82 | 1.5153 | - | - | - | - | - |
| 5.6752 | 83 | 1.6745 | - | - | - | - | - |
| 5.7436 | 84 | 1.4323 | - | - | - | - | - |
| 5.8120 | 85 | 1.5299 | - | - | - | - | - |
| 5.8803 | 86 | 1.488 | - | - | - | - | - |
| 5.9487 | 87 | 1.5195 | 0.3206 | 0.3556 | 0.3530 | 0.2878 | 0.3605 |
| 6.0171 | 88 | 1.2999 | - | - | - | - | - |
| 6.0855 | 89 | 1.1511 | - | - | - | - | - |
| 6.1538 | 90 | 1.552 | - | - | - | - | - |
| 6.2222 | 91 | 1.35 | - | - | - | - | - |
| 6.2906 | 92 | 1.218 | - | - | - | - | - |
| 6.3590 | 93 | 1.1712 | - | - | - | - | - |
| 6.4274 | 94 | 1.3381 | - | - | - | - | - |
| 6.4957 | 95 | 1.1716 | - | - | - | - | - |
| 6.5641 | 96 | 1.2117 | - | - | - | - | - |
| 6.6325 | 97 | 1.5349 | - | - | - | - | - |
| 6.7009 | 98 | 1.4564 | - | - | - | - | - |
| 6.7692 | 99 | 1.3541 | - | - | - | - | - |
| 6.8376 | 100 | 1.2468 | - | - | - | - | - |
| 6.9060 | 101 | 1.1519 | - | - | - | - | - |
| 6.9744 | 102 | 1.2421 | 0.3150 | 0.3555 | 0.3501 | 0.2858 | 0.3575 |
| 7.0427 | 103 | 1.0096 | - | - | - | - | - |
| 7.1111 | 104 | 1.1405 | - | - | - | - | - |
| 7.1795 | 105 | 1.2958 | - | - | - | - | - |
| 7.2479 | 106 | 1.35 | - | - | - | - | - |
| 7.3162 | 107 | 1.1291 | - | - | - | - | - |
| 7.3846 | 108 | 0.9968 | - | - | - | - | - |
| 7.4530 | 109 | 1.0454 | - | - | - | - | - |
| 7.5214 | 110 | 1.102 | - | - | - | - | - |
| 7.5897 | 111 | 1.1328 | - | - | - | - | - |
| 7.6581 | 112 | 1.5988 | - | - | - | - | - |
| 7.7265 | 113 | 1.2992 | - | - | - | - | - |
| 7.7949 | 114 | 1.2572 | - | - | - | - | - |
| 7.8632 | 115 | 1.1414 | - | - | - | - | - |
| 7.9316 | 116 | 1.1432 | - | - | - | - | - |
| 8.0 | 117 | 1.1181 | 0.3154 | 0.3545 | 0.3509 | 0.2884 | 0.3578 |
| 8.0684 | 118 | 0.9365 | - | - | - | - | - |
| 8.1368 | 119 | 1.3286 | - | - | - | - | - |
| 8.2051 | 120 | 1.3711 | - | - | - | - | - |
| 8.2735 | 121 | 1.2001 | - | - | - | - | - |
| 8.3419 | 122 | 1.165 | - | - | - | - | - |
| 8.4103 | 123 | 1.0575 | - | - | - | - | - |
| 8.4786 | 124 | 1.105 | - | - | - | - | - |
| 8.5470 | 125 | 1.077 | - | - | - | - | - |
| 8.6154 | 126 | 1.2217 | - | - | - | - | - |
| 8.6838 | 127 | 1.3254 | - | - | - | - | - |
| 8.7521 | 128 | 1.2165 | - | - | - | - | - |
| 8.8205 | 129 | 1.3021 | - | - | - | - | - |
| 8.8889 | 130 | 1.0927 | - | - | - | - | - |
| 8.9573 | 131 | 1.3961 | 0.3150 | 0.3540 | 0.3490 | 0.2882 | 0.3588 |
| 9.0256 | 132 | 1.0779 | - | - | - | - | - |
| 9.0940 | 133 | 0.901 | - | - | - | - | - |
| 9.1624 | 134 | 1.313 | - | - | - | - | - |
| 9.2308 | 135 | 1.1409 | - | - | - | - | - |
| 9.2991 | 136 | 1.1635 | - | - | - | - | - |
| 9.3675 | 137 | 1.0244 | - | - | - | - | - |
| 9.4359 | 138 | 1.0576 | - | - | - | - | - |
| 9.5043 | 139 | 1.0101 | - | - | - | - | - |
| 9.5726 | 140 | 1.1516 | 0.3152 | 0.3561 | 0.3485 | 0.2877 | 0.3574 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.12.2
- Sentence Transformers: 3.0.0
- Transformers: 4.41.2
- PyTorch: 2.3.1
- Accelerate: 0.27.2
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-base-en-v1.5", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "dataset_size:1K<n<10K", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "USS Conyngham (DD-58)", "sentences": ["\"w jakich patrolach uczestniczył USS \"\"Conyngham\"\" (DD-58)?\"", "Jest ona najstarszą skoczkinią w kadrze norweskiej.", "kto uczył malarstwa olimpijczyka Bronisława Czecha?"]}, {"source_sentence": "Danae (obraz Tycjana)", "sentences": ["jakie różnice występują pomiędzy kolejnymi wersjami obrazu Tycjana Danae?", "z czego wykonana jest rzeźba Robotnik i kołchoźnica?", "z jakiego powodu zwołano synod w Whitby?"]}, {"source_sentence": "dlaczego zapominamy?", "sentences": ["Zamek w Haapsalu", "kto był tłumaczem języka angielskiego u Mao Zedonga?", "Najstarszy z trzech synów Hong Xiuquana; jego matką była Lai Lianying."]}, {"source_sentence": "kim był Steve Yzerman?", "sentences": ["która hala ma najmniejszą widownię w NHL?", "za co krytykowany był papieski wykład ratyzboński?", " W 1867 oddano do użytku Kolej Warszawsko-Terespolską (całą linię)."]}, {"source_sentence": "Herkules na rozstajach", "sentences": ["jak zinterpretować wymowę obrazu Herkules na rozstajach?", "Dowódcą grupy był Wiaczesław Razumowicz ps. „Chmara”.", "z jakiego powodu zwołano synod w Whitby?"]}], "model-index": [{"name": "bge-base-en-v1.5-klej-dyk", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 768", "type": "dim_768"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.17307692307692307, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.46153846153846156, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.6225961538461539, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7355769230769231, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.17307692307692307, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15384615384615385, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.12451923076923076, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0735576923076923, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.17307692307692307, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.46153846153846156, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.6225961538461539, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.7355769230769231, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4433646681639308, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.35053323412698395, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3573926265146405, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 512", "type": "dim_512"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.16826923076923078, "name": "Cosine 
Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4519230769230769, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.6009615384615384, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7091346153846154, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.16826923076923078, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15064102564102563, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.1201923076923077, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.07091346153846154, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.16826923076923078, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4519230769230769, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.6009615384615384, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.7091346153846154, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.42955891948336516, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3405992445054941, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3484580834493777, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 256", "type": "dim_256"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.19230769230769232, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4543269230769231, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.5913461538461539, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.6899038461538461, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.19230769230769232, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15144230769230768, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.11826923076923078, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0689903846153846, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.19230769230769232, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4543269230769231, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.5913461538461539, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.6899038461538461, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4311008111471328, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3488247863247859, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3560982492053804, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 128", "type": "dim_128"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.16346153846153846, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.41586538461538464, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.5168269230769231, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.5985576923076923, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.16346153846153846, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.13862179487179488, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.10336538461538461, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.059855769230769226, "name": 
"Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.16346153846153846, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.41586538461538464, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.5168269230769231, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.5985576923076923, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.37641559536404157, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.3052140567765567, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.3151839890893904, "name": "Cosine Map@100"}]}, {"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "dim 64", "type": "dim_64"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.1658653846153846, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.35096153846153844, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.43990384615384615, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.5288461538461539, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.1658653846153846, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.11698717948717949, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.08798076923076924, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.052884615384615384, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.1658653846153846, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.35096153846153844, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.43990384615384615, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.5288461538461539, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.33823482580826353, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.27800194597069605, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.2876731521968676, "name": "Cosine Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,228 |
nikcheerla/amd-full-phonetree-v1
|
nikcheerla
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"region:us"
] | 2024-01-08T19:49:15Z |
2024-01-08T19:49:30+00:00
| 4,822 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'I''m sorry. The person you are trying to reach has a voice mailbox that has
not been set up yet. Please try your call '
- text: 'For calling WL Gore and Associates Incorporated. Please wait '
- text: 'Hello. Please state your name after the tone, and Google Voice will try '
- text: 'Thank you for calling Stanley Black and Decker. For the company directory,
press 1. For investor relations, press 2. '
- text: 'Sorry. Chris Trent is not available. Record your message at the tone. When
you are finished, hang up or press pound for more options. '
inference: true
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| voicemail | <ul><li>'Your call has been forwarded to an automated voice messaging system. 6 '</li><li>'Please leave your message for 8083526996. '</li><li>"This is Bart Jumper. I'm sorry I missed your call. Please leave your name and number, and I'll return your call as soon as I "</li></ul> |
| phone_tree | <ul><li>'Thank you for calling Periton. A next '</li><li>'Thank you for calling Signifide. Our main number has changed. The new number is eight six six two '</li><li>'Thank you for calling Icahn Health and Fitness. If you know the extension you wish to reach, '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nikcheerla/amd-full-phonetree-v1")
# Run inference
preds = model("For calling WL Gore and Associates Incorporated. Please wait ")
```
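Class probabilities are also available from the logistic head; a minimal sketch (assuming SetFit 1.0's `predict_proba`, which returns one column per label):
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("nikcheerla/amd-full-phonetree-v1")

texts = [
    "Please leave your message after the tone.",
    "For the company directory, press 1.",
]
# One probability per class for each input text
probs = model.predict_proba(texts)
for text, p in zip(texts, probs):
    print(text, "->", p)
```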
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.7789 | 214 |
| Label | Training Sample Count |
|:-----------|:----------------------|
| phone_tree | 4979 |
| voicemail | 5519 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
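These hyperparameters map onto the `setfit` training API roughly as follows. This is a minimal sketch, assuming SetFit 1.0-style `TrainingArguments`; the CSV file names are placeholders for any dataset with `text` and `label` columns:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data files: any dataset with "text" and "label" columns works
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(32, 32),               # (embedding phase, classifier phase)
    num_epochs=(3, 3),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    num_iterations=20,
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    metric="accuracy",
)
trainer.train()
```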
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2196 | - |
| 1.0 | 13123 | 0.0001 | 0.1209 |
| **2.0** | **26246** | **0.0** | **0.1101** |
| 3.0 | 39369 | 0.0446 | 0.1108 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| voicemail | <ul><li>'Your call has been forwarded to an automated voice messaging system. 6 '</li><li>'Please leave your message for 8083526996. '</li><li>"This is Bart Jumper. I'm sorry I missed your call. Please leave your name and number, and I'll return your call as soon as I "</li></ul> |
| phone_tree | <ul><li>'Thank you for calling Periton. A next '</li><li>'Thank you for calling Signifide. Our main number has changed. The new number is eight six six two '</li><li>'Thank you for calling Icahn Health and Fitness. If you know the extension you wish to reach, '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nikcheerla/amd-full-phonetree-v1")
# Run inference
preds = model("For calling WL Gore and Associates Incorporated. Please wait ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 14.7789 | 214 |
| Label | Training Sample Count |
|:-----------|:----------------------|
| phone_tree | 4979 |
| voicemail | 5519 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2196 | - |
| 1.0 | 13123 | 0.0001 | 0.1209 |
| **2.0** | **26246** | **0.0** | **0.1101** |
| 3.0 | 39369 | 0.0446 | 0.1108 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.0.1+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "I'm sorry. The person you are trying to reach has a voice mailbox that has not been set up yet. Please try your call "}, {"text": "For calling WL Gore and Associates Incorporated. Please wait "}, {"text": "Hello. Please state your name after the tone, and Google Voice will try "}, {"text": "Thank you for calling Stanley Black and Decker. For the company directory, press 1. For investor relations, press 2. "}, {"text": "Sorry. Chris Trent is not available. Record your message at the tone. When you are finished, hang up or press pound for more options. "}], "inference": true}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,229 |
TransferGraph/uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:uygarkurt/distilbert-base-uncased-finetuned-emotion",
"base_model:adapter:uygarkurt/distilbert-base-uncased-finetuned-emotion",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T17:07:34Z |
2024-02-29T13:30:14+00:00
| 0 | 0 |
---
base_model: uygarkurt/distilbert-base-uncased-finetuned-emotion
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.6366492146596858
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [uygarkurt/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/uygarkurt/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6366
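Since the repository ships a LoRA adapter rather than full model weights, inference requires applying the adapter on top of the base model. Below is a minimal sketch; re-heading the base model with two labels and `ignore_mismatched_sizes=True` are assumptions about how the adapter was trained:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "uygarkurt/distilbert-base-uncased-finetuned-emotion"
adapter_id = "TransferGraph/uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony"

# Re-head the base model for the binary irony task, then apply the LoRA adapter
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Great, another Monday morning meeting.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```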
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4754 | None | 0 |
| 0.5937 | 0.6963 | 0 |
| 0.6199 | 0.6397 | 1 |
| 0.6220 | 0.6117 | 2 |
| 0.6408 | 0.5825 | 3 |
| 0.6251 | 0.5585 | 4 |
| 0.6283 | 0.5442 | 5 |
| 0.6398 | 0.5230 | 6 |
| 0.6366 | 0.5215 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [uygarkurt/distilbert-base-uncased-finetuned-emotion](https://huggingface.co/uygarkurt/distilbert-base-uncased-finetuned-emotion) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.6366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.4754 | None | 0 |
| 0.5937 | 0.6963 | 0 |
| 0.6199 | 0.6397 | 1 |
| 0.6220 | 0.6117 | 2 |
| 0.6408 | 0.5825 | 3 |
| 0.6251 | 0.5585 | 4 |
| 0.6283 | 0.5442 | 5 |
| 0.6398 | 0.5230 | 6 |
| 0.6366 | 0.5215 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "uygarkurt/distilbert-base-uncased-finetuned-emotion", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "uygarkurt_distilbert-base-uncased-finetuned-emotion-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "irony", "split": "validation", "args": "irony"}, "metrics": [{"type": "accuracy", "value": 0.6366492146596858, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,230 |
wisdomik/QuiltNet-B-16
|
wisdomik
|
zero-shot-image-classification
|
[
"open_clip",
"pytorch",
"clip",
"zero-shot-image-classification",
"histopathology",
"histology",
"medical",
"arxiv:2306.11207",
"license:mit",
"region:us"
] | 2023-06-19T22:20:59Z |
2023-11-07T16:39:47+00:00
| 640 | 4 |
---
license: mit
pipeline_tag: zero-shot-image-classification
tags:
- zero-shot-image-classification
- clip
- histopathology
- histology
- medical
library_tag: open_clip
widget:
- src: https://quilt1m.github.io/img/BREST092.jpg
candidate_labels: adipose tissue, debris tissue, lymphocytes tissue, mucus tissue,
smooth muscle tissue, normal colon mucosa tissue, cancer-associated stroma tissue,
colorectal adenocarcinoma epithelium tissue
example_title: Tissue phenotyping
- src: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg
candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
example_title: squamous cell carcinoma histopathology
- src: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg
candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
example_title: adenocarcinoma histopathology
---
## QuiltNet-B-16 Description
[QuiltNet-B-16](https://github.com/wisdomikezogwo/quilt1m/) is a CLIP ViT-B/16 vision-language foundation model trained on the [Quilt-1M](https://quilt1m.github.io/) dataset curated from representative histopathology videos.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
QuiltNet establishes a new state of the art across a wide range of standard datasets, and substantially outperforms prior VLP approaches:

# Citation
```bibtex
@misc{ikezogwo2023quilt1m,
title={Quilt-1M: One Million Image-Text Pairs for Histopathology},
author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro},
year={2023},
eprint={2306.11207},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
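A minimal zero-shot classification sketch with `open_clip`; loading via the `hf-hub:` prefix and the candidate labels below are assumptions for illustration:
```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:wisdomik/QuiltNet-B-16")
tokenizer = open_clip.get_tokenizer("hf-hub:wisdomik/QuiltNet-B-16")

image = preprocess(Image.open("patch.jpg")).unsqueeze(0)  # placeholder image path
labels = ["adenocarcinoma histopathology", "squamous cell carcinoma histopathology"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```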
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision histopathology models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the notice above, the Quilt-1M dataset used in training these models has additional considerations; see below.
## Training Data
This model was trained with [QUILT-1M](https://quilt1m.github.io/), an image-text dataset for histopathology.
Curated from educational videos on YouTube, QUILT-1M contributes the largest dataset for vision-language modeling in histopathology.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale histopathology datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes.
# Evaluation
Evaluation was done with the code in the [CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark); results across a range of histology tasks and datasets can be found in the paper.
# Disclaimer
It is important to note that the results obtained from this function are not intended to constitute medical advice or replace consultation with a qualified medical professional. The use of this function is solely at your own risk and should be consistent with applicable laws, regulations, and ethical considerations. We do not warrant or guarantee the accuracy, completeness, suitability, or usefulness of this function for any particular purpose, and we hereby disclaim any liability arising from any reliance placed on this function or any results obtained from its use.
# Privacy
In accordance with the privacy policy of YouTube, only Video ID data is redistributed by us.
It is strictly prohibited to redistribute any content apart from the Video IDs.
Any distribution carried out must adhere to the laws and regulations applicable in your jurisdiction, including export control laws and embargoes.
| null |
Non_BioNLP
|
## QuiltNet-B-16 Description
[QuiltNet-B-16](https://github.com/wisdomikezogwo/quilt1m/) is a CLIP ViT-B/16 vision-language foundation model trained on the [Quilt-1M](https://quilt1m.github.io/) dataset curated from representative histopathology videos.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
QuiltNet establishes a new state of the art across a wide range of standard datasets, and substantially outperforms prior VLP approaches:

# Citation
```bibtex
@misc{ikezogwo2023quilt1m,
title={Quilt-1M: One Million Image-Text Pairs for Histopathology},
author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro},
year={2023},
eprint={2306.11207},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision histopathology models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the notice above, the Quilt-1M dataset used in training these models has additional considerations; see below.
## Training Data
This model was trained with [QUILT-1M](https://quilt1m.github.io/), an image-text dataset for histopathology.
Curated from educational videos on YouTube, QUILT-1M contributes the largest dataset for vision-language modeling in histopathology.
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale histopathology datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes.
# Evaluation
Evaluation was done with the code in the [CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark); results across a range of histology tasks and datasets can be found in the paper.
# Disclaimer
It is important to note that the results obtained from this function are not intended to constitute medical advice or replace consultation with a qualified medical professional. The use of this function is solely at your own risk and should be consistent with applicable laws, regulations, and ethical considerations. We do not warrant or guarantee the accuracy, completeness, suitability, or usefulness of this function for any particular purpose, and we hereby disclaim any liability arising from any reliance placed on this function or any results obtained from its use.
# Privacy
In accordance with the privacy policy of YouTube, only Video ID data is redistributed by us.
It is strictly prohibited to redistribute any content apart from the Video IDs.
Any distribution carried out must adhere to the laws and regulations applicable in your jurisdiction, including export control laws and embargoes.
|
{"license": "mit", "pipeline_tag": "zero-shot-image-classification", "tags": ["zero-shot-image-classification", "clip", "histopathology", "histology", "medical"], "library_tag": "open_clip", "widget": [{"src": "https://quilt1m.github.io/img/BREST092.jpg", "candidate_labels": "adipose tissue, debris tissue, lymphocytes tissue, mucus tissue, smooth muscle tissue, normal colon mucosa tissue, cancer-associated stroma tissue, colorectal adenocarcinoma epithelium tissue", "example_title": "Tissue phenotyping"}, {"src": "https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg", "candidate_labels": "adenocarcinoma histopathology, squamous cell carcinoma histopathology", "example_title": "squamous cell carcinoma histopathology"}, {"src": "https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg", "candidate_labels": "adenocarcinoma histopathology, squamous cell carcinoma histopathology", "example_title": "adenocarcinoma histopathology"}]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,231 |
RichardErkhov/MarcusCosta_-_Sumarizadors-4bits
|
RichardErkhov
| null |
[
"safetensors",
"phi3",
"custom_code",
"arxiv:2404.14219",
"arxiv:2407.13833",
"arxiv:2403.06412",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-01-23T05:21:34Z |
2025-01-23T05:23:56+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Sumarizadors - bnb 4bits
- Model creator: https://huggingface.co/MarcusCosta/
- Original model: https://huggingface.co/MarcusCosta/Sumarizadors/
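Since this repository hosts a bitsandbytes 4-bit quantization, a minimal loading sketch might look as follows; it assumes the quantization config is saved with the checkpoint (so no explicit `BitsAndBytesConfig` is needed) and that `bitsandbytes` is installed:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/MarcusCosta_-_Sumarizadors-4bits"

# The 4-bit quantization config stored in the repo is picked up automatically;
# trust_remote_code is needed for the Phi-3 architecture
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```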
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
---
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-3.5-mini-instruct-onnx)]; [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. It is suited for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights the multilingual capability of Phi-3.5 Mini on the multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks in comparison to other models with much larger active parameter counts.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports a 128K context length; the model is therefore capable of several long-context tasks, including long document/meeting summarization, long document QA, and long document information retrieval. We see that Phi-3.5-mini is clearly better than the Gemma-2 family, which only supports an 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
Phi-3 family has been integrated in the `4.43.0` version of `transformers`. The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
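A minimal sketch of extending the tokenizer with a placeholder token before fine-tuning; the token string is hypothetical, and the embedding matrix must be resized to match:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct", trust_remote_code=True)

# Hypothetical domain token; new tokens must stay within the model's vocab budget
num_added = tokenizer.add_tokens(["<|my_domain_tag|>"])
if num_added:
    model.resize_token_embeddings(len(tokenizer))
```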
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
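The same prompt string can be produced programmatically from the tokenizer's chat template; a minimal sketch:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# add_generation_prompt appends the trailing <|assistant|> tag shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```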
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3.5-mini-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Note: if you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_.
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, like limiting conversation turns, to account for possible conversational drift.
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League match on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about the data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark. Below is a high-level overview of the model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, with only 3.8B parameters, the model achieves a level of multilingual language understanding and reasoning ability similar to that of much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store much factual knowledge, so users may encounter factual inaccuracies.
We believe this weakness can be mitigated by augmenting Phi-3.5 with a search engine, particularly when using the model in RAG settings.
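For illustration, a minimal sketch of the RAG pattern mentioned above: retrieved text is stuffed into the chat prompt so answers are grounded in it. The `retrieve` callable and its documents are hypothetical placeholders, not part of Phi-3.5 or this card.

```python
# Sketch of RAG-style prompt construction; `retrieve` stands in for whatever
# search engine or vector store supplies the supporting passages.
def build_rag_messages(question: str, retrieve) -> list:
    passages = retrieve(question)   # e.g. top-k snippets from a search engine
    context = "\n\n".join(passages)
    return [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```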
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, that while comprehensive red-team evaluations were conducted
across all models in the prior release of Phi models, red teaming for this release was largely focused on Phi-3.5 MoE across multiple languages and risk categories, as
it is the largest and most capable of the three models. Details on prior red-team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
## Appendix B: Korean benchmarks
The prompt is the same as the [CLIcK paper](https://arxiv.org/abs/2403.06412) prompt. The experimental results below were obtained with `max_tokens=512` (zero-shot), `max_tokens=1024` (5-shot), and `temperature=0.01`, with no system prompt.
- GPT-4o: 2024-05-13 version
- GPT-4o-mini: 2024-07-18 version
- GPT-4-turbo: 2024-04-09 version
- GPT-3.5-turbo: 2023-06-13 version
The overall Korean benchmark results show that Phi-3.5-Mini-Instruct, with only 3.8B parameters, outperforms Llama-3.1-8B-Instruct on average.
| Benchmarks | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:-------------------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| CLIcK | 42.99 | 29.12 | 47.82 | 80.46 | 68.5 | 72.82 | 50.98 |
| HAERAE 1.0 | 44.21 | 36.41 | 53.9 | 85.7 | 76.4 | 77.76 | 52.67 |
| KMMLU (0-shot, CoT) | 35.87 | 30.82 | 38.54 | 64.26 | 52.63 | 58.75 | 40.3 |
| KMMLU (5-shot) | 37.35 | 29.98 | 20.21 | 64.28 | 51.62 | 59.29 | 42.28 |
| KMMLU-HARD (0-shot, CoT) | 24 | 25.68 | 24.03 | 39.62 | 24.56 | 30.56 | 20.97 |
| KMMLU-HARD (5-shot) | 24.76 | 25.73 | 15.81 | 40.94 | 24.63 | 31.12 | 21.19 |
| **Average** | **35.62** | **29.99** | **29.29** | **62.54** | **50.08** | **56.74** | **39.61** |
#### CLIcK (Cultural and Linguistic Intelligence in Korean)
##### Accuracy by supercategory
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Culture | 43.77 | 29.74 | 51.15 | 81.89 | 70.95 | 73.61 | 53.38 |
| Language | 41.38 | 27.85 | 40.92 | 77.54 | 63.54 | 71.23 | 46 |
| **Overall** | 42.99 | 29.12 | 47.82 | 80.46 | 68.5 | 72.82 | 50.98 |
##### Accuracy by category
| supercategory | category | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|:------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Culture | Economy | 61.02 | 28.81 | 66.1 | 94.92 | 83.05 | 89.83 | 64.41 |
| Culture | Geography | 45.8 | 29.01 | 54.2 | 80.15 | 77.86 | 82.44 | 53.44 |
| Culture | History | 26.15 | 30 | 29.64 | 66.92 | 48.4 | 46.4 | 31.79 |
| Culture | Law | 32.42 | 22.83 | 44.29 | 70.78 | 57.53 | 61.19 | 41.55 |
| Culture | Politics | 54.76 | 33.33 | 59.52 | 88.1 | 83.33 | 89.29 | 65.48 |
| Culture | Pop Culture | 60.98 | 34.15 | 60.98 | 97.56 | 85.37 | 92.68 | 75.61 |
| Culture | Society | 54.37 | 31.72 | 65.05 | 92.88 | 85.44 | 86.73 | 71.2 |
| Culture | Tradition | 47.75 | 31.98 | 54.95 | 87.39 | 74.77 | 79.28 | 55.86 |
| Language | Functional | 37.6 | 24 | 32.8 | 84.8 | 64.8 | 80 | 40 |
| Language | Grammar | 27.5 | 23.33 | 22.92 | 57.08 | 42.5 | 47.5 | 30 |
| Language | Textual | 54.74 | 33.33 | 59.65 | 91.58 | 80.7 | 87.37 | 62.11 |
#### HAERAE
| category | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| General Knowledge | 31.25 | 28.41 | 34.66 | 77.27 | 53.41 | 66.48 | 40.91 |
| History | 32.45 | 22.34 | 44.15 | 92.02 | 84.57 | 78.72 | 30.32 |
| Loan Words | 47.93 | 35.5 | 63.31 | 79.88 | 76.33 | 78.11 | 59.17 |
| Rare Words | 55.06 | 42.96 | 63.21 | 87.9 | 81.98 | 79.01 | 61.23 |
| Reading Comprehension | 42.95 | 41.16 | 51.9 | 85.46 | 77.18 | 80.09 | 56.15 |
| Standard Nomenclature | 44.44 | 32.68 | 58.82 | 88.89 | 75.82 | 79.08 | 53.59 |
| **Overall** | 44.21 | 36.41 | 53.9 | 85.7 | 76.4 | 77.76 | 52.67 |
#### KMMLU (0-shot, CoT)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 35.8 | 31.68 | 37.03 | 61.52 | 49.29 | 55.98 | 38.47 |
| HUMSS | 31.56 | 26.47 | 37.29 | 69.45 | 56.59 | 63 | 40.9 |
| Other | 35.45 | 31.01 | 39.15 | 63.79 | 52.35 | 57.53 | 40.19 |
| STEM | 38.54 | 31.9 | 40.42 | 65.16 | 54.74 | 60.84 | 42.24 |
| **Overall** | 35.87 | 30.82 | 38.54 | 64.26 | 52.63 | 58.75 | 40.3 |
#### KMMLU (5-shot)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 37.42 | 29.98 | 19.24 | 61.47 | 48.66 | 56.85 | 40.22 |
| HUMSS | 34.72 | 27.27 | 22.5 | 68.79 | 55.95 | 63.68 | 43.35 |
| Other | 37.04 | 30.76 | 20.95 | 64.21 | 51.1 | 57.85 | 41.92 |
| STEM | 38.9 | 30.73 | 19.55 | 65.28 | 53.29 | 61.08 | 44.43 |
| **Overall** | 37.35 | 29.98 | 20.21 | 64.28 | 51.62 | 59.29 | 42.28 |
#### KMMLU-HARD (0-shot, CoT)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 27.08 | 26.17 | 26.25 | 37.12 | 22.25 | 29.17 | 21.07 |
| HUMSS | 20.21 | 24.38 | 20.21 | 41.97 | 23.31 | 31.51 | 19.44 |
| Other | 23.05 | 24.82 | 23.88 | 40.39 | 26.48 | 29.59 | 22.22 |
| STEM | 24.36 | 26.91 | 24.64 | 39.82 | 26.36 | 32.18 | 20.91 |
| **Overall** | 24 | 25.68 | 24.03 | 39.62 | 24.56 | 30.56 | 20.97 |
#### KMMLU-HARD (5-shot)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 25 | 29 | 12 | 31 | 21 | 25 | 20 |
| HUMSS | 21.89 | 19.92 | 14 | 43.98 | 23.47 | 33.53 | 19.53 |
| Other | 23.26 | 27.27 | 12.83 | 39.84 | 28.34 | 29.68 | 23.22 |
| STEM | 20.5 | 25.25 | 12.75 | 40.25 | 23.25 | 27.25 | 19.75 |
| **Overall** | 24.76 | 25.73 | 15.81 | 40.94 | 24.63 | 31.12 | 21.19 |
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Sumarizadors - bnb 4bits
- Model creator: https://huggingface.co/MarcusCosta/
- Original model: https://huggingface.co/MarcusCosta/Sumarizadors/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
---
## Model Summary
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length. The model underwent a rigorous enhancement process, incorporating supervised fine-tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/phi3.5-techblog) <br>
📖 [Phi-3 Technical Report](https://arxiv.org/abs/2404.14219) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3.5mini) <br>
**Phi-3.5**: [[mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-3.5-mini-instruct-onnx)]; [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct); [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Intended Uses
### Primary Use Cases
The model is intended for commercial and research use in multiple languages. It is suited for general-purpose AI systems and applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, and for use as a building block for generative AI-powered features.
### Use Case Considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
***Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.***
## Release Notes
This is an update over the June 2024 instruction-tuned Phi-3 Mini release based on valuable user feedback. The model used additional post-training data leading to substantial gains on multilingual, multi-turn conversation quality, and reasoning capability. We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications. We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
### Multilingual
The table below highlights the multilingual capability of Phi-3.5 Mini on the multilingual MMLU, MEGA, and multilingual MMLU-pro datasets. Overall, we observed that even with just 3.8B active parameters, the model is competitive on multilingual tasks compared to other models with far more active parameters.
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Multilingual MMLU | 55.4 | 51.08 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| Multilingual MMLU-Pro | 30.9 | 30.21 | 15.0 | 34.0 | 21.4 | 43.0 | 57.9 | 53.2 |
| MGSM | 47.9 | 41.56 | 31.8 | 63.3 | 56.7 | 75.1 | 75.8 | 81.7 |
| MEGA MLQA | 61.7 | 55.5 | 43.9 | 61.2 | 45.2 | 54.4 | 61.6 | 70.0 |
| MEGA TyDi QA | 62.2 | 55.9 | 54.0 | 63.7 | 54.5 | 65.6 | 63.6 | 81.8 |
| MEGA UDPOS | 46.5 | 48.1 | 57.2 | 58.2 | 54.1 | 56.6 | 62.4 | 66.0 |
| MEGA XCOPA | 63.1 | 62.4 | 58.8 | 10.8 | 21.1 | 31.2 | 95.0 | 90.3 |
| MEGA XStoryCloze | 73.5 | 73.6 | 75.5 | 92.3 | 71.0 | 87.0 | 20.7 | 96.6 |
| **Average** | **55.2** | **52.3** | **47.9** | **55.3** | **47.5** | **59.6** | **64.3** | **76.6** |
The table below shows Multilingual MMLU scores in some of the supported languages. For more multi-lingual benchmarks and details, see [Appendix A](#appendix-a).
| Benchmark | Phi-3.5 Mini-Ins | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------|-----------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 44.2 | 35.4 | 33.7 | 45.3 | 49.1 | 56.3 | 73.6 | 67.1 |
| Chinese | 52.6 | 46.9 | 45.9 | 58.2 | 54.4 | 62.7 | 66.7 | 70.8 |
| Dutch | 57.7 | 48.0 | 51.3 | 60.1 | 55.9 | 66.7 | 80.6 | 74.2 |
| French | 61.1 | 61.7 | 53.0 | 63.8 | 62.8 | 67.0 | 82.9 | 75.6 |
| German | 62.4 | 61.3 | 50.1 | 64.5 | 59.9 | 65.7 | 79.5 | 74.3 |
| Italian | 62.8 | 63.1 | 52.5 | 64.1 | 55.9 | 65.7 | 82.6 | 75.9 |
| Russian | 50.4 | 45.3 | 48.9 | 59.0 | 57.4 | 63.2 | 78.7 | 72.6 |
| Spanish | 62.6 | 61.3 | 53.9 | 64.3 | 62.6 | 66.0 | 80.0 | 75.5 |
| Ukrainian | 45.2 | 36.7 | 46.9 | 56.6 | 52.9 | 62.0 | 77.4 | 72.6 |
### Long Context
Phi-3.5-mini supports a 128K context length and is therefore capable of several long-context tasks, including long document/meeting summarization, long document QA, and long document information retrieval. We see that Phi-3.5-mini is clearly better than the Gemma-2 family, which only supports an 8K context length. Phi-3.5-mini is competitive with other much larger open-weight models such as Llama-3.1-8B-instruct, Mistral-7B-instruct-v0.3, and Mistral-Nemo-12B-instruct-2407.
| Benchmark | Phi-3.5-mini-instruct | Llama-3.1-8B-instruct | Mistral-7B-instruct-v0.3 | Mistral-Nemo-12B-instruct-2407 | Gemini-1.5-Flash | GPT-4o-mini-2024-07-18 (Chat) |
|--|--|--|--|--|--|--|
| GovReport | 25.9 | 25.1 | 26.0 | 25.6 | 27.8 | 24.8 |
| QMSum | 21.3 | 21.6 | 21.3 | 22.1 | 24.0 | 21.7 |
| Qasper | 41.9 | 37.2 | 31.4 | 30.7 | 43.5 | 39.8 |
| SQuALITY | 25.3 | 26.2 | 25.9 | 25.8 | 23.5 | 23.8 |
| SummScreenFD | 16.0 | 17.6 | 17.5 | 18.2 | 16.3 | 17.0 |
| **Average** | **26.1** | **25.5** | **24.4** | **24.5** | **27.0** | **25.4** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
|--|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 94.3 | 91.1 | 90.7 | 87.1 | 78.0 | 63.6 | **84.1** |
| **Llama-3.1-8B-instruct** | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 | 77.0 | **88.3** |
| **Mistral-Nemo-12B-instruct-2407** | 87.8 | 87.2 | 87.7 | 69.0 | 46.8 | 19.0 | **66.2** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
|--|--|--|--|--|--|--|
| **Phi-3.5-mini-instruct** | 86 | 67 | 73 | 77 | 82 | **77** |
| **Llama-3.1-8B-instruct** | 80 | 65 | 73 | 76 | 63 | **71** |
| **Mistral-7B-instruct-v0.3** | 61 | 57 | 51 | 61 | 80 | **62** |
## Usage
### Requirements
The Phi-3 family has been integrated into the `4.43.0` release of `transformers`. The currently installed `transformers` version can be verified with `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.43.0
```
Phi-3.5-mini-instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3.5mini)
### Tokenizer
Phi-3.5-mini-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
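If you do extend the vocabulary, a minimal sketch using standard `transformers` calls (the added tokens here are purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct", trust_remote_code=True
)

# Register illustrative new tokens, then resize the embedding matrix to match
tokenizer.add_tokens(["<custom_tag_1>", "<custom_tag_2>"])
model.resize_token_embeddings(len(tokenizer))
```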
### Input Formats
Given the nature of the training data, the Phi-3.5-mini-instruct model is best suited for prompts using the chat format as follows:
```
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
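Rather than assembling these control tokens by hand, the tokenizer's chat template can render them; a short sketch using the standard `transformers` API:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# Produces the <|system|>/<|user|>/<|assistant|> format shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```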
### Loading the model locally
After obtaining the Phi-3.5-mini-instruct model checkpoint, users can use this sample code for inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Load model weights onto the GPU with an automatically chosen dtype
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# Multi-turn conversation history in the chat message format
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding (do_sample=False) for deterministic output
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
Note: to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
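Concretely, a sketch of the loading call (flash attention additionally requires the `flash_attn` package and a supported GPU; see the Hardware section below):

```python
from transformers import AutoModelForCausalLM

# attn_implementation selects the attention kernel:
#   "flash_attention_2" on supported GPUs (e.g. A100/A6000/H100),
#   "eager" as a fallback on V100-class or older hardware.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct",
    torch_dtype="auto",
    device_map="cuda",
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
```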
## Responsible AI Considerations
Like other language models, the Phi family of models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text and some additional multilingual text. Languages other than English will experience worse performance as well as performance disparities across non-English. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Multilingual performance and safety gaps: We believe it is important to make language models more widely available across different languages, but the Phi 3 models still exhibit challenges common across multilingual releases. As with any deployment of LLMs, developers will be better positioned to test for performance or safety gaps for their linguistic and cultural context and customize the model with additional fine-tuning and appropriate safeguards.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups, cultural contexts, or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
+ Long Conversation: Phi-3 models, like other models, can in some cases generate responses that are repetitive, unhelpful, or inconsistent in very long chat sessions in both English and non-English languages. Developers are encouraged to place appropriate mitigations, such as limiting conversation turns, to account for possible conversational drift.
Developers should apply responsible AI best practices, including mapping, measuring, and mitigating risks associated with their specific use case and cultural, linguistic context. Phi-3 family of models are general purpose models. As developers plan to deploy these models for specific use cases, they are encouraged to fine-tune the models for their use case and leverage the models as part of broader AI systems with language-specific safeguards in place. Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
**Architecture:** Phi-3.5-mini has 3.8B parameters and is a dense decoder-only Transformer model using the same tokenizer as Phi-3 Mini.<br>
**Inputs:** Text. It is best suited for prompts using chat format.<br>
**Context length:** 128K tokens<br>
**GPUs:** 512 H100-80G<br>
**Training time:** 10 days<br>
**Training data:** 3.4T tokens<br>
**Outputs:** Generated text in response to the input<br>
**Dates:** Trained between June and August 2024<br>
**Status:** This is a static model trained on an offline dataset with cutoff date October 2023 for publicly available data. Future versions of the tuned models may be released as we improve models.<br>
**Supported languages:** Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, Ukrainian<br>
**Release date:** August 2024<br>
### Training Datasets
Our training data includes a wide variety of sources, totaling 3.4 trillion tokens, and is a combination of
1) publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) high quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about data can be found in the [Phi-3 Technical Report](https://arxiv.org/pdf/2404.14219).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/sample_finetune.py).
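For orientation, a minimal sketch of what such an SFT setup can look like with TRL. This is illustrative only — the linked `sample_finetune.py` is the authoritative reference, the dataset path is hypothetical, and TRL's exact API surface varies across versions:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical dataset: one chat-formatted example per line under a "text" column
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model="microsoft/Phi-3.5-mini-instruct",  # loaded internally via from_pretrained
    train_dataset=dataset,
    args=TrainingArguments(output_dir="phi35-sft", per_device_train_batch_size=1),
)
trainer.train()
```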
## Benchmarks
We report the results under completion format for Phi-3.5-mini on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7B-Instruct-v0.3, Mistral-Nemo-12B-Ins-2407, Llama-3.1-8B-Ins, Gemma-2-9B-Ins, Gemini 1.5 Flash, and GPT-4o-mini-2024-07-18 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool for evaluating language models; in particular, we did not optimize the pipeline for Phi-3.
More specifically, we did not change prompts, pick different few-shot examples, change the prompt format, or perform any other form of optimization for the model.
The number of k-shot examples is listed per benchmark. Below is a high-level overview of model quality on representative benchmarks:
| Category | Benchmark | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------|--------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | Arena Hard | 37 | 18.1 | 39.4 | 25.7 | 42 | 55.2 | 75 |
| | BigBench Hard CoT (0-shot) | 69 | 33.4 | 60.2 | 63.4 | 63.5 | 66.7 | 80.4 |
| | MMLU (5-shot) | 69 | 60.3 | 67.2 | 68.1 | 71.3 | 78.7 | 77.2 |
| | MMLU-Pro (0-shot, CoT) | 47.4 | 18 | 40.7 | 44 | 50.1 | 57.2 | 62.8 |
| Reasoning | ARC Challenge (10-shot) | 84.6 | 77.9 | 84.8 | 83.1 | 89.8 | 92.8 | 93.5 |
| | BoolQ (2-shot) | 78 | 80.5 | 82.5 | 82.8 | 85.7 | 85.8 | 88.7 |
| | GPQA (0-shot, CoT) | 30.4 | 15.6 | 28.6 | 26.3 | 29.2 | 37.5 | 41.1 |
| | HellaSwag (5-shot) | 69.4 | 71.6 | 76.7 | 73.5 | 80.9 | 67.5 | 87.1 |
| | OpenBookQA (10-shot) | 79.2 | 78 | 84.4 | 84.8 | 89.6 | 89 | 90 |
| | PIQA (5-shot) | 81 | 73.4 | 83.5 | 81.2 | 83.7 | 87.5 | 88.7 |
| | Social IQA (5-shot) | 74.7 | 73 | 75.3 | 71.8 | 74.7 | 77.8 | 82.9 |
| | TruthfulQA (MC2) (10-shot) | 64 | 64.7 | 68.1 | 69.2 | 76.6 | 76.6 | 78.2 |
| | WinoGrande (5-shot) | 68.5 | 58.1 | 70.4 | 64.7 | 74 | 74.7 | 76.9 |
| Multilingual | Multilingual MMLU (5-shot) | 55.4 | 47.4 | 58.9 | 56.2 | 63.8 | 77.2 | 72.9 |
| | MGSM (0-shot CoT) | 47.9 | 31.8 | 63.3 | 56.7 | 76.4 | 75.8 | 81.7 |
| Math | GSM8K (8-shot, CoT) | 86.2 | 54.4 | 84.2 | 82.4 | 84.9 | 82.4 | 91.3 |
| | MATH (0-shot, CoT) | 48.5 | 19 | 31.2 | 47.6 | 50.9 | 38 | 70.2 |
| Long context | Qasper | 41.9 | 31.4 | 30.7 | 37.2 | 13.9 | 43.5 | 39.8 |
| | SQuALITY | 24.3 | 25.9 | 25.8 | 26.2 | 0 | 23.5 | 23.8 |
| Code Generation| HumanEval (0-shot) | 62.8 | 35.4 | 63.4 | 66.5 | 61 | 74.4 | 86.6 |
| | MBPP (3-shot) | 69.6 | 50.4 | 68.1 | 69.4 | 69.3 | 77.5 | 84.1 |
| **Average** | | **61.4** | **48.5** | **61.3** | **61.0** | **63.3** | **68.5** | **74.9** |
We take a closer look at different categories across public benchmark datasets at the table below:
| Category | Phi-3.5 Mini-Ins | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|----------------------------|------------------|--------------------------|---------------------------|------------------|----------------|------------------|------------------------------|
| Popular aggregated benchmark | 55.6 | 32.5 | 51.9 | 50.3 | 56.7 | 64.5 | 73.9 |
| Reasoning | 70.1 | 65.2 | 72.2 | 70.5 | 75.4 | 77.7 | 80 |
| Language understanding | 62.6 | 62.8 | 67 | 62.9 | 72.8 | 66.6 | 76.8 |
| Robustness | 59.7 | 53.4 | 65.2 | 59.8 | 64.7 | 68.9 | 77.5 |
| Long context | 26.1 | 25.5 | 24.4 | 24.5 | 0 | 27 | 25.4 |
| Math | 67.4 | 36.7 | 57.7 | 65 | 67.9 | 60.2 | 80.8 |
| Code generation | 62 | 43.1 | 56.9 | 65.8 | 58.3 | 66.8 | 69.9 |
| Multilingual | 55.2 | 47.9 | 55.3 | 47.5 | 59.6 | 64.3 | 76.6 |
Overall, with only 3.8B parameters, the model achieves a level of multilingual language understanding and reasoning ability similar to that of much larger models.
However, it is still fundamentally limited by its size for certain tasks.
The model simply does not have the capacity to store much factual knowledge, so users may encounter factual inaccuracies.
We believe this weakness can be mitigated by augmenting Phi-3.5 with a search engine, particularly when using the model in RAG settings.
## Safety Evaluation and Red-Teaming
We leveraged various evaluation techniques including red teaming, adversarial conversation simulations, and multilingual safety evaluation benchmark datasets to
evaluate Phi-3.5 models' propensity to produce undesirable outputs across multiple languages and risk categories.
Several approaches were used to compensate for the limitations of one approach alone. Findings across the various evaluation methods indicate that safety
post-training that was done as detailed in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833) had a positive impact across multiple languages and risk categories as observed by
refusal rates (refusal to output undesirable outputs) and robustness to jailbreak techniques. Note, however, that while comprehensive red-team evaluations were conducted
across all models in the prior release of Phi models, red teaming for this release was largely focused on Phi-3.5 MoE across multiple languages and risk categories, as
it is the largest and most capable of the three models. Details on prior red-team evaluations across Phi models can be found in the [Phi-3 Safety Post-Training paper](https://arxiv.org/pdf/2407.13833).
For this release, insights from red teaming indicate that the models may refuse to generate undesirable outputs in English, even when the request for undesirable output
is in another language. Models may also be more susceptible to longer multi-turn jailbreak techniques across both English and non-English languages. These findings
highlight the need for industry-wide investment in the development of high-quality safety evaluation datasets across multiple languages, including low resource languages,
and risk areas that account for cultural nuances where those languages are spoken.
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3.5-mini-instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
## License
The model is licensed under the [MIT license](./LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
## Appendix A
#### MGSM
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|------------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| German | 69.6 | 65.2 | 42.4 | 74.4 | 68.4 | 76.8 | 81.6 | 82.8 |
| English | 85.2 | 83.2 | 60.0 | 86.0 | 81.2 | 88.8 | 90.8 | 90.8 |
| Spanish | 79.2 | 77.6 | 46.4 | 75.6 | 66.4 | 82.4 | 84.8 | 86.8 |
| French | 71.6 | 72.8 | 47.2 | 70.4 | 66.8 | 74.4 | 77.2 | 81.6 |
| Japanese | 50.0 | 35.2 | 22.8 | 62.4 | 49.2 | 67.6 | 77.6 | 80.4 |
| Russian | 67.2 | 51.6 | 43.2 | 73.6 | 67.2 | 78.4 | 84.8 | 86.4 |
| Thai | 29.6 | 6.4 | 18.4 | 53.2 | 56.0 | 76.8 | 87.6 | 81.6 |
| Chinese | 60.0 | 52.8 | 42.4 | 66.4 | 68.0 | 72.8 | 82.0 | 82.0 |
#### Multilingual MMLU-pro
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|------------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Czech | 24.9 | 26.3 | 14.6 | 30.6 | 23.0 | 40.5 | 59.0 | 40.9 |
| English | 47.7 | 46.2 | 17.7 | 39.8 | 43.1 | 49.0 | 66.1 | 62.7 |
| Finnish | 22.3 | 20.5 | 11.5 | 30.4 | 9.7 | 37.5 | 54.5 | 50.1 |
| Norwegian | 29.9 | 27.8 | 14.4 | 33.2 | 22.2 | 44.4 | 60.7 | 59.1 |
| Polish | 25.7 | 26.4 | 16.3 | 33.6 | 9.2 | 41.7 | 53.9 | 42.8 |
| Portuguese | 38.7 | 37.6 | 15.3 | 36.0 | 29.3 | 43.5 | 54.0 | 56.9 |
| Swedish | 30.7 | 28.1 | 15.5 | 34.3 | 16.9 | 42.6 | 57.7 | 55.5 |
#### MEGA
##### MLQA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 54.3 | 32.7 | 23.5 | 31.4 | 31.5 | 57.4 | 63.8 | 64.0 |
| Chinese | 36.1 | 31.8 | 22.4 | 27.4 | 18.6 | 45.4 | 38.1 | 38.9 |
| English | 80.3 | 78.9 | 68.2 | 75.5 | 67.2 | 82.9 | 69.5 | 82.2 |
| German | 61.8 | 59.1 | 49.0 | 57.8 | 38.9 | 63.8 | 55.9 | 64.1 |
| Spanish | 68.8 | 67.0 | 50.3 | 63.6 | 52.7 | 72.8 | 59.6 | 70.1 |
##### TyDi QA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| Arabic | 69.7 | 54.4 | 52.5 | 49.8 | 33.7 | 81.1 | 78.8 | 84.9 |
| English | 82.0 | 82.0 | 60.5 | 77.3 | 65.1 | 82.4 | 60.9 | 81.8 |
| Finnish | 70.3 | 64.3 | 68.6 | 57.1 | 74.4 | 85.7 | 73.5 | 84.8 |
| Japanese | 65.4 | 56.7 | 45.3 | 54.8 | 34.1 | 74.6 | 59.7 | 73.3 |
| Korean | 74.0 | 60.4 | 54.5 | 54.2 | 54.9 | 83.8 | 60.7 | 82.3 |
| Russian | 63.5 | 62.7 | 52.3 | 55.7 | 27.4 | 69.8 | 60.1 | 72.5 |
| Thai | 64.4 | 49.0 | 51.8 | 43.5 | 48.5 | 81.4 | 71.6 | 78.2 |
##### XCOPA
| Languages | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Mistral-7B-Instruct-v0.3 | Mistral-Nemo-12B-Ins-2407 | Llama-3.1-8B-Ins | Gemma-2-9B-Ins | Gemini 1.5 Flash | GPT-4o-mini-2024-07-18 (Chat) |
|-----------|-----------------------|---------------------------------------|--------------------------|---------------------------|------------------|----------------|------------------|-------------------------------|
| English | 94.6 | 94.6 | 85.6 | 94.4 | 37.6 | 63.8 | 92.0 | 98.2 |
| Italian | 86.8 | 84.8 | 76.8 | 83.2 | 16.2 | 37.2 | 85.6 | 97.6 |
| Turkish | 58.6 | 57.2 | 61.6 | 56.6 | 38.4 | 60.2 | 91.4 | 94.6 |
## Appendix B: Korean benchmarks
The prompt is the same as the [CLIcK paper](https://arxiv.org/abs/2403.06412) prompt. The experimental results below were obtained with `max_tokens=512` (zero-shot), `max_tokens=1024` (5-shot), and `temperature=0.01`, with no system prompt.
- GPT-4o: 2024-05-13 version
- GPT-4o-mini: 2024-07-18 version
- GPT-4-turbo: 2024-04-09 version
- GPT-3.5-turbo: 2023-06-13 version
The overall Korean benchmark results show that Phi-3.5-Mini-Instruct, with only 3.8B parameters, outperforms Llama-3.1-8B-Instruct on average.
| Benchmarks | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:-------------------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| CLIcK | 42.99 | 29.12 | 47.82 | 80.46 | 68.5 | 72.82 | 50.98 |
| HAERAE 1.0 | 44.21 | 36.41 | 53.9 | 85.7 | 76.4 | 77.76 | 52.67 |
| KMMLU (0-shot, CoT) | 35.87 | 30.82 | 38.54 | 64.26 | 52.63 | 58.75 | 40.3 |
| KMMLU (5-shot) | 37.35 | 29.98 | 20.21 | 64.28 | 51.62 | 59.29 | 42.28 |
| KMMLU-HARD (0-shot, CoT) | 24 | 25.68 | 24.03 | 39.62 | 24.56 | 30.56 | 20.97 |
| KMMLU-HARD (5-shot) | 24.76 | 25.73 | 15.81 | 40.94 | 24.63 | 31.12 | 21.19 |
| **Average** | **35.62** | **29.99** | **29.29** | **62.54** | **50.08** | **56.74** | **39.61** |
#### CLIcK (Cultural and Linguistic Intelligence in Korean)
##### Accuracy by supercategory
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Culture | 43.77 | 29.74 | 51.15 | 81.89 | 70.95 | 73.61 | 53.38 |
| Language | 41.38 | 27.85 | 40.92 | 77.54 | 63.54 | 71.23 | 46 |
| **Overall** | 42.99 | 29.12 | 47.82 | 80.46 | 68.5 | 72.82 | 50.98 |
##### Accuracy by category
| supercategory | category | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|:------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Culture | Economy | 61.02 | 28.81 | 66.1 | 94.92 | 83.05 | 89.83 | 64.41 |
| Culture | Geography | 45.8 | 29.01 | 54.2 | 80.15 | 77.86 | 82.44 | 53.44 |
| Culture | History | 26.15 | 30 | 29.64 | 66.92 | 48.4 | 46.4 | 31.79 |
| Culture | Law | 32.42 | 22.83 | 44.29 | 70.78 | 57.53 | 61.19 | 41.55 |
| Culture | Politics | 54.76 | 33.33 | 59.52 | 88.1 | 83.33 | 89.29 | 65.48 |
| Culture | Pop Culture | 60.98 | 34.15 | 60.98 | 97.56 | 85.37 | 92.68 | 75.61 |
| Culture | Society | 54.37 | 31.72 | 65.05 | 92.88 | 85.44 | 86.73 | 71.2 |
| Culture | Tradition | 47.75 | 31.98 | 54.95 | 87.39 | 74.77 | 79.28 | 55.86 |
| Language | Functional | 37.6 | 24 | 32.8 | 84.8 | 64.8 | 80 | 40 |
| Language | Grammar | 27.5 | 23.33 | 22.92 | 57.08 | 42.5 | 47.5 | 30 |
| Language | Textual | 54.74 | 33.33 | 59.65 | 91.58 | 80.7 | 87.37 | 62.11 |
#### HAERAE
| category | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| General Knowledge | 31.25 | 28.41 | 34.66 | 77.27 | 53.41 | 66.48 | 40.91 |
| History | 32.45 | 22.34 | 44.15 | 92.02 | 84.57 | 78.72 | 30.32 |
| Loan Words | 47.93 | 35.5 | 63.31 | 79.88 | 76.33 | 78.11 | 59.17 |
| Rare Words | 55.06 | 42.96 | 63.21 | 87.9 | 81.98 | 79.01 | 61.23 |
| Reading Comprehension | 42.95 | 41.16 | 51.9 | 85.46 | 77.18 | 80.09 | 56.15 |
| Standard Nomenclature | 44.44 | 32.68 | 58.82 | 88.89 | 75.82 | 79.08 | 53.59 |
| **Overall** | 44.21 | 36.41 | 53.9 | 85.7 | 76.4 | 77.76 | 52.67 |
#### KMMLU (0-shot, CoT)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 35.8 | 31.68 | 37.03 | 61.52 | 49.29 | 55.98 | 38.47 |
| HUMSS | 31.56 | 26.47 | 37.29 | 69.45 | 56.59 | 63 | 40.9 |
| Other | 35.45 | 31.01 | 39.15 | 63.79 | 52.35 | 57.53 | 40.19 |
| STEM | 38.54 | 31.9 | 40.42 | 65.16 | 54.74 | 60.84 | 42.24 |
| **Overall** | 35.87 | 30.82 | 38.54 | 64.26 | 52.63 | 58.75 | 40.3 |
#### KMMLU (5-shot)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 37.42 | 29.98 | 19.24 | 61.47 | 48.66 | 56.85 | 40.22 |
| HUMSS | 34.72 | 27.27 | 22.5 | 68.79 | 55.95 | 63.68 | 43.35 |
| Other | 37.04 | 30.76 | 20.95 | 64.21 | 51.1 | 57.85 | 41.92 |
| STEM | 38.9 | 30.73 | 19.55 | 65.28 | 53.29 | 61.08 | 44.43 |
| **Overall** | 37.35 | 29.98 | 20.21 | 64.28 | 51.62 | 59.29 | 42.28 |
#### KMMLU-HARD (0-shot, CoT)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 27.08 | 26.17 | 26.25 | 37.12 | 22.25 | 29.17 | 21.07 |
| HUMSS | 20.21 | 24.38 | 20.21 | 41.97 | 23.31 | 31.51 | 19.44 |
| Other | 23.05 | 24.82 | 23.88 | 40.39 | 26.48 | 29.59 | 22.22 |
| STEM | 24.36 | 26.91 | 24.64 | 39.82 | 26.36 | 32.18 | 20.91 |
| **Overall** | 24 | 25.68 | 24.03 | 39.62 | 24.56 | 30.56 | 20.97 |
#### KMMLU-HARD (5-shot)
| supercategory | Phi-3.5-Mini-Instruct | Phi-3.0-Mini-128k-Instruct (June2024) | Llama-3.1-8B-Instruct | GPT-4o | GPT-4o-mini | GPT-4-turbo | GPT-3.5-turbo |
|:----------------|------------------------:|--------------------------------:|------------------------:|---------:|--------------:|--------------:|----------------:|
| Applied Science | 25 | 29 | 12 | 31 | 21 | 25 | 20 |
| HUMSS | 21.89 | 19.92 | 14 | 43.98 | 23.47 | 33.53 | 19.53 |
| Other | 23.26 | 27.27 | 12.83 | 39.84 | 28.34 | 29.68 | 23.22 |
| STEM | 20.5 | 25.25 | 12.75 | 40.25 | 23.25 | 27.25 | 19.75 |
| **Overall** | 24.76 | 25.73 | 15.81 | 40.94 | 24.63 | 31.12 | 21.19 |
|
{}
|
task
|
[
"SUMMARIZATION"
] | 42,232 |
llmware/bling-qwen-nano-tool
|
llmware
| null |
[
"transformers",
"gguf",
"qwen2",
"license:apache-2.0",
"region:us",
"conversational"
] | 2024-07-24T15:42:35Z |
2024-08-22T17:48:16+00:00
| 23 | 1 |
---
license: apache-2.0
inference: false
---
BLING-QWEN-NANO-TOOL
**bling-qwen-nano-tool** is a RAG-finetuned version of Qwen2-0.5B for fact-based, context-grounded question answering, packaged with 4_K_M GGUF quantization to provide a very fast, very small inference implementation for use on CPUs.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/bling-qwen-nano-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
## Benchmark Tests
Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester
1 test run with sample=False and temperature=0.0 (deterministic output). Scoring: 1 point for a correct answer, 0.5 points for a partially correct or blank/"not found" (NF) answer, 0 points for an incorrect answer, and -1 point for a hallucination.
--Accuracy Score: **81.0** correct out of 100
--Not Found Classification: 65.0%
--Boolean: 62.5%
--Math/Logic: 42.5%
--Complex Questions (1-5): 3 (Average for ~1B model)
--Summarization Quality (1-5): 3 (Average)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("bling-qwen-nano-tool")
# query is your question; text_sample is the source passage to answer from
response = model.inference(query, add_context=text_sample)
```
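For example, with llama-cpp-python (a sketch — the local GGUF filename and the prompt wrapper shown are assumptions; see the config.json note below for the actual prompt wrapping):

```python
from llama_cpp import Llama

# Hypothetical path to the downloaded 4_K_M GGUF file
llm = Llama(model_path="/path/on/your/machine/bling-qwen-nano-tool.gguf", n_ctx=2048)

out = llm("<context passage>\nQuestion: <your question>\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```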
Note: please review [**config.json**](https://huggingface.co/llmware/bling-qwen-nano-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GGUF
- **Language(s) (NLP):** English
- **License:** Apache 2.0
## Model Card Contact
Darren Oberst & llmware team
| null |
Non_BioNLP
|
BLING-QWEN-NANO-TOOL
**bling-qwen-nano-tool** is a RAG-finetuned version of Qwen2-0.5B for fact-based, context-grounded question answering, packaged with 4_K_M GGUF quantization to provide a very fast, very small inference implementation for use on CPUs.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/bling-qwen-nano-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
## Benchmark Tests
Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester
1 test run with sample=False and temperature=0.0 (deterministic output). Scoring: 1 point for a correct answer, 0.5 points for a partially correct or blank/"not found" (NF) answer, 0 points for an incorrect answer, and -1 point for a hallucination.
--Accuracy Score: **81.0** correct out of 100
--Not Found Classification: 65.0%
--Boolean: 62.5%
--Math/Logic: 42.5%
--Complex Questions (1-5): 3 (Average for ~1B model)
--Summarization Quality (1-5): 3 (Average)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

model = ModelCatalog().load_model("bling-qwen-nano-tool")
# query is your question; text_sample is the source passage to answer from
response = model.inference(query, add_context=text_sample)
```
Note: please review [**config.json**](https://huggingface.co/llmware/bling-qwen-nano-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GGUF
- **Language(s) (NLP):** English
- **License:** Apache 2.0
## Model Card Contact
Darren Oberst & llmware team
|
{"license": "apache-2.0", "inference": false}
|
task
|
[
"SUMMARIZATION"
] | 42,233 |
Mikey8943/marian-finetuned-kde4-en-to-fr
|
Mikey8943
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-30T16:14:03Z |
2022-05-30T17:16:08+00:00
| 10 | 0 |
---
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- type: bleu
value: 50.16950271131339
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.1695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
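For orientation, a sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments` (illustrative — the card does not include the actual training script):

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters from the list above expressed as training arguments
args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-en-to-fr",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=1,
    fp16=True,  # mixed_precision_training: Native AMP
)
```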
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.1695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 50.16950271131339, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 42,234 |
siruivian/setfit-risk-all-freeze-head
|
siruivian
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-05-31T20:59:17Z |
2023-05-31T21:11:14+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# siruivian/setfit-risk-all-freeze-head
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer. A minimal sketch of this recipe follows.
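As a rough sketch of that two-step recipe, assuming the setfit 0.x `SetFitTrainer` API; the base model, dataset, and sample count here are illustrative, not what this checkpoint was trained on:
```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# Few-shot setup: a handful of labeled examples is typically enough
train_ds = load_dataset("sst2", split="train[:16]")

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    column_mapping={"sentence": "text", "label": "label"},  # map dataset columns to the names SetFit expects
)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the classification head
```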
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("siruivian/setfit-risk-all-freeze-head")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# siruivian/setfit-risk-all-freeze-head
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("siruivian/setfit-risk-all-freeze-head")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 42,235 |
Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit
|
Agnuxo
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit",
"base_model:finetune:Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-08-26T21:09:29Z |
2024-08-27T10:29:32+00:00
| 4 | 0 |
---
base_model: Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## How the MOE System Works
This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality:
1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use.
2. **Expert Routing:** When a user asks a question, the system either:
- Uses keyword matching to identify the relevant domain.
- Consults the director LLM to classify the question's category.
3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert.
4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer.
5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system.
This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM.
## Repository and Additional Information
- Full code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py
- GitHub repository: https://github.com/Agnuxo1/NEBULA
## Code Example
The following code demonstrates the implementation of the Multi-Expert Question Answering System:
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
MODEL_CONFIG = {
"director": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit",
"task": "text-generation",
},
"programming": {
"name": "Qwen/Qwen2-1.5B-Instruct",
"task": "text-generation",
},
"biology": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit",
"task": "text-generation",
},
"mathematics": {
"name": "Qwen/Qwen2-Math-1.5B-Instruct",
"task": "text-generation",
}
}
KEYWORDS = {
"biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"],
"mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"],
"programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"]
}
class MOELLM:
def __init__(self):
self.current_expert = None
self.current_model = None
self.current_tokenizer = None
self.device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {self.device}")
self.load_director_model()
def load_director_model(self):
"""Loads the director model."""
print("Loading director model...")
model_name = MODEL_CONFIG["director"]["name"]
self.director_tokenizer = AutoTokenizer.from_pretrained(model_name)
self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device)
self.director_pipeline = pipeline(
MODEL_CONFIG["director"]["task"],
model=self.director_model,
tokenizer=self.director_tokenizer,
device=self.device
)
print("Director model loaded.")
def load_expert_model(self, expert):
"""Dynamically loads an expert model, releasing memory from the previous model."""
if expert not in MODEL_CONFIG:
raise ValueError(f"Unknown expert: {expert}")
if self.current_expert != expert:
print(f"Loading expert model: {expert}...")
# Free memory from the current model if it exists
if self.current_model:
del self.current_model
del self.current_tokenizer
torch.cuda.empty_cache()
model_config = MODEL_CONFIG[expert]
self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"])
self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device)
self.current_expert = expert
print(f"{expert.capitalize()} model loaded.")
return pipeline(
MODEL_CONFIG[expert]["task"],
model=self.current_model,
tokenizer=self.current_tokenizer,
device=self.device
)
def determine_expert_by_keywords(self, question):
"""Determines the expert based on keywords in the question."""
question_lower = question.lower()
for expert, keywords in KEYWORDS.items():
            if any(keyword.lower() in question_lower for keyword in keywords):  # lowercase the keywords so mixed-case entries like "DNA" or "C++" match the lowercased question
return expert
return None
def determine_expert(self, question):
"""Determines which expert should answer the question."""
expert = self.determine_expert_by_keywords(question)
if expert:
print(f"Expert determined by keyword: {expert}")
return expert
prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:"
response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text']
expert = response.split(":")[-1].strip().lower()
if expert not in MODEL_CONFIG:
expert = "director"
print(f"Redirecting question to: {expert}")
return expert
def generate_response(self, question, expert):
"""Generates a response using the appropriate model."""
try:
model = self.load_expert_model(expert)
prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:"
response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text']
return response.split("Answer:")[-1].strip()
except Exception as e:
print(f"Error generating response: {str(e)}")
return "Sorry, there was an error processing your request. Please try again."
def chat_interface(self):
"""Simple chat interface."""
print("Welcome to the MOE-LLM chat. Type 'exit' to quit.")
while True:
question = input("\nYou: ")
if question.lower() in ['exit', 'quit']:
break
try:
expert = self.determine_expert(question)
response = self.generate_response(question, expert)
print(f"\n{expert.capitalize()}: {response}")
except Exception as e:
print(f"Error in chat: {str(e)}")
print("Please try asking another question.")
if __name__ == "__main__":
    moe_llm = MOELLM()
    moe_llm.chat_interface()
```
| null |
Non_BioNLP
|
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## How the MOE System Works
This model is a core component of a larger Multi-Expert Question Answering System. Here's a breakdown of the system's functionality:
1. **Model Loading:** The system loads the "director" LLM and keeps other expert LLMs (e.g., for programming, biology, mathematics) ready for use.
2. **Expert Routing:** When a user asks a question, the system either:
- Uses keyword matching to identify the relevant domain.
- Consults the director LLM to classify the question's category.
3. **Dynamic Expert Loading:** The system loads the chosen expert LLM into memory, optimizing resource usage by releasing any previously active expert.
4. **Response Generation:** The selected expert LLM receives the question and generates a tailored answer.
5. **Chat Interface:** A user-friendly chat interface facilitates interaction with the MOE system.
This MOE approach enhances efficiency and accuracy compared to relying on a single, general-purpose LLM.
## Repository and Additional Information
- Full code: https://huggingface.co/Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit/resolve/main/MOE-LLMs3.py
- GitHub repository: https://github.com/Agnuxo1/NEBULA
## Code Example
The following code demonstrates the implementation of the Multi-Expert Question Answering System:
```python
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
MODEL_CONFIG = {
"director": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_Director_16bit",
"task": "text-generation",
},
"programming": {
"name": "Qwen/Qwen2-1.5B-Instruct",
"task": "text-generation",
},
"biology": {
"name": "Agnuxo/Qwen2-1.5B-Instruct_MOE_BIOLOGY_assistant_16bit",
"task": "text-generation",
},
"mathematics": {
"name": "Qwen/Qwen2-Math-1.5B-Instruct",
"task": "text-generation",
}
}
KEYWORDS = {
"biology": ["cell", "DNA", "protein", "evolution", "genetics", "ecosystem", "organism", "metabolism", "photosynthesis", "microbiology", "célula", "ADN", "proteína", "evolución", "genética", "ecosistema", "organismo", "metabolismo", "fotosíntesis", "microbiología"],
"mathematics": ["Math" "mathematics", "equation", "integral", "derivative", "function", "geometry", "algebra", "statistics", "probability", "ecuación", "integral", "derivada", "función", "geometría", "álgebra", "estadística", "probabilidad"],
"programming": ["python", "java", "C++", "HTML", "scrip", "code", "Dataset", "API", "framework", "debugging", "algorithm", "compiler", "database", "CSS", "JSON", "XML", "encryption", "IDE", "repository", "Git", "version control", "front-end", "back-end", "API", "stack trace", "REST", "machine learning"]
}
class MOELLM:
def __init__(self):
self.current_expert = None
self.current_model = None
self.current_tokenizer = None
self.device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {self.device}")
self.load_director_model()
def load_director_model(self):
"""Loads the director model."""
print("Loading director model...")
model_name = MODEL_CONFIG["director"]["name"]
self.director_tokenizer = AutoTokenizer.from_pretrained(model_name)
self.director_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device)
self.director_pipeline = pipeline(
MODEL_CONFIG["director"]["task"],
model=self.director_model,
tokenizer=self.director_tokenizer,
device=self.device
)
print("Director model loaded.")
def load_expert_model(self, expert):
"""Dynamically loads an expert model, releasing memory from the previous model."""
if expert not in MODEL_CONFIG:
raise ValueError(f"Unknown expert: {expert}")
if self.current_expert != expert:
print(f"Loading expert model: {expert}...")
# Free memory from the current model if it exists
if self.current_model:
del self.current_model
del self.current_tokenizer
torch.cuda.empty_cache()
model_config = MODEL_CONFIG[expert]
self.current_tokenizer = AutoTokenizer.from_pretrained(model_config["name"])
self.current_model = AutoModelForCausalLM.from_pretrained(model_config["name"], torch_dtype=torch.float16).to(self.device)
self.current_expert = expert
print(f"{expert.capitalize()} model loaded.")
return pipeline(
MODEL_CONFIG[expert]["task"],
model=self.current_model,
tokenizer=self.current_tokenizer,
device=self.device
)
def determine_expert_by_keywords(self, question):
"""Determines the expert based on keywords in the question."""
question_lower = question.lower()
for expert, keywords in KEYWORDS.items():
            if any(keyword.lower() in question_lower for keyword in keywords):  # lowercase the keywords so mixed-case entries like "DNA" or "C++" match the lowercased question
return expert
return None
def determine_expert(self, question):
"""Determines which expert should answer the question."""
expert = self.determine_expert_by_keywords(question)
if expert:
print(f"Expert determined by keyword: {expert}")
return expert
prompt = f"Classify the following question into one of these categories: programming, biology, mathematics. Question: {question}\nCategory:"
response = self.director_pipeline(prompt, max_length=100, num_return_sequences=1)[0]['generated_text']
expert = response.split(":")[-1].strip().lower()
if expert not in MODEL_CONFIG:
expert = "director"
print(f"Redirecting question to: {expert}")
return expert
def generate_response(self, question, expert):
"""Generates a response using the appropriate model."""
try:
model = self.load_expert_model(expert)
prompt = f"Answer the following question as an expert in {expert}: {question}\nAnswer:"
response = model(prompt, max_length=200, num_return_sequences=1)[0]['generated_text']
return response.split("Answer:")[-1].strip()
except Exception as e:
print(f"Error generating response: {str(e)}")
return "Sorry, there was an error processing your request. Please try again."
def chat_interface(self):
"""Simple chat interface."""
print("Welcome to the MOE-LLM chat. Type 'exit' to quit.")
while True:
question = input("\nYou: ")
if question.lower() in ['exit', 'quit']:
break
try:
expert = self.determine_expert(question)
response = self.generate_response(question, expert)
print(f"\n{expert.capitalize()}: {response}")
except Exception as e:
print(f"Error in chat: {str(e)}")
print("Please try asking another question.")
if __name__ == "__main__":
    moe_llm = MOELLM()
    moe_llm.chat_interface()
```
|
{"base_model": "Agnuxo/Qwen2-1.5B-Instruct_MOE_CODE_assistant_16bit", "language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "qwen2", "trl", "sft"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 42,236 |