| Column | Type | Range / values |
|---|---|---|
| id | string | length 6–113 |
| author | string | length 2–36 |
| task_category | string | 42 classes |
| tags | list | length 1–4.05k |
| created_time | timestamp[ns, tz=UTC] | 2022-03-02 23:29:04 – 2025-04-10 08:38:38 |
| last_modified | string (date) | 2020-05-14 13:13:12 – 2025-04-19 04:15:39 |
| downloads | int64 | 0 – 118M |
| likes | int64 | 0 – 4.86k |
| README | string | length 30 – 1.01M |
| matched_bigbio_names | list (nullable ⌀) | length 1–8 |
| is_bionlp | string | 3 classes |
| model_cards | string | length 0 – 1M |
| metadata | string | length 2 – 698k |
| source | string | 2 classes |
| matched_task | list (nullable ⌀) | length 1–10 |
| __index_level_0__ | int64 | 0 – 46.9k |
breadlicker45/yahoo-answers-test-model
|
breadlicker45
| null |
[
"transformers",
"pytorch",
"en",
"dataset:breadlicker45/autotrain-data-test2",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | 2022-12-16T13:16:44Z |
2022-12-16T13:20:45+00:00
| 105 | 0 |
---
datasets:
- breadlicker45/autotrain-data-test2
language:
- en
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 3.128325675589278
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2496476946
- CO2 Emissions (in grams): 3.1283
## Validation Metrics
- Loss: 3.511
- Rouge1: 14.002
- Rouge2: 2.968
- RougeL: 11.022
- RougeLsum: 12.335
- Gen Len: 18.900
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/breadlicker45/autotrain-test2-2496476946
```
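For programmatic use, a minimal `transformers` sketch (the model ID comes from this card; it assumes the checkpoint loads as a standard summarization pipeline model):
```python
from transformers import pipeline

# Assumes the checkpoint is compatible with the summarization pipeline.
summarizer = pipeline("summarization", model="breadlicker45/yahoo-answers-test-model")
print(summarizer("I love AutoTrain", max_length=20, min_length=2))
```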
| null |
Non_BioNLP
|
|
{"datasets": ["breadlicker45/autotrain-data-test2"], "language": ["en"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 3.128325675589278}}
|
task
|
[
"SUMMARIZATION"
] | 45,048 |
navteca/roberta-large-squad2
|
navteca
|
question-answering
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-04-06T16:31:09+00:00
| 105 | 0 |
---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- roberta
- question-answering
---
# RoBERTa large model for QA (SQuAD 2.0)
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
It can be used for question answering tasks.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')
# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
#{
#  "answer": "3,520,031",
#  "end": 36,
#  "score": 0.96186668,
#  "start": 27
#}
```
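Since the model is trained on SQuAD 2.0, which contains unanswerable questions, the pipeline's `handle_impossible_answer` flag can be used to let it return an empty answer when the context holds none; a minimal sketch:
```python
# An empty-string answer signals that the model judged the question unanswerable.
result = nlp({
    'question': 'How many rivers flow through Berlin?',
    'context': 'Berlin had a population of 3,520,031 registered inhabitants.'
}, handle_impossible_answer=True)
print(result)
```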
| null |
Non_BioNLP
|
|
{"datasets": ["squad_v2"], "language": "en", "license": "mit", "pipeline_tag": "question-answering", "tags": ["roberta", "question-answering"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,049 |
DrishtiSharma/llama2-7bb-tweet-summarization-gradnorm-0.3
|
DrishtiSharma
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:dialogstudio",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | 2024-02-01T08:16:54Z |
2024-02-01T08:17:42+00:00
| 0 | 0 |
---
base_model: NousResearch/Llama-2-7b-hf
datasets:
- dialogstudio
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama2-7bb-tweet-summarization-gradnorm-0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7bb-tweet-summarization-gradnorm-0.3
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the dialogstudio dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8160
- Rouge Scores: {'rouge1': 93.719779910895, 'rouge2': 78.0799701185797, 'rougeL': 64.91384075272471, 'rougeLsum': 93.71249369436103}
- Bleu Scores: [0.9468715981421053, 0.9340571158071639, 0.906767913949756, 0.8753561378232885]
- Gen Len: 463.0182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:--------:|
| 1.9246 | 1.0 | 220 | 1.8384 | {'rouge1': 92.78080137059182, 'rouge2': 78.71532643138437, 'rougeL': 68.0616149947273, 'rougeLsum': 92.78702835703021} | [0.9079275318266272, 0.8970741286020552, 0.8736002135507472, 0.8466150307526832] | 463.0182 |
| 1.6564 | 2.0 | 440 | 1.8335 | {'rouge1': 93.62527163754612, 'rouge2': 79.14899366889107, 'rougeL': 68.02122989340602, 'rougeLsum': 93.62676386700348} | [0.9282164809556785, 0.9171615801879893, 0.892709310950969, 0.8645188775345913] | 463.0182 |
| 1.3403 | 3.0 | 660 | 1.9481 | {'rouge1': 93.70688850262614, 'rouge2': 78.96026100012381, 'rougeL': 67.37638965440908, 'rougeLsum': 93.70399692691778} | [0.9342903619020663, 0.9225682522334384, 0.8972845918789121, 0.8681853449069523] | 463.0182 |
| 0.9984 | 4.0 | 880 | 2.1537 | {'rouge1': 93.77800041953847, 'rouge2': 78.72204799373465, 'rougeL': 66.56763131340682, 'rougeLsum': 93.77100407824561} | [0.9425931953005738, 0.9302863494509406, 0.9040669212466305, 0.8739193334758137] | 463.0182 |
| 0.7 | 5.0 | 1100 | 2.3692 | {'rouge1': 93.74639046979189, 'rouge2': 78.51569240275262, 'rougeL': 65.93032986525995, 'rougeLsum': 93.73745084400457} | [0.9440175755443134, 0.93171453625075, 0.9052208696375351, 0.8747208115562404] | 463.0182 |
| 0.4947 | 6.0 | 1320 | 2.6590 | {'rouge1': 93.75661844384149, 'rouge2': 78.18805763398609, 'rougeL': 65.29243896759789, 'rougeLsum': 93.75034348574664} | [0.9470358425741272, 0.9342995624545122, 0.9070823690393129, 0.8757451333358709] | 463.0182 |
| 0.3922 | 7.0 | 1540 | 2.8160 | {'rouge1': 93.719779910895, 'rouge2': 78.0799701185797, 'rougeL': 64.91384075272471, 'rougeLsum': 93.71249369436103} | [0.9468715981421053, 0.9340571158071639, 0.906767913949756, 0.8753561378232885] | 463.0182 |
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1
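## Usage
The card ships no inference example; below is a minimal PEFT loading sketch. The repo IDs come from this card, while the prompt template is a hypothetical placeholder (the exact training format is not documented here):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply the fine-tuned adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "DrishtiSharma/llama2-7bb-tweet-summarization-gradnorm-0.3")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Hypothetical prompt; adjust to the template used during fine-tuning.
prompt = "Summarize the following tweet dialog:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```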
| null |
Non_BioNLP
|
|
{"base_model": "NousResearch/Llama-2-7b-hf", "datasets": ["dialogstudio"], "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "model-index": [{"name": "llama2-7bb-tweet-summarization-gradnorm-0.3", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,050 |
ccdv/lsg-pegasus-large-4096
|
ccdv
|
fill-mask
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"summarization",
"long context",
"fill-mask",
"custom_code",
"en",
"arxiv:2210.15497",
"arxiv:1912.08777",
"autotrain_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2023-12-17T21:11:17+00:00
| 35 | 0 |
---
language:
- en
pipeline_tag: fill-mask
tags:
- summarization
- pegasus
- long context
---
# LSG model
**Transformers >= 4.36.1**\
**This model relies on a custom modeling file; you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
This model is adapted from [Pegasus-large](https://huggingface.co/google/pegasus-large) for encoder-decoder tasks without additional pretraining. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences, and does so faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, relying on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is nevertheless recommended to truncate the inputs with the tokenizer (truncation=True) and optionally to pad to a multiple of the block size (pad_to_multiple_of=...). \
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, so you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
```
## Parameters
You can change various parameters, such as:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
num_global_tokens=16,
block_size=64,
sparse_block_size=64,
attention_probs_dropout_prob=0.0,
sparsity_factor=4,
sparsity_type="none",
mask_first_token=True
)
```
## Sparse selection type
There are 6 different sparse selection patterns. The best type is task dependent. \
If `sparse_block_size=0` or `sparsity_type="none"`, only local attention is considered. \
Note that for sequences with length < 2*block_size, the type has no effect.
* `sparsity_type="bos_pooling"` (new)
* weighted average pooling using the BOS token
* Works best in general, especially with a rather large sparsity_factor (8, 16, 32)
* Additional parameters:
* None
* `sparsity_type="norm"`, select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="pooling"`, use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* `sparsity_type="lsh"`, use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* `sparsity_type="stride"`, use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* `sparsity_type="block_stride"`, use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
## Tasks
Seq2Seq example for summarization:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
```
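The snippet above only runs a forward pass; to actually produce a summary, run generation on the same inputs. A minimal sketch, assuming the standard seq2seq `generate` API carries over to the LSG conversion:
```python
# Beam-search generation on the tokenized inputs from above.
output_ids = model.generate(**token_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```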
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-pegasus-large-4096",
trust_remote_code=True,
pass_global_tokens_to_decoder=True, # Pass encoder global tokens to decoder
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-pegasus-large-4096")
SENTENCE = "This is a test sequence to test the model. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
padding="max_length", # Optional but recommended
truncation=True # Optional but recommended
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**Pegasus**
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| null |
Non_BioNLP
|
|
{"language": ["en"], "pipeline_tag": "fill-mask", "tags": ["summarization", "pegasus", "long context"]}
|
task
|
[
"SUMMARIZATION"
] | 45,051 |
e1879/marian-finetuned-kde4-en-to-zh-tw
|
e1879
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-19T12:29:51Z |
2024-07-19T13:33:43+00:00
| 107 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-zh
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-zh-tw
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-zh_TW
split: train
args: en-zh_TW
metrics:
- type: bleu
value: 40.065781493415884
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/e1805/huggingface/runs/nee38wft)
# marian-finetuned-kde4-en-to-zh-tw
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9680
- Bleu: 40.0658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.4
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.19.1
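No usage example is included in this card; a minimal sketch with the `transformers` translation pipeline, assuming the checkpoint behaves as a standard Marian translation model:
```python
from transformers import pipeline

translator = pipeline("translation", model="e1879/marian-finetuned-kde4-en-to-zh-tw")
print(translator("Default to expanded threads"))  # hypothetical KDE-style input string
```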
| null |
Non_BioNLP
|
|
{"base_model": "Helsinki-NLP/opus-mt-en-zh", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-zh-tw", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-zh_TW", "split": "train", "args": "en-zh_TW"}, "metrics": [{"type": "bleu", "value": 40.065781493415884, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,052 |
auhide/bert-base-ner-bulgarian
|
auhide
|
token-classification
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"bg",
"dataset:wikiann",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-04T16:07:49Z |
2024-03-04T16:30:43+00:00
| 71 | 0 |
---
datasets:
- wikiann
language:
- bg
license: cc-by-4.0
metrics:
- f1
pipeline_tag: token-classification
widget:
- text: Философът Барух Спиноза е роден в Амстердам.
model-index:
- name: bert-base-ner-bulgarian
results: []
---
# 🇧🇬 BERT - Bulgarian Named Entity Recognition
This is the [rmihaylov/bert-base-bg](https://huggingface.co/rmihaylov/bert-base-bg) model, fine-tuned on a Bulgarian subset of [wikiann](https://huggingface.co/datasets/wikiann).
It achieves an F1-score of *0.99* on that dataset.
## Usage
Import the libraries:
```python
from pprint import pprint
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
```
Load the model:
```python
MODEL_ID = "auhide/bert-base-ner-bulgarian"
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
ner = pipeline(task="ner", model=model, tokenizer=tokenizer)
```
Do inference:
```python
text = "Философът Барух Спиноза е роден в Амстердам."
pprint(ner(text))
```
```sh
[{'end': 13,
'entity': 'B-PER',
'index': 3,
'score': 0.9954899,
'start': 9,
'word': '▁Бар'},
{'end': 15,
'entity': 'I-PER',
'index': 4,
'score': 0.9660787,
'start': 13,
'word': 'ух'},
{'end': 23,
'entity': 'I-PER',
'index': 5,
'score': 0.99728084,
'start': 15,
'word': '▁Спиноза'},
{'end': 43,
'entity': 'B-LOC',
'index': 9,
'score': 0.8990479,
'start': 33,
'word': '▁Амстердам'}]
```
Note: There are three entity types: `PER`, `ORG`, and `LOC`.
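The output above is split into subword pieces (e.g. `▁Бар` / `ух` / `▁Спиноза`); to merge them into whole entity spans, the pipeline's `aggregation_strategy` argument can be used. A minimal sketch:
```python
# Group subword tokens into whole entity spans (PER/ORG/LOC).
ner = pipeline(task="ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
pprint(ner(text))
```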
| null |
Non_BioNLP
|
|
{"datasets": ["wikiann"], "language": ["bg"], "license": "cc-by-4.0", "metrics": ["f1"], "pipeline_tag": "token-classification", "widget": [{"text": "Философът Барух Спиноза е роден в Амстердам."}], "model-index": [{"name": "bert-base-ner-bulgarian", "results": []}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,053 |
sail/Sailor-1.8B-Chat-gguf
|
sail
| null |
[
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"dataset:cerebras/SlimPajama-627B",
"dataset:Skywork/SkyPile-150B",
"dataset:allenai/MADLAD-400",
"dataset:cc100",
"dataset:CohereForAI/aya_dataset",
"dataset:CohereForAI/aya_collection",
"dataset:Open-Orca/OpenOrca",
"arxiv:2404.03608",
"base_model:sail/Sailor-1.8B",
"base_model:quantized:sail/Sailor-1.8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-03-03T05:22:24Z |
2024-12-21T10:40:51+00:00
| 312 | 3 |
---
base_model: sail/Sailor-1.8B
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
language:
- en
- zh
- id
- th
- vi
- ms
- lo
license: apache-2.0
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
---
<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>
Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscapes of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B versions for different requirements.
We further fine-tune the base models with open-source datasets to get instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.
> The logo was generated by MidJourney
## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)
## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already have strong performance on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction-tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset),
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models.
### GGUF model list
| Name | Quant method | Bits | Size | Use case |
| ------------------------------------------------------------ | ------------ | ---- | ------- | ------------------------------------------------------------ |
| [ggml-model-Q2_K.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q2_K.gguf) | Q2_K | 2 | 847 MB | very small, significant quality loss ❗️ not recommended for most purposes |
| [ggml-model-Q3_K_L.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q3_K_L.gguf) | Q3_K_L | 3 | 1.06 GB | small, substantial quality loss |
| [ggml-model-Q3_K_M.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q3_K_M.gguf) | Q3_K_M | 3 | 1.02 GB | small, balanced quality |
| [ggml-model-Q3_K_S.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q3_K_S.gguf) | Q3_K_S | 3 | 954 MB | very small, high quality loss |
| [ggml-model-Q4_K_M.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q4_K_M.gguf) | Q4_K_M | 4 | 1.22 GB | small, balanced quality |
| [ggml-model-Q4_K_S.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q4_K_S.gguf) | Q4_K_S | 4 | 1.16 GB | small, greater quality loss |
| [ggml-model-Q5_K_M.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q5_K_M.gguf) | Q5_K_M | 5 | 1.38 GB | small, balanced quality |
| [ggml-model-Q5_K_S.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q5_K_S.gguf) | Q5_K_S | 5 | 1.33 GB | small, very low quality loss |
| [ggml-model-Q6_K.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q6_K.gguf) | Q6_K | 6 | 1.58 GB | small, extremely low quality loss |
| [ggml-model-Q8_0.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-Q8_0.gguf) | Q8_0 | 8 | 1.96 GB | small, extremely low quality loss |
| [ggml-model-f16.gguf](https://huggingface.co/sail/Sailor-1.8B-Chat-gguf/blob/main/ggml-model-f16.gguf) | f16 | 16 | 3.68 GB | medium, no quality loss |
### How to run with `llama.cpp`
```shell
# install llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make
pip install -r requirements.txt
# generate with llama.cpp
./main -ngl 24 -m ggml-model-Q4_K_M.gguf -p "<|im_start|>question\nCara memanggang ikan?\n<|im_start|>answer\n" --temp 0.7 --repeat_penalty 1.1 -n 400 -e
```
> Change `-ngl 24` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
### How to run with `llama-cpp-python`
```shell
pip install llama-cpp-python
```
```python
import llama_cpp
import llama_cpp.llama_tokenizer
# load model
llama = llama_cpp.Llama.from_pretrained(
repo_id="sail/Sailor-4B-Chat-gguf",
filename="ggml-model-Q4_K_M.gguf",
tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained("sail/Sailor-4B-Chat"),
n_gpu_layers=40,
n_threads=8,
verbose=False,
)
system_role= 'system'
user_role = 'question'
assistant_role = "answer"
system_prompt= \
'You are an AI assistant named Sailor created by Sea AI Lab. \
Your answer should be friendly, unbiased, faithful, informative and detailed.'
system_prompt = f"<|im_start|>{system_role}\n{system_prompt}<|im_end|>"
# inference example
output = llama(
system_prompt + '\n' + f"<|im_start|>{user_role}\nCara memanggang ikan?\n<|im_start|>{assistant_role}\n",
max_tokens=256,
temperature=0.7,
top_p=0.75,
top_k=60,
stop=["<|im_end|>", "<|endoftext|>"]
)
print(output['choices'][0]['text'])
```
### How to build demo
Install `llama-cpp-python` and `gradio`, then run [script](https://github.com/sail-sg/sailor-llm/blob/main/demo/llamacpp_demo.py).
# License
Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).
## Citation
If you find Sailor useful, please cite our work as follows:
```
@inproceedings{dou-etal-2024-sailor,
title = "Sailor: Open Language Models for South-{E}ast {A}sia",
author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
year = "2024",
}
```
# Contact Us
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
| null |
Non_BioNLP
|
|
{"base_model": "sail/Sailor-1.8B", "datasets": ["cerebras/SlimPajama-627B", "Skywork/SkyPile-150B", "allenai/MADLAD-400", "cc100", "CohereForAI/aya_dataset", "CohereForAI/aya_collection", "Open-Orca/OpenOrca"], "language": ["en", "zh", "id", "th", "vi", "ms", "lo"], "license": "apache-2.0", "tags": ["multilingual", "sea", "sailor", "sft", "chat", "instruction"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,055 |
qihoo360/360Zhinao-7B-Chat-32K
|
qihoo360
|
text-generation
|
[
"transformers",
"safetensors",
"zhinao",
"text-generation",
"qihoo360",
"奇虎360",
"360Zhinao",
"pretrain",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2311.09198",
"arxiv:2309.16039",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | 2024-04-10T07:31:30Z |
2024-04-28T03:14:12+00:00
| 65 | 1 |
---
language:
- zh
- en
library_name: transformers
license: apache-2.0
tags:
- qihoo360
- 奇虎360
- zhinao
- 360Zhinao
- pretrain
---
<div align="center">
<h1>
360Zhinao (360智脑)
</h1>
</div>
<div align="center">
🤗 <a href="https://huggingface.co/qihoo360">HuggingFace</a>   |   
🤖 <a href="https://www.modelscope.cn/profile/qihoo360">ModelScope</a>   |   
💬 <a href="./assets/WeChat.png">WeChat (微信)</a>  
</div>
<br>
<p align="center">
Feel free to visit 360Zhinao's official website<a href="https://ai.360.com"> https://ai.360.com</a> for a hands-on experience.
</p>
<br>
# Introduction
🎉🎉🎉 We released the 360Zhinao model series:
- **360Zhinao-7B-Base**
- **360Zhinao-7B-Chat-4K**
- **360Zhinao-7B-Chat-32K**
- **360Zhinao-7B-Chat-360K**
Notable features of our 360Zhinao models are:
- **Base Model:** Leveraging a high-quality corpus of 3.4 trillion tokens consisting mainly of Chinese, English and code, we achieved competitive performance on relevant benchmarks against other 7B models.
- **Chat Models:** Powerful chat capabilities and three context lengths of 4K, 32K and 360K. 360K (around 500k Chinese characters) is the longest context length among Chinese open-sourced models upon release (Apr. 11, 2024).
<br>
# News and Updates
- [2024.04.12] We released **360Zhinao-7B** v1.0, including the base model and three chat models with context lengths 4K, 32K and 360K.
<br>
# Table of contents
- [Download URL](#Download-URL)
- [Model Evaluation](#Model-Evaluation)
- [Quickstart](#Quickstart)
- [Model Inference](#Model-Inference)
- [Model Finetune](#Model-Finetune)
- [License](#License)
<br>
# Download URL
| Size | Model | BF16 | Int4|
|-|-|-|-|
| 7B | 360Zhinao-7B-Base | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Base/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Base">🤗</a> | |
| 7B | 360Zhinao-7B-Chat-4K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K-Int4">🤗</a> |
| 7B | 360Zhinao-7B-Chat-32K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K-Int4">🤗</a> |
| 7B | 360Zhinao-7B-Chat-360K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K-Int4">🤗</a> |
<br>
# Model Evaluation
## Base Model
We evaluate our model on [OpenCompass](https://opencompass.org.cn/home), more specifically on C-Eval, AGIEval, MMLU, CMMLU, HellaSwag, MATH, GSM8K, HumanEval, MBPP, BBH and LAMBADA.
These benchmarks test the model on
natural language understanding, knowledge, mathematics, code generation and logical reasoning, etc.
Results are listed as follows and can be viewed or reproduced on the [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm).
| <div style="width: 100pt">Model</div> | AVG | CEval | AGIEval | MMLU | CMMLU | HellaSwag | MATH | GSM8K | HumanEval | MBPP | BBH | LAMBADA |
|:----------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Baichuan2-7B | 41.49 | 56.3 | 34.6 | 54.7 | 57 | 67 | 5.4 | 24.6 | 17.7 | 24 | 41.8 | 73.3 |
| Baichuan-7B | 31.94 | 44.7 | 24.6 | 41.5 | 44.6 | 68.4 | 2.5 | 9.6 | 9.1 | 6.4 | 32.8 | 67.1 |
| ChatGLM3-6B | **58.67** | 67 | 47.4 | 62.8 | 66.5 | 76.5 | 19.2 | 61 | 44.5 | **57.2** | **66.2** | 77.1 |
| DeepSeek-7B | 39.8 | 45 | 24 | 49.3 | 46.8 | 73.4 | 4.2 | 18.3 | 25 | 36.4 | 42.8 | 72.6 |
| InternLM2-7B | 58.01 | 65.7 | 50.2 | 65.5 | 66.2 | 79.6 | 19.9 | **70.6** | 41.5 | 42.4 | 64.4 | 72.1 |
| InternLM-7B | 39.33 | 53.4 | 36.9 | 51 | 51.8 | 70.6 | 6.3 | 31.2 | 13.4 | 14 | 37 | 67 |
| LLaMA-2-7B | 33.27 | 32.5 | 21.8 | 46.8 | 31.8 | 74 | 3.3 | 16.7 | 12.8 | 14.8 | 38.2 | 73.3 |
| LLaMA-7B | 30.35 | 27.3 | 20.6 | 35.6 | 26.8 | 74.3 | 2.9 | 10 | 12.8 | 16.8 | 33.5 | 73.3 |
| Mistral-7B-v0.1 | 47.67 | 47.4 | 32.8 | 64.1 | 44.7 | 78.9 | 11.3 | 47.5 | 27.4 | 38.6 | 56.7 | 75 |
| MPT-7B | 30.06 | 23.5 | 21.3 | 27.5 | 25.9 | 75 | 2.9 | 9.1 | 17.1 | 22.8 | 35.6 | 70 |
| Qwen1.5-7B | 55.12 | 73.57 | **50.8** | 62.15 | 71.84 | 72.62 | **20.36** | 54.36 | **53.05** | 36.8 | 40.01 | 70.74 |
| Qwen-7B | 49.53 | 63.4 | 45.3 | 59.7 | 62.5 | 75 | 13.3 | 54.1 | 27.4 | 31.4 | 45.2 | 67.5 |
| XVERSE-7B | 34.27 | 61.1 | 39 | 58.4 | 60.8 | 73.7 | 2.2 | 11.7 | 4.9 | 10.2 | 31 | 24 |
| Yi-6B | 47.8 | 73 | 44.3 | 64 | **73.5** | 73.1 | 6.3 | 39.9 | 15.2 | 23.6 | 44.9 | 68 |
| **360Zhinao-7B** | 56.15 | **74.11** | 49.49 | **67.44** | 72.38 | **83.05** | 16.38 | 53.83 | 35.98 | 42.4 | 43.95 | **78.59** |
## Chat Models
The 4K and 32K models are trained separately with the same 4K SFT data.
To train the long-context models, we adopted a two-stage approach.
**First stage**: We increased RoPE base and extended the context length to 32K.
- Firstly, we performed Continual Pretraining on approximately 5B tokens with a 32K context window.
- Then during the SFT stage, we finetuned the model using long data from various sources, including high-quality human-labeled 32K data.
**Second stage**: We extended the context length to 360K, training with the following data:
- A small amount of high-quality human-labeled super-long data.
- Due to the scarcity of annotated super-long data, we constructed various forms of synthetic data.
- Multi-Doc QA: Similar to [Ziya-Reader](https://arxiv.org/abs/2311.09198), we generated multi-document QA pairs based on 360's database. Multiple QA pairs are constructed for one row of Multi-Doc QA data input, resulting in a multi-turn format and significantly improving the training efficiency.
- Single-Doc QA: Similar to [LLama2 Long](https://arxiv.org/abs/2309.16039), we constructed multi-turn QA data based on different segments within one row of long-text input.
We evaluated our models across various lengths and benchmarks.
- ### Long Context Benchmarks
We evaluated our 32K and 360K models on [LongBench](https://github.com/THUDM/LongBench), a multi-task bilingual benchmark for long contexts. We report results on **Chinese** tasks most relevant to downstream applications: Single/Multi-Doc QA, Summarization, Few-Shot Learning and Code Completion.
| Model | Avg | Single-Doc QA | Multi-Doc QA | Summarization | Few-Shot Learning | Code Completion |
| :------------------------ |:---------:|:--------:|:---------:|:---------:|:------------:|:---------:|
| GPT-3.5-Turbo-16k | 37.84 | 61.2 | 28.7 | 16 | 29.2 | 54.1 |
| ChatGLM2-6B-32k | 37.16 | 51.6 | 37.6 | 16.2 | 27.7 | 52.7 |
| ChatGLM3-6B-32k | 44.62 | **62.3** | 44.8 | 17.8 | 42 | 56.2 |
| InternLM2-Chat-7B | 42.20 | 56.65 | 29.15 | **17.99** | 43.5 | **63.72** |
| Qwen1.5-Chat-7B | 36.75 | 52.85 | 30.08 | 14.28 | 32 | 54.55 |
| Qwen1.5-Chat-14B | 39.80 | 60.39 | 27.99 | 14.77 | 37 | 58.87 |
| 360Zhinao-7B-Chat-32K | **45.18** | 57.18 | **48.06** | 15.03 | **44** | 61.64 |
- ### 360Zhinao-7B-Chat-360K on "NeedleInAHaystack"
[NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) places one small piece of information in different positions of long text and queries this information as a test of LLM's long-context capabilities.
360Zhinao-7B-Chat-360K achieves over 98% accuracy on both English and Chinese NeedleInAHaystack tasks.
- English version(same as [NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack))
<p align="center">
<img src="assets/360Zhinao-7B-Chat-360K.en_score.png" width="600" />
</p>
**needle**:The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.
**query**:What is the best thing to do in San Francisco?
- Chinese version
<p align="center">
<img src="assets/360Zhinao-7B-Chat-360K.zh_score.png" width="600" />
</p>
We constructed the Chinese version following the [SuperCLUE-200K benchmark](https://mp.weixin.qq.com/s/QgoRf2LB-7vc3vTFOHJkpw):
**haystack**:Chinese novels.
**needle**:(in Chinese) 王莽是一名勤奋的店员,他每天凌晨就起床,赶在第一缕阳光照亮大地之前到达店铺,为即将开始的一天做准备。他清扫店铺,整理货架,为顾客提供方便。他对五金的种类和用途了如指掌,无论顾客需要什么,他总能准确地找到。\n然而,他的老板刘秀却总是对他吹毛求疵。刘秀是个挑剔的人,他总能在王莽的工作中找出一些小错误,然后以此为由扣他的工资。他对王莽的工作要求非常严格,甚至有些过分。即使王莽做得再好,刘秀也总能找出一些小问题,让王莽感到非常沮丧。\n王莽虽然对此感到不满,但他并没有放弃。他知道,只有通过自己的努力,才能获得更好的生活。他坚持每天早起,尽管他知道那天可能会再次被刘秀扣工资。他始终保持微笑,尽管他知道刘秀可能会再次对他挑剔。
**query**:(in Chinese) 王莽在谁的手下工作?
<br>
# Quickstart
We provide simple examples illustrating the use of 360Zhinao-7B-Base and 360Zhinao-7B-Chat on 🤖ModelScope and 🤗Transformers.
## Dependency Installation
- python >= 3.8
- pytorch >= 2.0
- transformers >= 4.37.2
- CUDA >= 11.4
```shell
pip install -r requirements.txt
```
Optionally, we recommend installing Flash-Attention 2 to improve performance and reduce memory footprint.
>flash-attn >= 2.3.6
```shell
FLASH_ATTENTION_FORCE_BUILD=TRUE pip install flash-attn==2.3.6
```
## 🤗 Transformers
### Demonstration of Base Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME_OR_PATH,
device_map="auto",
trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
# Prompt: "China's 24 solar terms", listing the first five (立春, 雨水, 惊蛰, 春分, 清明)
inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
### Demonstration of Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME_OR_PATH,
device_map="auto",
trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
messages = []
# round 1 (user asks: "Introduce Andy Lau")
messages.append({"role": "user", "content": "介绍一下刘德华"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
# round 2 (follow-up: "What are his representative works?")
messages.append({"role": "user", "content": "他有什么代表作?"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
```
## 🤖 ModelScope
### Demonstration of Base Model Inference
```python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME_OR_PATH,
device_map="auto",
trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
# Prompt: "China's 24 solar terms", listing the first five (立春, 雨水, 惊蛰, 春分, 清明)
inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
### Demonstration of Chat Model Inference
```python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig
MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME_OR_PATH,
device_map="auto",
trust_remote_code=True)
generation_config = GenerationConfig.from_pretrained(
MODEL_NAME_OR_PATH,
trust_remote_code=True)
messages = []
# round 1 (user asks: "Introduce Andy Lau")
messages.append({"role": "user", "content": "介绍一下刘德华"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
# round 2 (follow-up: "What are his representative works?")
messages.append({"role": "user", "content": "他有什么代表作?"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
```
## CLI Demo
Run the following in a terminal for a command-line interface:
```shell
python cli_demo.py
```
<p align="center">
<img src="assets/cli_demo.gif" width="600" />
</p>
## Web Demo
```shell
streamlit run web_demo.py
```
<p align="center">
<img src="assets/web_demo.gif" width="600" />
</p>
## API Demo
Launch the API server:
```shell
python openai_api.py
```
Then send a request with generation parameters:
```shell
curl 'http://localhost:8360/v1/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
"max_new_tokens": 200,
"do_sample": true,
"top_k": 0,
"top_p": 0.8,
"temperature": 1.0,
"repetition_penalty": 1.0,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "你好"}
]
}'
```
<br>
# Model Inference
## Quantization
We provide quantization schemes based on AutoGPTQ and release Int4-quantized models.
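A minimal loading sketch for the Int4 checkpoints, assuming they load through the same `transformers` path as the BF16 models (you may additionally need `auto-gptq` and `optimum` installed):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "qihoo360/360Zhinao-7B-Chat-4K-Int4"  # Int4 repos are released for each chat model
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto", trust_remote_code=True)
```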
## Deployment
### vLLM Installation
We recommend using `vLLM==0.3.3`.
If you are using **CUDA 12.1 and PyTorch 2.1**, you can install vLLM directly with:
```shell
pip install vllm==0.3.3
```
Otherwise, please refer to the official vLLM [Installation Instructions](https://docs.vllm.ai/en/latest/getting_started/installation.html).
After installation, perform the following steps:
1. Copy `vllm/zhinao.py` into `vllm/model_executor/models` in your vLLM installation directory (inside your python/conda env).
2. Copy `vllm/serving_chat.py` into `vllm/entrypoints/openai` in your vLLM installation directory.
3. Then add the following line to `vllm/model_executor/models/__init__.py`:
```python
"ZhinaoForCausalLM": ("zhinao", "ZhinaoForCausalLM"),
```
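Steps 1 and 2 can be scripted. A minimal sketch that locates the installed vLLM package and copies the files, assuming it is run from this repository's root:

```python
import os
import shutil

import vllm

vllm_dir = os.path.dirname(vllm.__file__)  # installed vLLM package directory
shutil.copy("vllm/zhinao.py", os.path.join(vllm_dir, "model_executor", "models"))
shutil.copy("vllm/serving_chat.py", os.path.join(vllm_dir, "entrypoints", "openai"))
```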
### vLLM Service Start
Start the service:
```shell
python -m vllm.entrypoints.openai.api_server \
--served-model-name 360Zhinao-7B-Chat-4K \
--model qihoo360/360Zhinao-7B-Chat-4K \
--trust-remote-code \
--tensor-parallel-size 1 \
--max-model-len 4096 \
--host 0.0.0.0 \
--port 8360
```
Use curl to request the service:
```shell
curl http://localhost:8360/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "360Zhinao-7B-Chat-4K",
"max_tokens": 200,
"top_k": -1,
"top_p": 0.8,
"temperature": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "你好"}
],
"stop": [
"<eod>",
"<|im_end|>",
"<|im_start|>"
]
}'
```
Use Python to request the service:
```python
from openai import OpenAI
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8360/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
chat_response = client.chat.completions.create(
model="360Zhinao-7B-Chat-4K",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "你好"},
],
stop=[
"<eod>",
"<|im_end|>",
"<|im_start|>"
],
presence_penalty=0.0,
frequency_penalty=0.0
)
print("Chat response:", chat_response)
```
> If you need to enable a repetition penalty, we recommend setting `presence_penalty` and `frequency_penalty` instead of `repetition_penalty`.
<br>
# Model Finetune
## Training data
Training Data: `data/training_data_sample.json`. This example data contains 10,000 rows sampled from [multiturn_chat_0.8M](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) and converted to the format below.
Data Format:
```json
[
{
"id": 1,
"conversations": [
{
"from": "system",
"value": "You are a helpful assistant."
},
{
"from": "user",
"value": "您好啊"
},
{
"from": "assistant",
"value": "你好!我今天能为您做些什么?有什么问题或需要帮助吗? 我在这里为您提供服务。"
}
]
}
]
```
## Finetuning scripts
```shell
set -x
HOSTFILE=hostfile
DS_CONFIG=./finetune/ds_config_zero2.json
# PARAMS
LR=5e-6
EPOCHS=3
MAX_LEN=4096
BATCH_SIZE=4
NUM_NODES=1
NUM_GPUS=8
MASTER_PORT=29500
IS_CONCAT=False # Whether to concatenate to maximum length (MAX_LEN)
DATA_PATH="./data/training_data_sample.json"
MODEL_PATH="qihoo360/360Zhinao-7B-Base"
OUTPUT_DIR="./outputs/"
deepspeed --hostfile ${HOSTFILE} \
--master_port ${MASTER_PORT} \
--num_nodes ${NUM_NODES} \
--num_gpus ${NUM_GPUS} \
finetune.py \
--report_to "tensorboard" \
--data_path ${DATA_PATH} \
--model_name_or_path ${MODEL_PATH} \
--output_dir ${OUTPUT_DIR} \
--model_max_length ${MAX_LEN} \
--num_train_epochs ${EPOCHS} \
--per_device_train_batch_size ${BATCH_SIZE} \
--gradient_accumulation_steps 1 \
--save_strategy steps \
--save_steps 200 \
--learning_rate ${LR} \
--lr_scheduler_type cosine \
--adam_beta1 0.9 \
--adam_beta2 0.95 \
--adam_epsilon 1e-8 \
--max_grad_norm 1.0 \
--weight_decay 0.1 \
--warmup_ratio 0.01 \
--gradient_checkpointing True \
--bf16 True \
--tf32 True \
--deepspeed ${DS_CONFIG} \
--is_concat ${IS_CONCAT} \
--logging_steps 1 \
--log_on_each_node False
```
```shell
bash finetune/ds_finetune.sh
```
- Configuring `HOSTFILE` switches between single-machine and multi-machine training.
- Configuring `ds_config` switches between ZeRO stages 1, 2 and 3.
- `fp16` and `bf16` configure mixed-precision training; `bf16` is recommended for consistency with the pretrained model.
- `is_concat` configures whether the training data is concatenated (packed) up to `MAX_LEN` (a conceptual sketch follows below).
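Illustratively, packing when `IS_CONCAT=True` can be thought of as greedy concatenation (this is a sketch, not the actual `finetune.py` implementation):

```python
def pack_samples(tokenized_samples, max_len):
    """Greedily concatenate tokenized samples into sequences of at most max_len tokens."""
    packed, current = [], []
    for ids in tokenized_samples:  # each item is a list of token ids for one sample
        if current and len(current) + len(ids) > max_len:
            packed.append(current)
            current = []
        current.extend(ids)
    if current:
        packed.append(current)
    return packed
```

Packing reduces padding waste: several short conversations share one `MAX_LEN` sequence instead of each occupying a mostly-padded row.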
<br>
# License
The source code of this repository is released under the Apache 2.0 open-source license.
360Zhinao open-source models support commercial use. If you wish to use these models or continue training them for commercial purposes, please contact us via email ([email protected]) to apply. For the specific license agreement, please see [<<360 Zhinao Open-Source Model License>>](https://github.com/Qihoo360/360zhinao/blob/main/360%E6%99%BA%E8%84%91%E5%BC%80%E6%BA%90%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E8%AF%81.txt).
| null |
Non_BioNLP
|
{"language": ["zh", "en"], "library_name": "transformers", "license": "apache-2.0", "tags": ["qihoo360", "奇虎360", "zhinao", "360Zhinao", "pretrain"]}
|
task
|
[
"SUMMARIZATION"
] | 45,056 |
mini1013/master_item_ac
|
mini1013
|
text-classification
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
"region:us"
] | 2024-11-25T09:19:09Z |
2024-11-25T09:19:32+00:00
| 554 | 0 |
---
base_model: klue/roberta-base
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[자체제작] 14k 콩사다리 체인 반지 핑크_D style(1푼 굵기)_10호 (주)제이디아이인터내셔널'
- text: 실리콘 동전 지갑 심플 캐릭터 [on] 블랙캣(동전지갑) 비150
- text: 체크 남자 베레모 아빠 모자 헌팅캡 패션 빵모자 외출 베이지체크 (4JS) 포제이스
- text: TIMBERLAND 남성 앨번 6인치 워터프루프 워커부츠_TB0A1OIZC641 070(250) 비츠컴퍼니
- text: 라인댄스화 헬스화 스포츠 여성 재즈화 댄스화 볼룸 모던 미드힐 37_블랙 스트레이트 3.5cm/굽(메쉬) 사랑옵다
inference: true
model-index:
- name: SetFit with klue/roberta-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.9385943021823656
name: Metric
---
# SetFit with klue/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch is shown below).
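A minimal training sketch under this recipe (the two-row dataset is a placeholder assembled from the labeled examples listed under `Model Labels`; see `Training Hyperparameters` for the full configuration):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder dataset built from two of the labeled examples shown in "Model Labels".
train_ds = Dataset.from_dict({
    "text": [
        "남녀공용 멀티스카프 목토시 반다나 헤어밴드 두건 블랙 비오는밤",  # label 2.0
        "만다리나덕 토트백 PIETRO P4T05163 은하수몰",                     # label 0.0
    ],
    "label": [2.0, 0.0],
})

model = SetFitModel.from_pretrained("klue/roberta-base")  # LogisticRegression head by default
args = TrainingArguments(
    batch_size=512,
    num_epochs=20,
    num_iterations=40,
    body_learning_rate=2e-05,
    head_learning_rate=2e-05,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```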
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 17 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2.0 | <ul><li>'남녀공용 멀티스카프 목토시 반다나 헤어밴드 두건 블랙 비오는밤'</li><li>'후드 모자 귀달이 겨울 털모자 동물 목돌이 03.브라운 뿔샵'</li><li>'햇빛 뒷목가리개 메쉬 통풍 선가드 자외선차단썬캡가드 그늘모자 쿨메쉬모자_그레이 에스더블유컴퍼니'</li></ul> |
| 9.0 | <ul><li>'[LAP](강남점)아델라 핸들 미니 크로스백 (AP7AB208) 제트블랙(ZB)_FREE 신세계백화점'</li><li>'파스텔슬링백 힙색 미니 크로스 숄더백 그린 김후철'</li><li>'[메트로시티]봉봉백 클러치백 미듐 M233MQ3852Z 에이케이에스앤디 (주) AK인터넷쇼핑몰'</li></ul> |
| 15.0 | <ul><li>'크리스마스 뱃지 브로치 배지 19종 세트 및 낱개 봉제 사슴 5 구매대행 이음'</li><li>'오드스튜디오 ODDSTUDIO 베이직 니트 체크 머플러 - 21COLOR 블랙 CS스페이스'</li><li>'넥케이프 넥커프스 페이크카라 레이어드카라 셔츠카라 1-카라-화이트 행복나라'</li></ul> |
| 13.0 | <ul><li>'펄 쥬얼리 보석함 여행용 포켓 미니 악세사리 보관함 케이스 C타입-베이비핑크 제일사'</li><li>'[갤러리아] [비앤비골드] 14K 촘촘볼 블루큐빅 도넛링 반지 SRS39135 14K 화이트골드_1호 한화갤러리아(주)'</li><li>'미니골드 김천점 14K 18K 트레버 커플링 남자 여자 금반지 RJUC4047 RJUC4048 베이직하고 심플한 디자인 여자_14K옐로우골드 미니골드 김천점'</li></ul> |
| 1.0 | <ul><li>'[베어파우](신세계강남점)(BEARPAW) 남성 털 슬리퍼 MARY MENS 블랙 K814001ND-M BLACK (K814001ND)_280 주식회사 에스에스지닷컴'</li><li>'노스페이스 뮬 슬립온 브이모션 - NS93P53A 블랙_290 롯데백화점2관'</li><li>'사무실 남자 슬리퍼 가죽 남성 빅 사이즈 48 47 사무용 신입생코디실내화 blue_38 리마106'</li></ul> |
| 7.0 | <ul><li>'부드러운 슈트리 신발주름방지 신발모양유지 신발지탱 225 245 mm 커피와 기저귀'</li><li>'[갓성비] 꿀조합 애니비츠 세트 캐릭터 신발 악세사리 포켓몬 스누피 커비편의점SET 애니팝'</li><li>'MSMAX Jazz Dance Shoes Split Sole Men Dancing Sneakers High Top Boots for Women Silver 10.5 M Silver_11 Narrow 디아트479'</li></ul> |
| 11.0 | <ul><li>'캐리어 수트케이스 양면 개방형 기내용 바퀴가방 화이트_26인치 피스온트레이드'</li><li>'클래시 패스 커버여권 포트월렛 포트파우치 파우치 여행지갑 포트 케이스 (01 레모니) 주식회사유마켓'</li><li>'클래시패스커버 (안티스키밍 여권케이스) (10블랙) JTEC'</li></ul> |
| 4.0 | <ul><li>'고급 안경집 선글라스집 휴대용 케이스 파우치 하드 보관함 블랙 다온마켓'</li><li>'고급 올 칼라 크리스탈 다중 비즈 안경 줄 마스크 걸이 상품선택_블랙(골드) 리미몰'</li><li>'아이업꽈배기인조가죽안경줄10p세트선글라스줄 마니또야'</li></ul> |
| 14.0 | <ul><li>'[갤러리아] [Prada]프라다 23FW 사피아노 반지갑 블랙 2MO004 QME F0002 2MO004 QME F0002 FREE 한화갤러리아(주)'</li><li>'닥스 액세서리 [OSCAR][오스카][제네시스 전용] 네이비 프리미엄 토고 수입 가죽 차키케이스 DBHO2F573N2 XXX 주식회사 LF'</li><li>'톰브라운 23SS 남성 페블그레인 머니클립 블랙 MAW025L 00198 001 ONE SIZE 주식회사 이지겟인터내셔널'</li></ul> |
| 0.0 | <ul><li>'[롯데백화점]닥스ACC [선물포장/쇼핑백동봉] [GRIDⅡ] 브라운 패턴배색 소가죽 클러치백 DBBA2F266W3 롯데백화점_'</li><li>'만다리나덕 토트백 PIETRO P4T05163 은하수몰'</li><li>'내셔널지오그래픽 N245ATO510 베이직 에코백 BLACK TNSC'</li></ul> |
| 16.0 | <ul><li>'올림머리 메탈프레임 반머리 꼬임 집게핀 114 유광스틸 7cm 이지 아트 프로덕션 (EG ART PRODUCTION)'</li><li>'꼬임 메탈프레임 반머리 올림머리 집게핀 114 무광로즈 7cm 네오몰'</li><li>'폼폼 방울털 장식 미니 머리끈 포인트 헤어끈 퍼플 1P 은강'</li></ul> |
| 8.0 | <ul><li>'기모 롱 오버 니삭스 겨울 스타킹 다리 워머 롱삭스 롱양말 무릎 니하이 브라운 린이팸'</li><li>'최대12켤레 남여 국산양말 장목/니트/균일가/신상/중목/발목/수면/학생 37~38_37.여)털실 중목_4켤레 / 버건디 투투삭스'</li><li>'NY코튼클럽 5켤레 국산 극세사 기모 롱 무압박 임산부 수면양말 W8001-여성-카멜5족 GSSHOP_'</li></ul> |
| 5.0 | <ul><li>'[한국금거래소] 순금 카네이션 배지 1.875g 부모님 추석 명절 생신 생일 기념일 기념 축하 감사선물 주식회사 한국금거래소디지털에셋'</li><li>'[한국금거래소]한국금거래소 순금 용 37.5g [순금24K] 롯데아이몰'</li><li>'한국금거래소 실버바 1kg(1000g) 주식회사 한국금거래소디지털에셋'</li></ul> |
| 10.0 | <ul><li>'캠퍼 브루투스 트렉 첼시 앵클부츠 346335 EU 39 주식회사 수비르글로벌커머스(SUBIR Global Commerce)'</li><li>'슈콤마보니 워커 부츠 DG3CW22519BLK 블랙_250 롯데쇼핑(주) 프리미엄아울렛 타임빌라스'</li><li>'말랑 쿠키 거실화 실내화 거실슬리퍼 실내슬리퍼 LWS 그레이265mm 생활공작소365'</li></ul> |
| 6.0 | <ul><li>'BOXY 박시 워치와인더 BWS-S / BWS-F 1구 아답터1개로 쌓아서 사용가능 BWS-S(DG)아답터미포함 와치닷컴'</li><li>'지샥 GA-2100 2110 지얄오크 베젤 밴드 일체형 용두 메탈 우레탄밴드 커스텀 옵션5:실버+블랙베젤_1.일반버클_화이트 방울방울'</li><li>'스타샵 카시오 MRW-200H-2B2 남성 손목시계 c57 선택19. AW-49H-1B 스타샵'</li></ul> |
| 3.0 | <ul><li>'남자 멜빵 2 5CM 남성 및 여성 서스펜더 클립 사이드 홀스터 스타일 탄성 백 서스펜더 05 밝은 빨간색 헬로우스토어'</li><li>'멜빵 소형멜빵 용 멜빵 어린이멜빵 멜빵 맬빵 MinSellAmount 모루모루'</li><li>'[닥스 액세서리] [23FW] DBBE3F097BK 여성벨트DD Symbol 블랙 DD메탈릭 골드 버클 소 XXX '</li></ul> |
| 12.0 | <ul><li>'미니 토시 사무용 광목 자수 팔토시 레드로즈 다솜이네'</li><li>'백화점 여성 남성 천연 양가죽 장갑 스마트폰 터치 털 손가락 겨울 방한 가죽 커플 장갑 2.여성용/스웨이드/차콜 힐렉스'</li><li>'[선물포장] 울 캐시미어혼방 핑거홀 장갑 JAGV2F310G2,JAGV2F311W2,JAGV2F312E2,JAGV2F313/질스튜어트 그린 롯데쇼핑(주)'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9386 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_item_ac")
# Run inference
preds = model("실리콘 동전 지갑 심플 캐릭터 [on] 블랙캣(동전지갑) 비150")
```
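The returned prediction is one of the 17 numeric labels listed under `Model Labels`. Since this model uses a scikit-learn `LogisticRegression` head, per-class probabilities are also available:

```python
probas = model.predict_proba(["실리콘 동전 지갑 심플 캐릭터 [on] 블랙캣(동전지갑) 비150"])
```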
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.2537 | 30 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 450 |
| 1.0 | 650 |
| 2.0 | 650 |
| 3.0 | 150 |
| 4.0 | 300 |
| 5.0 | 120 |
| 6.0 | 224 |
| 7.0 | 350 |
| 8.0 | 100 |
| 9.0 | 467 |
| 10.0 | 500 |
| 11.0 | 600 |
| 12.0 | 150 |
| 13.0 | 450 |
| 14.0 | 400 |
| 15.0 | 1000 |
| 16.0 | 250 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0009 | 1 | 0.407 | - |
| 0.0469 | 50 | 0.3772 | - |
| 0.0939 | 100 | 0.3062 | - |
| 0.1408 | 150 | 0.2861 | - |
| 0.1878 | 200 | 0.2513 | - |
| 0.2347 | 250 | 0.2284 | - |
| 0.2817 | 300 | 0.1952 | - |
| 0.3286 | 350 | 0.149 | - |
| 0.3756 | 400 | 0.1154 | - |
| 0.4225 | 450 | 0.1042 | - |
| 0.4695 | 500 | 0.0802 | - |
| 0.5164 | 550 | 0.0765 | - |
| 0.5634 | 600 | 0.0767 | - |
| 0.6103 | 650 | 0.0475 | - |
| 0.6573 | 700 | 0.0535 | - |
| 0.7042 | 750 | 0.0293 | - |
| 0.7512 | 800 | 0.0388 | - |
| 0.7981 | 850 | 0.0156 | - |
| 0.8451 | 900 | 0.0348 | - |
| 0.8920 | 950 | 0.0241 | - |
| 0.9390 | 1000 | 0.023 | - |
| 0.9859 | 1050 | 0.0166 | - |
| 1.0329 | 1100 | 0.0124 | - |
| 1.0798 | 1150 | 0.0139 | - |
| 1.1268 | 1200 | 0.0122 | - |
| 1.1737 | 1250 | 0.0111 | - |
| 1.2207 | 1300 | 0.0062 | - |
| 1.2676 | 1350 | 0.0106 | - |
| 1.3146 | 1400 | 0.0112 | - |
| 1.3615 | 1450 | 0.0137 | - |
| 1.4085 | 1500 | 0.0154 | - |
| 1.4554 | 1550 | 0.0185 | - |
| 1.5023 | 1600 | 0.0248 | - |
| 1.5493 | 1650 | 0.0128 | - |
| 1.5962 | 1700 | 0.018 | - |
| 1.6432 | 1750 | 0.0013 | - |
| 1.6901 | 1800 | 0.0151 | - |
| 1.7371 | 1850 | 0.0208 | - |
| 1.7840 | 1900 | 0.0076 | - |
| 1.8310 | 1950 | 0.0138 | - |
| 1.8779 | 2000 | 0.0133 | - |
| 1.9249 | 2050 | 0.0131 | - |
| 1.9718 | 2100 | 0.0123 | - |
| 2.0188 | 2150 | 0.0165 | - |
| 2.0657 | 2200 | 0.0084 | - |
| 2.1127 | 2250 | 0.0062 | - |
| 2.1596 | 2300 | 0.0068 | - |
| 2.2066 | 2350 | 0.0023 | - |
| 2.2535 | 2400 | 0.006 | - |
| 2.3005 | 2450 | 0.0048 | - |
| 2.3474 | 2500 | 0.0016 | - |
| 2.3944 | 2550 | 0.0046 | - |
| 2.4413 | 2600 | 0.001 | - |
| 2.4883 | 2650 | 0.0022 | - |
| 2.5352 | 2700 | 0.0014 | - |
| 2.5822 | 2750 | 0.0004 | - |
| 2.6291 | 2800 | 0.0002 | - |
| 2.6761 | 2850 | 0.0004 | - |
| 2.7230 | 2900 | 0.0016 | - |
| 2.7700 | 2950 | 0.0018 | - |
| 2.8169 | 3000 | 0.0004 | - |
| 2.8638 | 3050 | 0.0001 | - |
| 2.9108 | 3100 | 0.0002 | - |
| 2.9577 | 3150 | 0.0018 | - |
| 3.0047 | 3200 | 0.0019 | - |
| 3.0516 | 3250 | 0.0001 | - |
| 3.0986 | 3300 | 0.0011 | - |
| 3.1455 | 3350 | 0.0001 | - |
| 3.1925 | 3400 | 0.0001 | - |
| 3.2394 | 3450 | 0.0002 | - |
| 3.2864 | 3500 | 0.0007 | - |
| 3.3333 | 3550 | 0.0001 | - |
| 3.3803 | 3600 | 0.0002 | - |
| 3.4272 | 3650 | 0.0001 | - |
| 3.4742 | 3700 | 0.0011 | - |
| 3.5211 | 3750 | 0.0013 | - |
| 3.5681 | 3800 | 0.0014 | - |
| 3.6150 | 3850 | 0.0001 | - |
| 3.6620 | 3900 | 0.0001 | - |
| 3.7089 | 3950 | 0.0002 | - |
| 3.7559 | 4000 | 0.0001 | - |
| 3.8028 | 4050 | 0.0014 | - |
| 3.8498 | 4100 | 0.0002 | - |
| 3.8967 | 4150 | 0.0001 | - |
| 3.9437 | 4200 | 0.0 | - |
| 3.9906 | 4250 | 0.0 | - |
| 4.0376 | 4300 | 0.0001 | - |
| 4.0845 | 4350 | 0.0002 | - |
| 4.1315 | 4400 | 0.0 | - |
| 4.1784 | 4450 | 0.0001 | - |
| 4.2254 | 4500 | 0.0 | - |
| 4.2723 | 4550 | 0.0 | - |
| 4.3192 | 4600 | 0.0003 | - |
| 4.3662 | 4650 | 0.0007 | - |
| 4.4131 | 4700 | 0.0 | - |
| 4.4601 | 4750 | 0.0001 | - |
| 4.5070 | 4800 | 0.0011 | - |
| 4.5540 | 4850 | 0.0003 | - |
| 4.6009 | 4900 | 0.0005 | - |
| 4.6479 | 4950 | 0.0001 | - |
| 4.6948 | 5000 | 0.0001 | - |
| 4.7418 | 5050 | 0.0001 | - |
| 4.7887 | 5100 | 0.0001 | - |
| 4.8357 | 5150 | 0.0 | - |
| 4.8826 | 5200 | 0.0 | - |
| 4.9296 | 5250 | 0.0 | - |
| 4.9765 | 5300 | 0.0001 | - |
| 5.0235 | 5350 | 0.0 | - |
| 5.0704 | 5400 | 0.0 | - |
| 5.1174 | 5450 | 0.0 | - |
| 5.1643 | 5500 | 0.0 | - |
| 5.2113 | 5550 | 0.0 | - |
| 5.2582 | 5600 | 0.0001 | - |
| 5.3052 | 5650 | 0.0 | - |
| 5.3521 | 5700 | 0.0 | - |
| 5.3991 | 5750 | 0.0 | - |
| 5.4460 | 5800 | 0.0 | - |
| 5.4930 | 5850 | 0.0 | - |
| 5.5399 | 5900 | 0.0 | - |
| 5.5869 | 5950 | 0.0 | - |
| 5.6338 | 6000 | 0.0 | - |
| 5.6808 | 6050 | 0.0 | - |
| 5.7277 | 6100 | 0.0 | - |
| 5.7746 | 6150 | 0.0 | - |
| 5.8216 | 6200 | 0.0 | - |
| 5.8685 | 6250 | 0.0 | - |
| 5.9155 | 6300 | 0.0001 | - |
| 5.9624 | 6350 | 0.0004 | - |
| 6.0094 | 6400 | 0.0007 | - |
| 6.0563 | 6450 | 0.0 | - |
| 6.1033 | 6500 | 0.0001 | - |
| 6.1502 | 6550 | 0.0 | - |
| 6.1972 | 6600 | 0.0001 | - |
| 6.2441 | 6650 | 0.0 | - |
| 6.2911 | 6700 | 0.0 | - |
| 6.3380 | 6750 | 0.0009 | - |
| 6.3850 | 6800 | 0.0 | - |
| 6.4319 | 6850 | 0.0001 | - |
| 6.4789 | 6900 | 0.0 | - |
| 6.5258 | 6950 | 0.0001 | - |
| 6.5728 | 7000 | 0.0 | - |
| 6.6197 | 7050 | 0.0 | - |
| 6.6667 | 7100 | 0.0 | - |
| 6.7136 | 7150 | 0.0 | - |
| 6.7606 | 7200 | 0.0001 | - |
| 6.8075 | 7250 | 0.0 | - |
| 6.8545 | 7300 | 0.0 | - |
| 6.9014 | 7350 | 0.0 | - |
| 6.9484 | 7400 | 0.0 | - |
| 6.9953 | 7450 | 0.0 | - |
| 7.0423 | 7500 | 0.0 | - |
| 7.0892 | 7550 | 0.0 | - |
| 7.1362 | 7600 | 0.0 | - |
| 7.1831 | 7650 | 0.0 | - |
| 7.2300 | 7700 | 0.0 | - |
| 7.2770 | 7750 | 0.0001 | - |
| 7.3239 | 7800 | 0.0 | - |
| 7.3709 | 7850 | 0.0 | - |
| 7.4178 | 7900 | 0.0 | - |
| 7.4648 | 7950 | 0.0 | - |
| 7.5117 | 8000 | 0.0 | - |
| 7.5587 | 8050 | 0.0 | - |
| 7.6056 | 8100 | 0.0 | - |
| 7.6526 | 8150 | 0.0024 | - |
| 7.6995 | 8200 | 0.0 | - |
| 7.7465 | 8250 | 0.0 | - |
| 7.7934 | 8300 | 0.0 | - |
| 7.8404 | 8350 | 0.0 | - |
| 7.8873 | 8400 | 0.0 | - |
| 7.9343 | 8450 | 0.0 | - |
| 7.9812 | 8500 | 0.0 | - |
| 8.0282 | 8550 | 0.0 | - |
| 8.0751 | 8600 | 0.0 | - |
| 8.1221 | 8650 | 0.0 | - |
| 8.1690 | 8700 | 0.0 | - |
| 8.2160 | 8750 | 0.0 | - |
| 8.2629 | 8800 | 0.0 | - |
| 8.3099 | 8850 | 0.0 | - |
| 8.3568 | 8900 | 0.0 | - |
| 8.4038 | 8950 | 0.0 | - |
| 8.4507 | 9000 | 0.0 | - |
| 8.4977 | 9050 | 0.0 | - |
| 8.5446 | 9100 | 0.0 | - |
| 8.5915 | 9150 | 0.0 | - |
| 8.6385 | 9200 | 0.0002 | - |
| 8.6854 | 9250 | 0.0003 | - |
| 8.7324 | 9300 | 0.0005 | - |
| 8.7793 | 9350 | 0.0001 | - |
| 8.8263 | 9400 | 0.0001 | - |
| 8.8732 | 9450 | 0.0001 | - |
| 8.9202 | 9500 | 0.0 | - |
| 8.9671 | 9550 | 0.0 | - |
| 9.0141 | 9600 | 0.0001 | - |
| 9.0610 | 9650 | 0.0001 | - |
| 9.1080 | 9700 | 0.0 | - |
| 9.1549 | 9750 | 0.0 | - |
| 9.2019 | 9800 | 0.0001 | - |
| 9.2488 | 9850 | 0.0 | - |
| 9.2958 | 9900 | 0.0 | - |
| 9.3427 | 9950 | 0.0 | - |
| 9.3897 | 10000 | 0.0 | - |
| 9.4366 | 10050 | 0.0 | - |
| 9.4836 | 10100 | 0.0 | - |
| 9.5305 | 10150 | 0.0 | - |
| 9.5775 | 10200 | 0.0 | - |
| 9.6244 | 10250 | 0.0 | - |
| 9.6714 | 10300 | 0.0 | - |
| 9.7183 | 10350 | 0.0 | - |
| 9.7653 | 10400 | 0.0 | - |
| 9.8122 | 10450 | 0.0 | - |
| 9.8592 | 10500 | 0.0016 | - |
| 9.9061 | 10550 | 0.0 | - |
| 9.9531 | 10600 | 0.0 | - |
| 10.0 | 10650 | 0.0 | - |
| 10.0469 | 10700 | 0.0003 | - |
| 10.0939 | 10750 | 0.0 | - |
| 10.1408 | 10800 | 0.0 | - |
| 10.1878 | 10850 | 0.0 | - |
| 10.2347 | 10900 | 0.0 | - |
| 10.2817 | 10950 | 0.0 | - |
| 10.3286 | 11000 | 0.0 | - |
| 10.3756 | 11050 | 0.0 | - |
| 10.4225 | 11100 | 0.0 | - |
| 10.4695 | 11150 | 0.0 | - |
| 10.5164 | 11200 | 0.0 | - |
| 10.5634 | 11250 | 0.0 | - |
| 10.6103 | 11300 | 0.0 | - |
| 10.6573 | 11350 | 0.0 | - |
| 10.7042 | 11400 | 0.0 | - |
| 10.7512 | 11450 | 0.0 | - |
| 10.7981 | 11500 | 0.0 | - |
| 10.8451 | 11550 | 0.0 | - |
| 10.8920 | 11600 | 0.0 | - |
| 10.9390 | 11650 | 0.0 | - |
| 10.9859 | 11700 | 0.0 | - |
| 11.0329 | 11750 | 0.0 | - |
| 11.0798 | 11800 | 0.0 | - |
| 11.1268 | 11850 | 0.0 | - |
| 11.1737 | 11900 | 0.0 | - |
| 11.2207 | 11950 | 0.0 | - |
| 11.2676 | 12000 | 0.0 | - |
| 11.3146 | 12050 | 0.0 | - |
| 11.3615 | 12100 | 0.0 | - |
| 11.4085 | 12150 | 0.0 | - |
| 11.4554 | 12200 | 0.0 | - |
| 11.5023 | 12250 | 0.0015 | - |
| 11.5493 | 12300 | 0.0 | - |
| 11.5962 | 12350 | 0.0 | - |
| 11.6432 | 12400 | 0.0 | - |
| 11.6901 | 12450 | 0.0 | - |
| 11.7371 | 12500 | 0.0 | - |
| 11.7840 | 12550 | 0.0002 | - |
| 11.8310 | 12600 | 0.0 | - |
| 11.8779 | 12650 | 0.0 | - |
| 11.9249 | 12700 | 0.0 | - |
| 11.9718 | 12750 | 0.0001 | - |
| 12.0188 | 12800 | 0.0 | - |
| 12.0657 | 12850 | 0.0 | - |
| 12.1127 | 12900 | 0.0 | - |
| 12.1596 | 12950 | 0.0001 | - |
| 12.2066 | 13000 | 0.0001 | - |
| 12.2535 | 13050 | 0.0 | - |
| 12.3005 | 13100 | 0.0 | - |
| 12.3474 | 13150 | 0.0001 | - |
| 12.3944 | 13200 | 0.0 | - |
| 12.4413 | 13250 | 0.0 | - |
| 12.4883 | 13300 | 0.0 | - |
| 12.5352 | 13350 | 0.0 | - |
| 12.5822 | 13400 | 0.0 | - |
| 12.6291 | 13450 | 0.0 | - |
| 12.6761 | 13500 | 0.0 | - |
| 12.7230 | 13550 | 0.0 | - |
| 12.7700 | 13600 | 0.0 | - |
| 12.8169 | 13650 | 0.0 | - |
| 12.8638 | 13700 | 0.0 | - |
| 12.9108 | 13750 | 0.0 | - |
| 12.9577 | 13800 | 0.0 | - |
| 13.0047 | 13850 | 0.0 | - |
| 13.0516 | 13900 | 0.0 | - |
| 13.0986 | 13950 | 0.0 | - |
| 13.1455 | 14000 | 0.0 | - |
| 13.1925 | 14050 | 0.0 | - |
| 13.2394 | 14100 | 0.0 | - |
| 13.2864 | 14150 | 0.0 | - |
| 13.3333 | 14200 | 0.0 | - |
| 13.3803 | 14250 | 0.0 | - |
| 13.4272 | 14300 | 0.0 | - |
| 13.4742 | 14350 | 0.0 | - |
| 13.5211 | 14400 | 0.0 | - |
| 13.5681 | 14450 | 0.0 | - |
| 13.6150 | 14500 | 0.0 | - |
| 13.6620 | 14550 | 0.0 | - |
| 13.7089 | 14600 | 0.0 | - |
| 13.7559 | 14650 | 0.0 | - |
| 13.8028 | 14700 | 0.0 | - |
| 13.8498 | 14750 | 0.0 | - |
| 13.8967 | 14800 | 0.0 | - |
| 13.9437 | 14850 | 0.0 | - |
| 13.9906 | 14900 | 0.0 | - |
| 14.0376 | 14950 | 0.0 | - |
| 14.0845 | 15000 | 0.0 | - |
| 14.1315 | 15050 | 0.0 | - |
| 14.1784 | 15100 | 0.0001 | - |
| 14.2254 | 15150 | 0.0 | - |
| 14.2723 | 15200 | 0.0 | - |
| 14.3192 | 15250 | 0.0 | - |
| 14.3662 | 15300 | 0.0 | - |
| 14.4131 | 15350 | 0.0 | - |
| 14.4601 | 15400 | 0.0 | - |
| 14.5070 | 15450 | 0.0 | - |
| 14.5540 | 15500 | 0.0 | - |
| 14.6009 | 15550 | 0.0 | - |
| 14.6479 | 15600 | 0.0 | - |
| 14.6948 | 15650 | 0.0 | - |
| 14.7418 | 15700 | 0.0 | - |
| 14.7887 | 15750 | 0.0 | - |
| 14.8357 | 15800 | 0.0 | - |
| 14.8826 | 15850 | 0.0 | - |
| 14.9296 | 15900 | 0.0 | - |
| 14.9765 | 15950 | 0.0 | - |
| 15.0235 | 16000 | 0.0 | - |
| 15.0704 | 16050 | 0.0 | - |
| 15.1174 | 16100 | 0.0 | - |
| 15.1643 | 16150 | 0.0 | - |
| 15.2113 | 16200 | 0.0 | - |
| 15.2582 | 16250 | 0.0 | - |
| 15.3052 | 16300 | 0.0 | - |
| 15.3521 | 16350 | 0.0 | - |
| 15.3991 | 16400 | 0.0 | - |
| 15.4460 | 16450 | 0.0 | - |
| 15.4930 | 16500 | 0.0 | - |
| 15.5399 | 16550 | 0.0 | - |
| 15.5869 | 16600 | 0.0 | - |
| 15.6338 | 16650 | 0.0 | - |
| 15.6808 | 16700 | 0.0 | - |
| 15.7277 | 16750 | 0.0 | - |
| 15.7746 | 16800 | 0.0 | - |
| 15.8216 | 16850 | 0.0 | - |
| 15.8685 | 16900 | 0.0 | - |
| 15.9155 | 16950 | 0.0 | - |
| 15.9624 | 17000 | 0.0 | - |
| 16.0094 | 17050 | 0.0 | - |
| 16.0563 | 17100 | 0.0 | - |
| 16.1033 | 17150 | 0.0 | - |
| 16.1502 | 17200 | 0.0 | - |
| 16.1972 | 17250 | 0.0 | - |
| 16.2441 | 17300 | 0.0 | - |
| 16.2911 | 17350 | 0.0 | - |
| 16.3380 | 17400 | 0.0 | - |
| 16.3850 | 17450 | 0.0 | - |
| 16.4319 | 17500 | 0.0 | - |
| 16.4789 | 17550 | 0.0 | - |
| 16.5258 | 17600 | 0.0 | - |
| 16.5728 | 17650 | 0.0 | - |
| 16.6197 | 17700 | 0.0 | - |
| 16.6667 | 17750 | 0.0 | - |
| 16.7136 | 17800 | 0.0 | - |
| 16.7606 | 17850 | 0.0 | - |
| 16.8075 | 17900 | 0.0 | - |
| 16.8545 | 17950 | 0.0 | - |
| 16.9014 | 18000 | 0.0 | - |
| 16.9484 | 18050 | 0.0 | - |
| 16.9953 | 18100 | 0.0 | - |
| 17.0423 | 18150 | 0.0 | - |
| 17.0892 | 18200 | 0.0 | - |
| 17.1362 | 18250 | 0.0 | - |
| 17.1831 | 18300 | 0.0 | - |
| 17.2300 | 18350 | 0.0 | - |
| 17.2770 | 18400 | 0.0 | - |
| 17.3239 | 18450 | 0.0 | - |
| 17.3709 | 18500 | 0.0 | - |
| 17.4178 | 18550 | 0.0 | - |
| 17.4648 | 18600 | 0.0 | - |
| 17.5117 | 18650 | 0.0 | - |
| 17.5587 | 18700 | 0.0 | - |
| 17.6056 | 18750 | 0.0 | - |
| 17.6526 | 18800 | 0.0 | - |
| 17.6995 | 18850 | 0.0 | - |
| 17.7465 | 18900 | 0.0 | - |
| 17.7934 | 18950 | 0.0 | - |
| 17.8404 | 19000 | 0.0 | - |
| 17.8873 | 19050 | 0.0 | - |
| 17.9343 | 19100 | 0.0 | - |
| 17.9812 | 19150 | 0.0 | - |
| 18.0282 | 19200 | 0.0 | - |
| 18.0751 | 19250 | 0.0 | - |
| 18.1221 | 19300 | 0.0 | - |
| 18.1690 | 19350 | 0.0 | - |
| 18.2160 | 19400 | 0.0 | - |
| 18.2629 | 19450 | 0.0 | - |
| 18.3099 | 19500 | 0.0 | - |
| 18.3568 | 19550 | 0.0 | - |
| 18.4038 | 19600 | 0.0 | - |
| 18.4507 | 19650 | 0.0 | - |
| 18.4977 | 19700 | 0.0 | - |
| 18.5446 | 19750 | 0.0 | - |
| 18.5915 | 19800 | 0.0 | - |
| 18.6385 | 19850 | 0.0 | - |
| 18.6854 | 19900 | 0.0 | - |
| 18.7324 | 19950 | 0.0 | - |
| 18.7793 | 20000 | 0.0 | - |
| 18.8263 | 20050 | 0.0 | - |
| 18.8732 | 20100 | 0.0 | - |
| 18.9202 | 20150 | 0.0 | - |
| 18.9671 | 20200 | 0.0 | - |
| 19.0141 | 20250 | 0.0 | - |
| 19.0610 | 20300 | 0.0 | - |
| 19.1080 | 20350 | 0.0 | - |
| 19.1549 | 20400 | 0.0 | - |
| 19.2019 | 20450 | 0.0 | - |
| 19.2488 | 20500 | 0.0 | - |
| 19.2958 | 20550 | 0.0 | - |
| 19.3427 | 20600 | 0.0 | - |
| 19.3897 | 20650 | 0.0 | - |
| 19.4366 | 20700 | 0.0 | - |
| 19.4836 | 20750 | 0.0 | - |
| 19.5305 | 20800 | 0.0 | - |
| 19.5775 | 20850 | 0.0 | - |
| 19.6244 | 20900 | 0.0 | - |
| 19.6714 | 20950 | 0.0 | - |
| 19.7183 | 21000 | 0.0 | - |
| 19.7653 | 21050 | 0.0 | - |
| 19.8122 | 21100 | 0.0 | - |
| 19.8592 | 21150 | 0.0 | - |
| 19.9061 | 21200 | 0.0 | - |
| 19.9531 | 21250 | 0.0 | - |
| 20.0 | 21300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SetFit with klue/roberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [klue/roberta-base](https://huggingface.co/klue/roberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 17 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2.0 | <ul><li>'남녀공용 멀티스카프 목토시 반다나 헤어밴드 두건 블랙 비오는밤'</li><li>'후드 모자 귀달이 겨울 털모자 동물 목돌이 03.브라운 뿔샵'</li><li>'햇빛 뒷목가리개 메쉬 통풍 선가드 자외선차단썬캡가드 그늘모자 쿨메쉬모자_그레이 에스더블유컴퍼니'</li></ul> |
| 9.0 | <ul><li>'[LAP](강남점)아델라 핸들 미니 크로스백 (AP7AB208) 제트블랙(ZB)_FREE 신세계백화점'</li><li>'파스텔슬링백 힙색 미니 크로스 숄더백 그린 김후철'</li><li>'[메트로시티]봉봉백 클러치백 미듐 M233MQ3852Z 에이케이에스앤디 (주) AK인터넷쇼핑몰'</li></ul> |
| 15.0 | <ul><li>'크리스마스 뱃지 브로치 배지 19종 세트 및 낱개 봉제 사슴 5 구매대행 이음'</li><li>'오드스튜디오 ODDSTUDIO 베이직 니트 체크 머플러 - 21COLOR 블랙 CS스페이스'</li><li>'넥케이프 넥커프스 페이크카라 레이어드카라 셔츠카라 1-카라-화이트 행복나라'</li></ul> |
| 13.0 | <ul><li>'펄 쥬얼리 보석함 여행용 포켓 미니 악세사리 보관함 케이스 C타입-베이비핑크 제일사'</li><li>'[갤러리아] [비앤비골드] 14K 촘촘볼 블루큐빅 도넛링 반지 SRS39135 14K 화이트골드_1호 한화갤러리아(주)'</li><li>'미니골드 김천점 14K 18K 트레버 커플링 남자 여자 금반지 RJUC4047 RJUC4048 베이직하고 심플한 디자인 여자_14K옐로우골드 미니골드 김천점'</li></ul> |
| 1.0 | <ul><li>'[베어파우](신세계강남점)(BEARPAW) 남성 털 슬리퍼 MARY MENS 블랙 K814001ND-M BLACK (K814001ND)_280 주식회사 에스에스지닷컴'</li><li>'노스페이스 뮬 슬립온 브이모션 - NS93P53A 블랙_290 롯데백화점2관'</li><li>'사무실 남자 슬리퍼 가죽 남성 빅 사이즈 48 47 사무용 신입생코디실내화 blue_38 리마106'</li></ul> |
| 7.0 | <ul><li>'부드러운 슈트리 신발주름방지 신발모양유지 신발지탱 225 245 mm 커피와 기저귀'</li><li>'[갓성비] 꿀조합 애니비츠 세트 캐릭터 신발 악세사리 포켓몬 스누피 커비편의점SET 애니팝'</li><li>'MSMAX Jazz Dance Shoes Split Sole Men Dancing Sneakers High Top Boots for Women Silver 10.5 M Silver_11 Narrow 디아트479'</li></ul> |
| 11.0 | <ul><li>'캐리어 수트케이스 양면 개방형 기내용 바퀴가방 화이트_26인치 피스온트레이드'</li><li>'클래시 패스 커버여권 포트월렛 포트파우치 파우치 여행지갑 포트 케이스 (01 레모니) 주식회사유마켓'</li><li>'클래시패스커버 (안티스키밍 여권케이스) (10블랙) JTEC'</li></ul> |
| 4.0 | <ul><li>'고급 안경집 선글라스집 휴대용 케이스 파우치 하드 보관함 블랙 다온마켓'</li><li>'고급 올 칼라 크리스탈 다중 비즈 안경 줄 마스크 걸이 상품선택_블랙(골드) 리미몰'</li><li>'아이업꽈배기인조가죽안경줄10p세트선글라스줄 마니또야'</li></ul> |
| 14.0 | <ul><li>'[갤러리아] [Prada]프라다 23FW 사피아노 반지갑 블랙 2MO004 QME F0002 2MO004 QME F0002 FREE 한화갤러리아(주)'</li><li>'닥스 액세서리 [OSCAR][오스카][제네시스 전용] 네이비 프리미엄 토고 수입 가죽 차키케이스 DBHO2F573N2 XXX 주식회사 LF'</li><li>'톰브라운 23SS 남성 페블그레인 머니클립 블랙 MAW025L 00198 001 ONE SIZE 주식회사 이지겟인터내셔널'</li></ul> |
| 0.0 | <ul><li>'[롯데백화점]닥스ACC [선물포장/쇼핑백동봉] [GRIDⅡ] 브라운 패턴배색 소가죽 클러치백 DBBA2F266W3 롯데백화점_'</li><li>'만다리나덕 토트백 PIETRO P4T05163 은하수몰'</li><li>'내셔널지오그래픽 N245ATO510 베이직 에코백 BLACK TNSC'</li></ul> |
| 16.0 | <ul><li>'올림머리 메탈프레임 반머리 꼬임 집게핀 114 유광스틸 7cm 이지 아트 프로덕션 (EG ART PRODUCTION)'</li><li>'꼬임 메탈프레임 반머리 올림머리 집게핀 114 무광로즈 7cm 네오몰'</li><li>'폼폼 방울털 장식 미니 머리끈 포인트 헤어끈 퍼플 1P 은강'</li></ul> |
| 8.0 | <ul><li>'기모 롱 오버 니삭스 겨울 스타킹 다리 워머 롱삭스 롱양말 무릎 니하이 브라운 린이팸'</li><li>'최대12켤레 남여 국산양말 장목/니트/균일가/신상/중목/발목/수면/학생 37~38_37.여)털실 중목_4켤레 / 버건디 투투삭스'</li><li>'NY코튼클럽 5켤레 국산 극세사 기모 롱 무압박 임산부 수면양말 W8001-여성-카멜5족 GSSHOP_'</li></ul> |
| 5.0 | <ul><li>'[한국금거래소] 순금 카네이션 배지 1.875g 부모님 추석 명절 생신 생일 기념일 기념 축하 감사선물 주식회사 한국금거래소디지털에셋'</li><li>'[한국금거래소]한국금거래소 순금 용 37.5g [순금24K] 롯데아이몰'</li><li>'한국금거래소 실버바 1kg(1000g) 주식회사 한국금거래소디지털에셋'</li></ul> |
| 10.0 | <ul><li>'캠퍼 브루투스 트렉 첼시 앵클부츠 346335 EU 39 주식회사 수비르글로벌커머스(SUBIR Global Commerce)'</li><li>'슈콤마보니 워커 부츠 DG3CW22519BLK 블랙_250 롯데쇼핑(주) 프리미엄아울렛 타임빌라스'</li><li>'말랑 쿠키 거실화 실내화 거실슬리퍼 실내슬리퍼 LWS 그레이265mm 생활공작소365'</li></ul> |
| 6.0 | <ul><li>'BOXY 박시 워치와인더 BWS-S / BWS-F 1구 아답터1개로 쌓아서 사용가능 BWS-S(DG)아답터미포함 와치닷컴'</li><li>'지샥 GA-2100 2110 지얄오크 베젤 밴드 일체형 용두 메탈 우레탄밴드 커스텀 옵션5:실버+블랙베젤_1.일반버클_화이트 방울방울'</li><li>'스타샵 카시오 MRW-200H-2B2 남성 손목시계 c57 선택19. AW-49H-1B 스타샵'</li></ul> |
| 3.0 | <ul><li>'남자 멜빵 2 5CM 남성 및 여성 서스펜더 클립 사이드 홀스터 스타일 탄성 백 서스펜더 05 밝은 빨간색 헬로우스토어'</li><li>'멜빵 소형멜빵 용 멜빵 어린이멜빵 멜빵 맬빵 MinSellAmount 모루모루'</li><li>'[닥스 액세서리] [23FW] DBBE3F097BK 여성벨트DD Symbol 블랙 DD메탈릭 골드 버클 소 XXX '</li></ul> |
| 12.0 | <ul><li>'미니 토시 사무용 광목 자수 팔토시 레드로즈 다솜이네'</li><li>'백화점 여성 남성 천연 양가죽 장갑 스마트폰 터치 털 손가락 겨울 방한 가죽 커플 장갑 2.여성용/스웨이드/차콜 힐렉스'</li><li>'[선물포장] 울 캐시미어혼방 핑거홀 장갑 JAGV2F310G2,JAGV2F311W2,JAGV2F312E2,JAGV2F313/질스튜어트 그린 롯데쇼핑(주)'</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.9386 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_item_ac")
# Run inference
preds = model("실리콘 동전 지갑 심플 캐릭터 [on] 블랙캣(동전지갑) 비150")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 10.2537 | 30 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 450 |
| 1.0 | 650 |
| 2.0 | 650 |
| 3.0 | 150 |
| 4.0 | 300 |
| 5.0 | 120 |
| 6.0 | 224 |
| 7.0 | 350 |
| 8.0 | 100 |
| 9.0 | 467 |
| 10.0 | 500 |
| 11.0 | 600 |
| 12.0 | 150 |
| 13.0 | 450 |
| 14.0 | 400 |
| 15.0 | 1000 |
| 16.0 | 250 |
### Training Hyperparameters
- batch_size: (512, 512)
- num_epochs: (20, 20)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
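The arguments above map directly onto SetFit's `TrainingArguments`. Below is a minimal training sketch with these settings; the tiny inline dataset and its labels are placeholder assumptions, not the actual training data:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder examples and labels for illustration only
train_dataset = Dataset.from_dict({
    "text": ["실리콘 동전 지갑 심플 캐릭터", "캐리어 수트케이스 양면 개방형"],
    "label": [0.0, 11.0],
})

model = SetFitModel.from_pretrained("klue/roberta-base")

args = TrainingArguments(
    batch_size=(512, 512),              # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    seed=42,
)                                       # loss defaults to CosineSimilarityLoss, as above

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```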
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-----:|:-------------:|:---------------:|
| 0.0009 | 1 | 0.407 | - |
| 0.0469 | 50 | 0.3772 | - |
| 0.0939 | 100 | 0.3062 | - |
| 0.1408 | 150 | 0.2861 | - |
| 0.1878 | 200 | 0.2513 | - |
| 0.2347 | 250 | 0.2284 | - |
| 0.2817 | 300 | 0.1952 | - |
| 0.3286 | 350 | 0.149 | - |
| 0.3756 | 400 | 0.1154 | - |
| 0.4225 | 450 | 0.1042 | - |
| 0.4695 | 500 | 0.0802 | - |
| 0.5164 | 550 | 0.0765 | - |
| 0.5634 | 600 | 0.0767 | - |
| 0.6103 | 650 | 0.0475 | - |
| 0.6573 | 700 | 0.0535 | - |
| 0.7042 | 750 | 0.0293 | - |
| 0.7512 | 800 | 0.0388 | - |
| 0.7981 | 850 | 0.0156 | - |
| 0.8451 | 900 | 0.0348 | - |
| 0.8920 | 950 | 0.0241 | - |
| 0.9390 | 1000 | 0.023 | - |
| 0.9859 | 1050 | 0.0166 | - |
| 1.0329 | 1100 | 0.0124 | - |
| 1.0798 | 1150 | 0.0139 | - |
| 1.1268 | 1200 | 0.0122 | - |
| 1.1737 | 1250 | 0.0111 | - |
| 1.2207 | 1300 | 0.0062 | - |
| 1.2676 | 1350 | 0.0106 | - |
| 1.3146 | 1400 | 0.0112 | - |
| 1.3615 | 1450 | 0.0137 | - |
| 1.4085 | 1500 | 0.0154 | - |
| 1.4554 | 1550 | 0.0185 | - |
| 1.5023 | 1600 | 0.0248 | - |
| 1.5493 | 1650 | 0.0128 | - |
| 1.5962 | 1700 | 0.018 | - |
| 1.6432 | 1750 | 0.0013 | - |
| 1.6901 | 1800 | 0.0151 | - |
| 1.7371 | 1850 | 0.0208 | - |
| 1.7840 | 1900 | 0.0076 | - |
| 1.8310 | 1950 | 0.0138 | - |
| 1.8779 | 2000 | 0.0133 | - |
| 1.9249 | 2050 | 0.0131 | - |
| 1.9718 | 2100 | 0.0123 | - |
| 2.0188 | 2150 | 0.0165 | - |
| 2.0657 | 2200 | 0.0084 | - |
| 2.1127 | 2250 | 0.0062 | - |
| 2.1596 | 2300 | 0.0068 | - |
| 2.2066 | 2350 | 0.0023 | - |
| 2.2535 | 2400 | 0.006 | - |
| 2.3005 | 2450 | 0.0048 | - |
| 2.3474 | 2500 | 0.0016 | - |
| 2.3944 | 2550 | 0.0046 | - |
| 2.4413 | 2600 | 0.001 | - |
| 2.4883 | 2650 | 0.0022 | - |
| 2.5352 | 2700 | 0.0014 | - |
| 2.5822 | 2750 | 0.0004 | - |
| 2.6291 | 2800 | 0.0002 | - |
| 2.6761 | 2850 | 0.0004 | - |
| 2.7230 | 2900 | 0.0016 | - |
| 2.7700 | 2950 | 0.0018 | - |
| 2.8169 | 3000 | 0.0004 | - |
| 2.8638 | 3050 | 0.0001 | - |
| 2.9108 | 3100 | 0.0002 | - |
| 2.9577 | 3150 | 0.0018 | - |
| 3.0047 | 3200 | 0.0019 | - |
| 3.0516 | 3250 | 0.0001 | - |
| 3.0986 | 3300 | 0.0011 | - |
| 3.1455 | 3350 | 0.0001 | - |
| 3.1925 | 3400 | 0.0001 | - |
| 3.2394 | 3450 | 0.0002 | - |
| 3.2864 | 3500 | 0.0007 | - |
| 3.3333 | 3550 | 0.0001 | - |
| 3.3803 | 3600 | 0.0002 | - |
| 3.4272 | 3650 | 0.0001 | - |
| 3.4742 | 3700 | 0.0011 | - |
| 3.5211 | 3750 | 0.0013 | - |
| 3.5681 | 3800 | 0.0014 | - |
| 3.6150 | 3850 | 0.0001 | - |
| 3.6620 | 3900 | 0.0001 | - |
| 3.7089 | 3950 | 0.0002 | - |
| 3.7559 | 4000 | 0.0001 | - |
| 3.8028 | 4050 | 0.0014 | - |
| 3.8498 | 4100 | 0.0002 | - |
| 3.8967 | 4150 | 0.0001 | - |
| 3.9437 | 4200 | 0.0 | - |
| 3.9906 | 4250 | 0.0 | - |
| 4.0376 | 4300 | 0.0001 | - |
| 4.0845 | 4350 | 0.0002 | - |
| 4.1315 | 4400 | 0.0 | - |
| 4.1784 | 4450 | 0.0001 | - |
| 4.2254 | 4500 | 0.0 | - |
| 4.2723 | 4550 | 0.0 | - |
| 4.3192 | 4600 | 0.0003 | - |
| 4.3662 | 4650 | 0.0007 | - |
| 4.4131 | 4700 | 0.0 | - |
| 4.4601 | 4750 | 0.0001 | - |
| 4.5070 | 4800 | 0.0011 | - |
| 4.5540 | 4850 | 0.0003 | - |
| 4.6009 | 4900 | 0.0005 | - |
| 4.6479 | 4950 | 0.0001 | - |
| 4.6948 | 5000 | 0.0001 | - |
| 4.7418 | 5050 | 0.0001 | - |
| 4.7887 | 5100 | 0.0001 | - |
| 4.8357 | 5150 | 0.0 | - |
| 4.8826 | 5200 | 0.0 | - |
| 4.9296 | 5250 | 0.0 | - |
| 4.9765 | 5300 | 0.0001 | - |
| 5.0235 | 5350 | 0.0 | - |
| 5.0704 | 5400 | 0.0 | - |
| 5.1174 | 5450 | 0.0 | - |
| 5.1643 | 5500 | 0.0 | - |
| 5.2113 | 5550 | 0.0 | - |
| 5.2582 | 5600 | 0.0001 | - |
| 5.3052 | 5650 | 0.0 | - |
| 5.3521 | 5700 | 0.0 | - |
| 5.3991 | 5750 | 0.0 | - |
| 5.4460 | 5800 | 0.0 | - |
| 5.4930 | 5850 | 0.0 | - |
| 5.5399 | 5900 | 0.0 | - |
| 5.5869 | 5950 | 0.0 | - |
| 5.6338 | 6000 | 0.0 | - |
| 5.6808 | 6050 | 0.0 | - |
| 5.7277 | 6100 | 0.0 | - |
| 5.7746 | 6150 | 0.0 | - |
| 5.8216 | 6200 | 0.0 | - |
| 5.8685 | 6250 | 0.0 | - |
| 5.9155 | 6300 | 0.0001 | - |
| 5.9624 | 6350 | 0.0004 | - |
| 6.0094 | 6400 | 0.0007 | - |
| 6.0563 | 6450 | 0.0 | - |
| 6.1033 | 6500 | 0.0001 | - |
| 6.1502 | 6550 | 0.0 | - |
| 6.1972 | 6600 | 0.0001 | - |
| 6.2441 | 6650 | 0.0 | - |
| 6.2911 | 6700 | 0.0 | - |
| 6.3380 | 6750 | 0.0009 | - |
| 6.3850 | 6800 | 0.0 | - |
| 6.4319 | 6850 | 0.0001 | - |
| 6.4789 | 6900 | 0.0 | - |
| 6.5258 | 6950 | 0.0001 | - |
| 6.5728 | 7000 | 0.0 | - |
| 6.6197 | 7050 | 0.0 | - |
| 6.6667 | 7100 | 0.0 | - |
| 6.7136 | 7150 | 0.0 | - |
| 6.7606 | 7200 | 0.0001 | - |
| 6.8075 | 7250 | 0.0 | - |
| 6.8545 | 7300 | 0.0 | - |
| 6.9014 | 7350 | 0.0 | - |
| 6.9484 | 7400 | 0.0 | - |
| 6.9953 | 7450 | 0.0 | - |
| 7.0423 | 7500 | 0.0 | - |
| 7.0892 | 7550 | 0.0 | - |
| 7.1362 | 7600 | 0.0 | - |
| 7.1831 | 7650 | 0.0 | - |
| 7.2300 | 7700 | 0.0 | - |
| 7.2770 | 7750 | 0.0001 | - |
| 7.3239 | 7800 | 0.0 | - |
| 7.3709 | 7850 | 0.0 | - |
| 7.4178 | 7900 | 0.0 | - |
| 7.4648 | 7950 | 0.0 | - |
| 7.5117 | 8000 | 0.0 | - |
| 7.5587 | 8050 | 0.0 | - |
| 7.6056 | 8100 | 0.0 | - |
| 7.6526 | 8150 | 0.0024 | - |
| 7.6995 | 8200 | 0.0 | - |
| 7.7465 | 8250 | 0.0 | - |
| 7.7934 | 8300 | 0.0 | - |
| 7.8404 | 8350 | 0.0 | - |
| 7.8873 | 8400 | 0.0 | - |
| 7.9343 | 8450 | 0.0 | - |
| 7.9812 | 8500 | 0.0 | - |
| 8.0282 | 8550 | 0.0 | - |
| 8.0751 | 8600 | 0.0 | - |
| 8.1221 | 8650 | 0.0 | - |
| 8.1690 | 8700 | 0.0 | - |
| 8.2160 | 8750 | 0.0 | - |
| 8.2629 | 8800 | 0.0 | - |
| 8.3099 | 8850 | 0.0 | - |
| 8.3568 | 8900 | 0.0 | - |
| 8.4038 | 8950 | 0.0 | - |
| 8.4507 | 9000 | 0.0 | - |
| 8.4977 | 9050 | 0.0 | - |
| 8.5446 | 9100 | 0.0 | - |
| 8.5915 | 9150 | 0.0 | - |
| 8.6385 | 9200 | 0.0002 | - |
| 8.6854 | 9250 | 0.0003 | - |
| 8.7324 | 9300 | 0.0005 | - |
| 8.7793 | 9350 | 0.0001 | - |
| 8.8263 | 9400 | 0.0001 | - |
| 8.8732 | 9450 | 0.0001 | - |
| 8.9202 | 9500 | 0.0 | - |
| 8.9671 | 9550 | 0.0 | - |
| 9.0141 | 9600 | 0.0001 | - |
| 9.0610 | 9650 | 0.0001 | - |
| 9.1080 | 9700 | 0.0 | - |
| 9.1549 | 9750 | 0.0 | - |
| 9.2019 | 9800 | 0.0001 | - |
| 9.2488 | 9850 | 0.0 | - |
| 9.2958 | 9900 | 0.0 | - |
| 9.3427 | 9950 | 0.0 | - |
| 9.3897 | 10000 | 0.0 | - |
| 9.4366 | 10050 | 0.0 | - |
| 9.4836 | 10100 | 0.0 | - |
| 9.5305 | 10150 | 0.0 | - |
| 9.5775 | 10200 | 0.0 | - |
| 9.6244 | 10250 | 0.0 | - |
| 9.6714 | 10300 | 0.0 | - |
| 9.7183 | 10350 | 0.0 | - |
| 9.7653 | 10400 | 0.0 | - |
| 9.8122 | 10450 | 0.0 | - |
| 9.8592 | 10500 | 0.0016 | - |
| 9.9061 | 10550 | 0.0 | - |
| 9.9531 | 10600 | 0.0 | - |
| 10.0 | 10650 | 0.0 | - |
| 10.0469 | 10700 | 0.0003 | - |
| 10.0939 | 10750 | 0.0 | - |
| 10.1408 | 10800 | 0.0 | - |
| 10.1878 | 10850 | 0.0 | - |
| 10.2347 | 10900 | 0.0 | - |
| 10.2817 | 10950 | 0.0 | - |
| 10.3286 | 11000 | 0.0 | - |
| 10.3756 | 11050 | 0.0 | - |
| 10.4225 | 11100 | 0.0 | - |
| 10.4695 | 11150 | 0.0 | - |
| 10.5164 | 11200 | 0.0 | - |
| 10.5634 | 11250 | 0.0 | - |
| 10.6103 | 11300 | 0.0 | - |
| 10.6573 | 11350 | 0.0 | - |
| 10.7042 | 11400 | 0.0 | - |
| 10.7512 | 11450 | 0.0 | - |
| 10.7981 | 11500 | 0.0 | - |
| 10.8451 | 11550 | 0.0 | - |
| 10.8920 | 11600 | 0.0 | - |
| 10.9390 | 11650 | 0.0 | - |
| 10.9859 | 11700 | 0.0 | - |
| 11.0329 | 11750 | 0.0 | - |
| 11.0798 | 11800 | 0.0 | - |
| 11.1268 | 11850 | 0.0 | - |
| 11.1737 | 11900 | 0.0 | - |
| 11.2207 | 11950 | 0.0 | - |
| 11.2676 | 12000 | 0.0 | - |
| 11.3146 | 12050 | 0.0 | - |
| 11.3615 | 12100 | 0.0 | - |
| 11.4085 | 12150 | 0.0 | - |
| 11.4554 | 12200 | 0.0 | - |
| 11.5023 | 12250 | 0.0015 | - |
| 11.5493 | 12300 | 0.0 | - |
| 11.5962 | 12350 | 0.0 | - |
| 11.6432 | 12400 | 0.0 | - |
| 11.6901 | 12450 | 0.0 | - |
| 11.7371 | 12500 | 0.0 | - |
| 11.7840 | 12550 | 0.0002 | - |
| 11.8310 | 12600 | 0.0 | - |
| 11.8779 | 12650 | 0.0 | - |
| 11.9249 | 12700 | 0.0 | - |
| 11.9718 | 12750 | 0.0001 | - |
| 12.0188 | 12800 | 0.0 | - |
| 12.0657 | 12850 | 0.0 | - |
| 12.1127 | 12900 | 0.0 | - |
| 12.1596 | 12950 | 0.0001 | - |
| 12.2066 | 13000 | 0.0001 | - |
| 12.2535 | 13050 | 0.0 | - |
| 12.3005 | 13100 | 0.0 | - |
| 12.3474 | 13150 | 0.0001 | - |
| 12.3944 | 13200 | 0.0 | - |
| 12.4413 | 13250 | 0.0 | - |
| 12.4883 | 13300 | 0.0 | - |
| 12.5352 | 13350 | 0.0 | - |
| 12.5822 | 13400 | 0.0 | - |
| 12.6291 | 13450 | 0.0 | - |
| 12.6761 | 13500 | 0.0 | - |
| 12.7230 | 13550 | 0.0 | - |
| 12.7700 | 13600 | 0.0 | - |
| 12.8169 | 13650 | 0.0 | - |
| 12.8638 | 13700 | 0.0 | - |
| 12.9108 | 13750 | 0.0 | - |
| 12.9577 | 13800 | 0.0 | - |
| 13.0047 | 13850 | 0.0 | - |
| 13.0516 | 13900 | 0.0 | - |
| 13.0986 | 13950 | 0.0 | - |
| 13.1455 | 14000 | 0.0 | - |
| 13.1925 | 14050 | 0.0 | - |
| 13.2394 | 14100 | 0.0 | - |
| 13.2864 | 14150 | 0.0 | - |
| 13.3333 | 14200 | 0.0 | - |
| 13.3803 | 14250 | 0.0 | - |
| 13.4272 | 14300 | 0.0 | - |
| 13.4742 | 14350 | 0.0 | - |
| 13.5211 | 14400 | 0.0 | - |
| 13.5681 | 14450 | 0.0 | - |
| 13.6150 | 14500 | 0.0 | - |
| 13.6620 | 14550 | 0.0 | - |
| 13.7089 | 14600 | 0.0 | - |
| 13.7559 | 14650 | 0.0 | - |
| 13.8028 | 14700 | 0.0 | - |
| 13.8498 | 14750 | 0.0 | - |
| 13.8967 | 14800 | 0.0 | - |
| 13.9437 | 14850 | 0.0 | - |
| 13.9906 | 14900 | 0.0 | - |
| 14.0376 | 14950 | 0.0 | - |
| 14.0845 | 15000 | 0.0 | - |
| 14.1315 | 15050 | 0.0 | - |
| 14.1784 | 15100 | 0.0001 | - |
| 14.2254 | 15150 | 0.0 | - |
| 14.2723 | 15200 | 0.0 | - |
| 14.3192 | 15250 | 0.0 | - |
| 14.3662 | 15300 | 0.0 | - |
| 14.4131 | 15350 | 0.0 | - |
| 14.4601 | 15400 | 0.0 | - |
| 14.5070 | 15450 | 0.0 | - |
| 14.5540 | 15500 | 0.0 | - |
| 14.6009 | 15550 | 0.0 | - |
| 14.6479 | 15600 | 0.0 | - |
| 14.6948 | 15650 | 0.0 | - |
| 14.7418 | 15700 | 0.0 | - |
| 14.7887 | 15750 | 0.0 | - |
| 14.8357 | 15800 | 0.0 | - |
| 14.8826 | 15850 | 0.0 | - |
| 14.9296 | 15900 | 0.0 | - |
| 14.9765 | 15950 | 0.0 | - |
| 15.0235 | 16000 | 0.0 | - |
| 15.0704 | 16050 | 0.0 | - |
| 15.1174 | 16100 | 0.0 | - |
| 15.1643 | 16150 | 0.0 | - |
| 15.2113 | 16200 | 0.0 | - |
| 15.2582 | 16250 | 0.0 | - |
| 15.3052 | 16300 | 0.0 | - |
| 15.3521 | 16350 | 0.0 | - |
| 15.3991 | 16400 | 0.0 | - |
| 15.4460 | 16450 | 0.0 | - |
| 15.4930 | 16500 | 0.0 | - |
| 15.5399 | 16550 | 0.0 | - |
| 15.5869 | 16600 | 0.0 | - |
| 15.6338 | 16650 | 0.0 | - |
| 15.6808 | 16700 | 0.0 | - |
| 15.7277 | 16750 | 0.0 | - |
| 15.7746 | 16800 | 0.0 | - |
| 15.8216 | 16850 | 0.0 | - |
| 15.8685 | 16900 | 0.0 | - |
| 15.9155 | 16950 | 0.0 | - |
| 15.9624 | 17000 | 0.0 | - |
| 16.0094 | 17050 | 0.0 | - |
| 16.0563 | 17100 | 0.0 | - |
| 16.1033 | 17150 | 0.0 | - |
| 16.1502 | 17200 | 0.0 | - |
| 16.1972 | 17250 | 0.0 | - |
| 16.2441 | 17300 | 0.0 | - |
| 16.2911 | 17350 | 0.0 | - |
| 16.3380 | 17400 | 0.0 | - |
| 16.3850 | 17450 | 0.0 | - |
| 16.4319 | 17500 | 0.0 | - |
| 16.4789 | 17550 | 0.0 | - |
| 16.5258 | 17600 | 0.0 | - |
| 16.5728 | 17650 | 0.0 | - |
| 16.6197 | 17700 | 0.0 | - |
| 16.6667 | 17750 | 0.0 | - |
| 16.7136 | 17800 | 0.0 | - |
| 16.7606 | 17850 | 0.0 | - |
| 16.8075 | 17900 | 0.0 | - |
| 16.8545 | 17950 | 0.0 | - |
| 16.9014 | 18000 | 0.0 | - |
| 16.9484 | 18050 | 0.0 | - |
| 16.9953 | 18100 | 0.0 | - |
| 17.0423 | 18150 | 0.0 | - |
| 17.0892 | 18200 | 0.0 | - |
| 17.1362 | 18250 | 0.0 | - |
| 17.1831 | 18300 | 0.0 | - |
| 17.2300 | 18350 | 0.0 | - |
| 17.2770 | 18400 | 0.0 | - |
| 17.3239 | 18450 | 0.0 | - |
| 17.3709 | 18500 | 0.0 | - |
| 17.4178 | 18550 | 0.0 | - |
| 17.4648 | 18600 | 0.0 | - |
| 17.5117 | 18650 | 0.0 | - |
| 17.5587 | 18700 | 0.0 | - |
| 17.6056 | 18750 | 0.0 | - |
| 17.6526 | 18800 | 0.0 | - |
| 17.6995 | 18850 | 0.0 | - |
| 17.7465 | 18900 | 0.0 | - |
| 17.7934 | 18950 | 0.0 | - |
| 17.8404 | 19000 | 0.0 | - |
| 17.8873 | 19050 | 0.0 | - |
| 17.9343 | 19100 | 0.0 | - |
| 17.9812 | 19150 | 0.0 | - |
| 18.0282 | 19200 | 0.0 | - |
| 18.0751 | 19250 | 0.0 | - |
| 18.1221 | 19300 | 0.0 | - |
| 18.1690 | 19350 | 0.0 | - |
| 18.2160 | 19400 | 0.0 | - |
| 18.2629 | 19450 | 0.0 | - |
| 18.3099 | 19500 | 0.0 | - |
| 18.3568 | 19550 | 0.0 | - |
| 18.4038 | 19600 | 0.0 | - |
| 18.4507 | 19650 | 0.0 | - |
| 18.4977 | 19700 | 0.0 | - |
| 18.5446 | 19750 | 0.0 | - |
| 18.5915 | 19800 | 0.0 | - |
| 18.6385 | 19850 | 0.0 | - |
| 18.6854 | 19900 | 0.0 | - |
| 18.7324 | 19950 | 0.0 | - |
| 18.7793 | 20000 | 0.0 | - |
| 18.8263 | 20050 | 0.0 | - |
| 18.8732 | 20100 | 0.0 | - |
| 18.9202 | 20150 | 0.0 | - |
| 18.9671 | 20200 | 0.0 | - |
| 19.0141 | 20250 | 0.0 | - |
| 19.0610 | 20300 | 0.0 | - |
| 19.1080 | 20350 | 0.0 | - |
| 19.1549 | 20400 | 0.0 | - |
| 19.2019 | 20450 | 0.0 | - |
| 19.2488 | 20500 | 0.0 | - |
| 19.2958 | 20550 | 0.0 | - |
| 19.3427 | 20600 | 0.0 | - |
| 19.3897 | 20650 | 0.0 | - |
| 19.4366 | 20700 | 0.0 | - |
| 19.4836 | 20750 | 0.0 | - |
| 19.5305 | 20800 | 0.0 | - |
| 19.5775 | 20850 | 0.0 | - |
| 19.6244 | 20900 | 0.0 | - |
| 19.6714 | 20950 | 0.0 | - |
| 19.7183 | 21000 | 0.0 | - |
| 19.7653 | 21050 | 0.0 | - |
| 19.8122 | 21100 | 0.0 | - |
| 19.8592 | 21150 | 0.0 | - |
| 19.9061 | 21200 | 0.0 | - |
| 19.9531 | 21250 | 0.0 | - |
| 20.0 | 21300 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.1.1
- Transformers: 4.46.1
- PyTorch: 2.4.0+cu121
- Datasets: 2.20.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "klue/roberta-base", "library_name": "setfit", "metrics": ["metric"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "[자체제작] 14k 콩사다리 체인 반지 핑크_D style(1푼 굵기)_10호 (주)제이디아이인터내셔널"}, {"text": "실리콘 동전 지갑 심플 캐릭터 [on] 블랙캣(동전지갑) 비150"}, {"text": "체크 남자 베레모 아빠 모자 헌팅캡 패션 빵모자 외출 베이지체크 (4JS) 포제이스"}, {"text": "TIMBERLAND 남성 앨번 6인치 워터프루프 워커부츠_TB0A1OIZC641 070(250) 비츠컴퍼니"}, {"text": "라인댄스화 헬스화 스포츠 여성 재즈화 댄스화 볼룸 모던 미드힐 37_블랙 스트레이트 3.5cm/굽(메쉬) 사랑옵다"}], "inference": true, "model-index": [{"name": "SetFit with klue/roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "metric", "value": 0.9385943021823656, "name": "Metric"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,057 |
barto17/language-detection-fine-tuned-on-xlm-roberta-base
|
barto17
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:common_language",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-25T10:34:42Z |
2023-09-25T11:46:18+00:00
| 13 | 0 |
---
base_model: xlm-roberta-base
datasets:
- common_language
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: common_language
type: common_language
config: full
split: test
args: full
metrics:
- type: accuracy
value: 0.9778634915311085
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-fine-tuned-on-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1527
- Accuracy: 0.9779
## Model description
More information needed
## Intended uses & limitations
More information needed
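While the original card leaves this section open, a minimal inference sketch with the standard `transformers` pipeline is shown below; the predicted label names come from this checkpoint's `id2label` config:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="barto17/language-detection-fine-tuned-on-xlm-roberta-base",
)

# The returned label depends on the checkpoint's id2label mapping
print(detector("Bonjour, comment allez-vous ?"))
```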
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2047 | 1.0 | 22194 | 0.1527 | 0.9779 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
|
{"base_model": "xlm-roberta-base", "datasets": ["common_language"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "language-detection-fine-tuned-on-xlm-roberta-base", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "common_language", "type": "common_language", "config": "full", "split": "test", "args": "full"}, "metrics": [{"type": "accuracy", "value": 0.9778634915311085, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,058 |
fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-598568
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"Translation",
"Editing",
"French",
"Scientific",
"Medical",
"en",
"dataset:fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-598568",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-24T00:08:45Z |
2024-05-24T00:09:39+00:00
| 10 | 0 |
---
datasets:
- fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-598568
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Translation
- Editing
- French
- Scientific
- Medical
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
service search for translation and editing
## How to Use
This model produces dense text embeddings that can be integrated into your NLP pipeline for tasks such as semantic search, sentence similarity, and clustering. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-598568',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| null |
Non_BioNLP
|
|
{"datasets": ["fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-598568", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Translation", "Editing", "French", "Scientific", "Medical"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | 45,059 |
IsmatS/xlm-roberta-az-ner
|
IsmatS
|
token-classification
|
[
"safetensors",
"xlm-roberta",
"token-classification",
"ner",
"roberta",
"multilingual",
"az",
"dataset:LocalDoc/azerbaijani-ner-dataset",
"license:mit",
"model-index",
"region:us"
] | 2024-11-03T19:04:13Z |
2024-11-12T09:02:23+00:00
| 11 | 0 |
---
datasets:
- LocalDoc/azerbaijani-ner-dataset
language:
- az
license: mit
metrics:
- precision
- recall
- f1
tags:
- token-classification
- ner
- roberta
- multilingual
model-index:
- name: XLM-RoBERTa Azerbaijani NER Model
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: Azerbaijani NER Dataset
type: LocalDoc/azerbaijani-ner-dataset
metrics:
- type: precision
value: 0.76439
name: Precision
- type: recall
value: 0.74046
name: Recall
- type: f1
value: 0.752235
name: F1
---
# XLM-RoBERTa Azerbaijani NER Model
[](https://huggingface.co/IsmatS/xlm-roberta-az-ner)
This model is a fine-tuned version of **XLM-RoBERTa** for Named Entity Recognition (NER) in the Azerbaijani language. It recognizes several entity types commonly used in Azerbaijani text, providing high accuracy on tasks requiring entity extraction, such as personal names, locations, organizations, and dates.
## Model Details
- **Base Model**: `xlm-roberta-base`
- **Fine-tuned on**: [Azerbaijani Named Entity Recognition Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset)
- **Task**: Named Entity Recognition (NER)
- **Language**: Azerbaijani (az)
- **Dataset**: Custom Azerbaijani NER dataset with entity tags such as `PERSON`, `LOCATION`, `ORGANISATION`, `DATE`, etc.
### Data Source
The model was trained on the [Azerbaijani NER Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset), which provides annotated data with 25 distinct entity types specifically for the Azerbaijani language. This dataset is an invaluable resource for improving NLP tasks in Azerbaijani, including entity recognition and language understanding.
### Entity Types
The model recognizes the following entities:
- **PERSON**: Names of people
- **LOCATION**: Geographical locations
- **ORGANISATION**: Companies, institutions
- **DATE**: Dates and periods
- **MONEY**: Monetary values
- **TIME**: Time expressions
- **GPE**: Countries, cities, states
- **FACILITY**: Buildings, landmarks, etc.
- **EVENT**: Events and occurrences
- **...and more**
For the full list of entities, please refer to the dataset description.
## Performance Metrics
### Epoch-wise Performance
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 |
|-------|---------------|-----------------|-----------|--------|--------|
| 1 | 0.323100 | 0.275503 | 0.775799 | 0.694886 | 0.733117 |
| 2 | 0.272500 | 0.262481 | 0.739266 | 0.739900 | 0.739583 |
| 3 | 0.248600 | 0.252498 | 0.751478 | 0.741152 | 0.746280 |
| 4 | 0.236800 | 0.249968 | 0.754882 | 0.741449 | 0.748105 |
| 5 | 0.223800 | 0.252187 | 0.764390 | 0.740460 | 0.752235 |
| 6 | 0.218600 | 0.249887 | 0.756352 | 0.741646 | 0.748927 |
| 7 | 0.209700 | 0.250748 | 0.760696 | 0.739438 | 0.749916 |
### Detailed Classification Report (Epoch 7)
This table summarizes the precision, recall, and F1-score for each entity type, calculated on the validation dataset.
| Entity Type | Precision | Recall | F1-Score | Support |
|----------------|-----------|--------|----------|---------|
| ART | 0.54 | 0.20 | 0.29 | 1857 |
| DATE | 0.52 | 0.47 | 0.50 | 880 |
| EVENT | 0.69 | 0.35 | 0.47 | 96 |
| FACILITY | 0.69 | 0.69 | 0.69 | 1170 |
| LAW | 0.60 | 0.61 | 0.60 | 1122 |
| LOCATION | 0.77 | 0.82 | 0.80 | 9132 |
| MONEY | 0.61 | 0.57 | 0.59 | 540 |
| ORGANISATION | 0.69 | 0.68 | 0.69 | 544 |
| PERCENTAGE | 0.79 | 0.82 | 0.81 | 3591 |
| PERSON | 0.87 | 0.83 | 0.85 | 7037 |
| PRODUCT | 0.83 | 0.85 | 0.84 | 2808 |
| TIME | 0.55 | 0.51 | 0.53 | 1569 |
**Overall Metrics**:
- **Micro Average**: Precision = 0.76, Recall = 0.74, F1-Score = 0.75
- **Macro Average**: Precision = 0.68, Recall = 0.62, F1-Score = 0.64
- **Weighted Average**: Precision = 0.75, Recall = 0.74, F1-Score = 0.74
## Usage
You can use this model with the Hugging Face `transformers` library to perform NER on Azerbaijani text. Here’s an example:
### Installation
Make sure you have the `transformers` library installed:
```bash
pip install transformers
```
### Inference Example
Load the model and tokenizer, then run the NER pipeline on Azerbaijani text:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
# Load the model and tokenizer
model_name = "IsmatS/xlm-roberta-az-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Set up the NER pipeline
nlp_ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
# Example sentence
sentence = "Bakı şəhərində Azərbaycan Respublikasının prezidenti İlham Əliyev."
entities = nlp_ner(sentence)
# Display entities
for entity in entities:
print(f"Entity: {entity['word']}, Label: {entity['entity_group']}, Score: {entity['score']}")
```
### Sample Output
```json
[
{
"entity_group": "PERSON",
"score": 0.99,
"word": "İlham Əliyev",
"start": 34,
"end": 46
},
{
"entity_group": "LOCATION",
"score": 0.98,
"word": "Bakı",
"start": 0,
"end": 4
}
]
```
## Training Details
- **Training Data**: This model was fine-tuned on the [Azerbaijani NER Dataset](https://huggingface.co/datasets/LocalDoc/azerbaijani-ner-dataset) with 25 entity types.
- **Training Framework**: Hugging Face `transformers`
- **Optimizer**: AdamW
- **Epochs**: 8
- **Batch Size**: 64
- **Evaluation Metric**: F1-score
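The exact training script is not published with this card; the following is a hedged sketch of a 🤗 `Trainer` setup consistent with the settings above (the label count, output path, and dataset loading are assumptions):

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumption: BIO tagging over the dataset's 25 entity types (plus "O")
num_labels = 2 * 25 + 1

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=num_labels
)

args = TrainingArguments(
    output_dir="xlm-roberta-az-ner",   # hypothetical output path
    num_train_epochs=8,
    per_device_train_batch_size=64,
    metric_for_best_model="f1",
)

# Tokenized train/eval datasets are omitted here; see the dataset card for the raw data.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```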
## Limitations
- The model is trained specifically for the Azerbaijani language and may not generalize well to other languages.
- Certain rare entities may be misclassified due to limited training data in those categories.
## Citation
If you use this model in your research or application, please consider citing:
```
@model{ismats_az_ner_2024,
title={XLM-RoBERTa Azerbaijani NER Model},
author={Ismat Samadov},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/IsmatS/xlm-roberta-az-ner}
}
```
## License
This model is available under the [MIT License](https://opensource.org/licenses/MIT).
| null |
Non_BioNLP
|
|
{"datasets": ["LocalDoc/azerbaijani-ner-dataset"], "language": ["az"], "license": "mit", "metrics": ["precision", "recall", "f1"], "tags": ["token-classification", "ner", "roberta", "multilingual"], "model-index": [{"name": "XLM-RoBERTa Azerbaijani NER Model", "results": [{"task": {"type": "token-classification", "name": "Named Entity Recognition"}, "dataset": {"name": "Azerbaijani NER Dataset", "type": "LocalDoc/azerbaijani-ner-dataset"}, "metrics": [{"type": "precision", "value": 0.76439, "name": "Precision"}, {"type": "recall", "value": 0.74046, "name": "Recall"}, {"type": "f1", "value": 0.752235, "name": "F1"}]}]}]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,060 |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task257
|
Lots-of-LoRAs
| null |
[
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-01T14:03:29Z |
2025-01-01T14:03:34+00:00
| 0 | 0 |
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task257
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task257_spl_translation_ar_en
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
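No official snippet is provided. Assuming the repository follows the standard PEFT adapter layout, one plausible way to load and query the adapter is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task257"  # assumed PEFT layout

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# task257 is Arabic-to-English SPL translation; this prompt format is an assumption
inputs = tokenizer("[INST] Translate the following sentence to English. [/INST]",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```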
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task257_spl_translation_ar_en sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| null |
Non_BioNLP
|
|
{"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"}
|
task
|
[
"TRANSLATION"
] | 45,061 |
seongil-dn/gte-base-250k-answerableHN
|
seongil-dn
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:816532",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-19T13:57:24Z |
2024-11-19T13:58:12+00:00
| 8 | 0 |
---
base_model: Alibaba-NLP/gte-multilingual-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:816532
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 김택용이 스타크래프트2에서 첫 승리를 거둔 시기는 언제인가?
sentences:
- 2008년 11월 22일, 김택용은 클럽데이 온라인 MSL 결승전에서 허영무에게 선승을 내준 후 내리 3연승, 3:1 쾌승을 거두며 자신의
세 번째 MSL 우승을 달성하였다. 이를 통해 김택용은 프로토스 최초 개인리그 3회 우승자 및 역대 네 번째 금배지(MSL 3회 우승의 상징)
획득자가 되었다.
- '김택용은 새로 개막한 SK플래닛 프로리그 시즌2에서 스타크래프트: 브루드 워, 스타크래프트 Ⅱ를 병행해서 출전했다. 스타크래프트 브루드워
실력은 여전히 건재하지만, 스타Ⅱ에서는 스타크래프트 브루드워에서의 실력을 내지 못했다. 2012년 8월까지 택뱅리쌍 일원 중에서 김택용만 유일하게
스타Ⅱ에서의 승리를 하지 못했다. (0승 6패) 더군다나 2012년 봄까지만 해도 스타Ⅱ를 완전히 이해하지 못한듯한 플레이를 보이고 있었지만,
김택용은 2012년 여름이 되어서 스타Ⅱ를 서서히 실력을 쌓고 있었다. 기존의 스타크래프트 브루드워 스타리그가 스타크래프트 Ⅱ로 종목 전환한
뒤에 열린 첫 예선에 참가했으나, 스타Ⅱ의 부족한 실력을 여실히 들어내면서 1:2로 신예선수에게 지며 예선탈락하였다. 또한 GSL 선수들과
맞붙은 WCS 예선에서 프나틱의 장재호를 만나 무기력하게 0:2로 패배하여 탈락하였고, WCG 2012 예선에서도 백동준에게 0:2로 패배해
스타Ⅱ 종목으로 열린 경기에서 모두 패배하였다. 김택용은 스타2리그 뿐만아니라 스타1리그에서도 2010년 여름부터 3년째 스타리그에 이름을
올리지 못했다. 2012년 8월 12일 마침내 염보성을 상대로 어렵게 프로리그 스타2 종목에서 처음으로 승리를 거두었다(1승 6패). 결국
부진을 극복하지 못한 채 2012년 8월 케스파 랭킹 22위로까지 떨어지고 말았다. 하지만 그 후 2012년 8월 18일 김정우 마저 김택용의
스타2 승리 제물이 되었다. 엘리전까지 가는 혈전 끝에 스타Ⅱ에서 두각을 돋보이는 김정우를 격파하였고, 2012년 9월 2일 SK플래닛 스타
프로리그 시즌2 준플레이오프 2차전에서 다시 한번 염보성을 스타Ⅱ로 격파하면서 조금씩 기세를 올렸다.'
- 이소룡의 아버지는 유명한 광둥 경극 배우였으며, 아버지의 뒤를 이어 아주 어린 나이부터 영화를 접하게 되었고, 생후 3개월에 《금문녀》라는
영화로 데뷔하였다. 그가 18세가 되었을 때 이미 그는 스무 편의 영화에 출연한 상태였다.
- source_sentence: 페니스가 없는 여성의 심리적 반응은 어떠한가?
sentences:
- PIRA는 무장해제위원회(Decommingsioning Commission)에 의해 2005년 10월 무장투쟁을 포기했음을 확인받았으며, 우익
민주연합당(DUP)를 제외한 정당들도 이를 인정했다. 단, DUP에서는 증거가 없다며 무장투쟁포기사실을 인정하지 않았는데, 이는 DUP가 PIRA를
통해서 존재할 수 있기 때문이다. 그 실례로 북아일랜드의 수도 벨파스트에서 발행하는 일간지에선 PIRA 지도자 오닐이 무장투쟁을 포기하자,
민주연합당 지도자 이언 페이즐리(Ian Paisley)가 "가지마! 난 네가 필요해!"라고 말하는 내용의 풍자만화를 실었다.
- 성적 만족을 위해서라면 정신적인 사랑 없이 육체적 결합이 가능하다고 주장하였다. 정분이 없이도 성교가 가능하며 성관계는 일종의 오락 내지는
친밀행위에 지나지 않는다고 보았다. 그러나 이는 보수적인 유학자들 외에도 남성 지식인과 기독교계열의 반발을 불러왔다.
- 첫째는 "자신에게 페니스가 없는"것을 강하게 자각하고, 완전하게 페니스가 없는 존재로 받아들일 것이다. 이것은 열등감을 가진 여자를 만든다.
이 경우 무기력한 인간이 되어버린다고 한다. 둘째는 "자신은 페니스가 언젠가 나오고, 나는 남자"라고 믿고, 남성적인 성격을 갖출 경우이다.
세 번째는 성기라는 대상을 선망할 때 성기를 "페니스 → 아이"라는 상징으로 생각하고, 아이를 손에 넣는 길을 선택하는 경우이다.
- source_sentence: 신탁청은 언제 해체되었는가?
sentences:
- 신탁통치령(信託統治領, ) 혹은 신탁통치 지역(信託統治 地域)은 국제 연맹 위임통치령의 후신으로 제2차 세계 대전의 종전과 함께 국제 연맹이
유엔으로 대체됨에 따라 생겨났다.다음 11개 지역이 신탁통치령이었다. 1994년 10월 팔라우 독립을 마지막으로 신탁통치령은 소멸되었다.
- 히가시코게 역()은 일본 돗토리현 야즈 군 야즈 정에 위치한 서일본 여객철도 인비 선의 철도역이다. 단선 승강장 1면 1선의 구조를 갖춘 지상역이다.
- 신탁청은 1994년 12월 31일 해체될 때까지 15,102개의 기업체를 매각하고 4358개의 기업체를 재사유화했으며, 호텔, 식당, 약국
및 서점 등 소규모 사업장 25,030개를 사유화하고 46,552건의 부동산을 매각해 총 91,042건의 사유화를 기록했다. 이를 통해 666억
마르크의 매각수익을 올리고, 2111억 마르크의 투자와 150만 개의 일자리를 보장받았다. 초기에 추산되었던 기업가치가 약 6000억 마르크였던
것에 비하면 1/10 수준밖에 되지 않은 턱없이 낮은 매각수익이다. 사유화된 15,000여 기업 중 구동독인들에 의한 매입은― 주로 경영자기업인수(MBO)
혹은 종업원기업인수(EBO) ― 6%에 지나지않았고, 외국인 투자자 매입도 사유화 전체 기업 중 9% 정도로 나타났다.
- source_sentence: 석신산의 탈수 반응 생성물은 무엇인가요?
sentences:
- 석신산은 푸마르산으로 산화되거나 다이에틸석시네이트(diethylsuccinate, (CHCOCHCH))와 같은 다이에스터로 전환될 수 있다.
이러한 다이에틸 에스터(diethyl ester)는 스토브 축합(Stobbe condensation) 반응의 기질이다. 석신산의 탈수는 석신산
무수물을 생성한다. 석신산은 1,4-뷰테인다이올, 말레산 무수물, 석신이미드, 2-피롤리디논 및 테트라하이드로푸란을 유도하는데 사용될 수 있다.
- 2006년 ‘동의대 5·3 동지회’ 회원 등은 “동의대 사건 이후 경찰 조사 과정에서 고문 등 인권침해가 있었다”며 진실·화해를 위한 과거사
정리 위원회(이하 진실화해위)에 진실규명을 신청하였다. 이로 인해 진실화해위 소위원회는 “구타 등 인권침해가 있어 국가가 사과해야 한다”는
내용의 조사 결과 보고서를 심의·의결, 2010년 1월 19일에 열린 진실화해위 전원위원회에 상정했으나, “진실화해위는 ‘권위주의 통치’ 시기에
일어난 일을 조사 대상으로 삼는데, 동의대 사건은 노태우 정권 시절에 일어난 일이므로 조사 대상 자체가 되지 않는다”며 재적위원 과반수가 이
사건을 각하하기로 의결해 사건이 각하되었다. 다음날인 1월 20일에는 조사하지 않기로 했다고 밝힘으로서, 보고서 내용은 논의조차 되지 못한
것으로 전해졌다.
- 저산소 상태에서 석신산의 축적은 활성 산소 생산의 증가에 의한 허혈 재관류 손상(reperfusion injury)과 관련이 있다. 허혈(ischemia)
동안 푸마르산은 퓨린 뉴클레오타이드의 분해 및 말산-아스파르트산 셔틀의 역방향 반응의 일부분으로부터 형성된다. 과도한 푸마르산은 석신산 탈수소효소의
역반응을 통해 석신산의 생산 및 축적을 야기한다. 재관류시 석신산은 신속하게 산화되어 활성산소의 갑작스럽고 광범위한 생성을 초래한다. 활성산소는
세포자살 기작을 촉발시키거나 단백질, 세포막, 세포소기관 등에 산화적 손상을 유발한다. 동물 모델에서 허혈성 석신산 축적의 약리학적 억제는
허혈 재관류 손상을 개선시켰다. 현재 석신산 매개 활성산소 생성의 억제는 약물 치료의 표적으로 조사 중이다.
- source_sentence: 파올로 말디니는 어떤 선수인가요?
sentences:
- 체사레 말디니는 1954년부터 1966년까지 AC 밀란에서 뛰었고, 아들 파올로 말디니는 1985년부터 2009년까지 AC 밀란에서 뛰었으며,
손자 크리스티안 말디니가 2005년 10월 18일 AC 밀란 유스팀에 입단해 3부자가 모두 AC 밀란에서 활약하게 되었다.
- 파올로 체사레 말디니 (, 1968년 6월 26일, 이탈리아 밀라노 ~ )는 이탈리아의 은퇴한 축구 선수로, 포지션은 왼쪽 풀백과 센터백이었다.
그는 밀란의 전설적인 수비수 였을 뿐 아니라 역대 최고 수비수로도 불릴 만큼 대단한 선수였다. 현재 밀란의 스포츠 전략 & 개발 디렉터로 활동하고
있다.
- 조 주니어(Joe Junior, 본명은 Jose Maria Rodrigues, Jr.(조즈 마리아 로드리게스 주니어), 중문명(中文名)은 羅利期(뤄리지,
나이기), 1947년 7월 22일 ~ )는 영국 국적자 신분의 포르투갈계 영국인 남성으로 중화인민공화국 마카오 특별행정구에서 출생한 중화인민공화국
홍콩 특별행정구의 가수, 작사가, 영화배우, 텔레비전 연기자이다.
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 3f013725dc4dcee1e4ca72d1ce7e053c0dcee5ef -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/gte-base-250k-answerableHN")
# Run inference
sentences = [
'파올로 말디니는 어떤 선수인가요?',
'파올로 체사레 말디니 (, 1968년 6월 26일, 이탈리아 밀라노 ~ )는 이탈리아의 은퇴한 축구 선수로, 포지션은 왼쪽 풀백과 센터백이었다. 그는 밀란의 전설적인 수비수 였을 뿐 아니라 역대 최고 수비수로도 불릴 만큼 대단한 선수였다. 현재 밀란의 스포츠 전략 & 개발 디렉터로 활동하고 있다.',
'체사레 말디니는 1954년부터 1966년까지 AC 밀란에서 뛰었고, 아들 파올로 말디니는 1985년부터 2009년까지 AC 밀란에서 뛰었으며, 손자 크리스티안 말디니가 2005년 10월 18일 AC 밀란 유스팀에 입단해 3부자가 모두 AC 밀란에서 활약하게 되었다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 816,532 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.22 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 144.47 tokens</li><li>max: 621 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 169.92 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>별의 나이는 어떻게 측정하는가?</code> | <code>별의 나이는 토륨과 다른 성분들에 의해 만들어진 스펙트럼선들의 상대적인 힘을 측정하기 위해 초거대망원경의 자외선 분광기를 사용하여 추측한다. 선의 힘은 여러 가지 다양한 동위원소를 만들어내는데, 그것들로부터 핵우주 연대학을 사용하여 별의 나이를 짐작하는 것이다.</code> | <code>아들이 아버지보다 나이가 많을 수 없는 것처럼, 우주 안의 천체는 당연히 우주보다는 젊어야 하기 때문에, 여러 종류의 천체를 관측하여 그 나이를 추정하는 것으로 우주의 나이의 하한선을 얻을 수 있다. 가장 많이 쓰이는 방법 중 하나는 가장 온도가 낮은 백색왜성의 나이를 측정하는 것이다. 백색왜성은 태양과 비슷한 질량을 가진 별들이 죽으면서 만들어지는데, 백색왜성은 당시 가지고 있던 열 이외에 다른 에너지원이 없기 때문에 나이가 들면서 점점 식고, 어두워지게 된다. 따라서 가장 어둡고, 가장 온도가 낮은 백색왜성을 찾아서 그 냉각 나이를 측정하면 우주의 나이의 하한선을 얻을 수 있다.</code> |
| <code>별의 나이는 어떻게 측정하는가?</code> | <code>별의 나이는 토륨과 다른 성분들에 의해 만들어진 스펙트럼선들의 상대적인 힘을 측정하기 위해 초거대망원경의 자외선 분광기를 사용하여 추측한다. 선의 힘은 여러 가지 다양한 동위원소를 만들어내는데, 그것들로부터 핵우주 연대학을 사용하여 별의 나이를 짐작하는 것이다.</code> | <code>이 별의 물리적 수치는 태양과 비슷한데 분광형이 태양과 똑같은 G2V 여서 유사 태양으로 분류할 수 있다. 질량은 태양보다 9 퍼센트 무겁고 반지름은 태양보다 1 퍼센트 작다. 나이는 상대적으로 젊어 약 8천만 ~ 2억 년으로 보인다. 젊은 별인만큼 자전 속도는 3.5일에 한 번 돌 정도로 빠르며 자전축은 시선방향에 대해 21도(오차범위 +8, -9도) 기울어져 있다.</code> |
| <code>별의 나이는 어떻게 측정하는가?</code> | <code>별의 나이는 토륨과 다른 성분들에 의해 만들어진 스펙트럼선들의 상대적인 힘을 측정하기 위해 초거대망원경의 자외선 분광기를 사용하여 추측한다. 선의 힘은 여러 가지 다양한 동위원소를 만들어내는데, 그것들로부터 핵우주 연대학을 사용하여 별의 나이를 짐작하는 것이다.</code> | <code>여기서 "v"는 적도에서의 각속도이며 "t"는 별의 나이이다. 이 관계식은 1972년 앤드류 P. 스쿠마니치가 발견했으며 그의 이름을 따서 '스쿠마니치의 법칙'으로 불린다. 자이로연대학(Gyrochronology)은 태양의 속도를 기준점으로 한 항성의 자전 속도에 기초하여, 그 별의 나이를 결정하는 것이다.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
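Below is a minimal, illustrative sketch of how a loss with these parameters can be instantiated via the Sentence Transformers API. It is not the exact training script; `trust_remote_code=True` is assumed to be required because the base model uses a custom `NewModel` architecture.
```python
from sentence_transformers import SentenceTransformer, losses

# Sketch only: load the base model (custom architecture, hence trust_remote_code)
model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# MultipleNegativesRankingLoss with the parameters shown above; cosine similarity
# is the default similarity_fct. With (anchor, positive, negative) columns, all
# other in-batch positives and negatives act as additional negatives per anchor.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
```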
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 40
- `gradient_accumulation_steps`: 4
- `learning_rate`: 0.0001
- `adam_epsilon`: 1e-07
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
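As an illustration only, these non-default values map onto `SentenceTransformerTrainingArguments` (Sentence Transformers v3+) roughly as follows; `output_dir` is a hypothetical placeholder, not the path actually used.
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Sketch under the hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="gte-base-250k-answerableHN",   # hypothetical placeholder
    per_device_train_batch_size=40,            # with grad accum 4 -> effective batch of 160 per device
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    adam_epsilon=1e-7,
    num_train_epochs=1,
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES, # no duplicate samples within a batch
)
```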
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 40
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-07
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0008 | 1 | 0.4813 |
| 0.0016 | 2 | 0.5643 |
| 0.0024 | 3 | 0.4872 |
| 0.0031 | 4 | 0.3838 |
| 0.0039 | 5 | 0.4269 |
| 0.0047 | 6 | 0.434 |
| 0.0055 | 7 | 0.5153 |
| 0.0063 | 8 | 0.4429 |
| 0.0071 | 9 | 0.4464 |
| 0.0078 | 10 | 0.4187 |
| 0.0086 | 11 | 0.468 |
| 0.0094 | 12 | 0.402 |
| 0.0102 | 13 | 0.3745 |
| 0.0110 | 14 | 0.3623 |
| 0.0118 | 15 | 0.3358 |
| 0.0125 | 16 | 0.3927 |
| 0.0133 | 17 | 0.4539 |
| 0.0141 | 18 | 0.3177 |
| 0.0149 | 19 | 0.2902 |
| 0.0157 | 20 | 0.3559 |
| 0.0165 | 21 | 0.2641 |
| 0.0172 | 22 | 0.2968 |
| 0.0180 | 23 | 0.2008 |
| 0.0188 | 24 | 0.2742 |
| 0.0196 | 25 | 0.3565 |
| 0.0204 | 26 | 0.2706 |
| 0.0212 | 27 | 0.2544 |
| 0.0219 | 28 | 0.2721 |
| 0.0227 | 29 | 0.2795 |
| 0.0235 | 30 | 0.2647 |
| 0.0243 | 31 | 0.164 |
| 0.0251 | 32 | 0.2574 |
| 0.0259 | 33 | 0.1962 |
| 0.0267 | 34 | 0.2739 |
| 0.0274 | 35 | 0.2286 |
| 0.0282 | 36 | 0.2376 |
| 0.0290 | 37 | 0.3125 |
| 0.0298 | 38 | 0.2401 |
| 0.0306 | 39 | 0.1922 |
| 0.0314 | 40 | 0.2479 |
| 0.0321 | 41 | 0.1851 |
| 0.0329 | 42 | 0.1813 |
| 0.0337 | 43 | 0.2471 |
| 0.0345 | 44 | 0.2561 |
| 0.0353 | 45 | 0.2568 |
| 0.0361 | 46 | 0.3049 |
| 0.0368 | 47 | 0.2404 |
| 0.0376 | 48 | 0.231 |
| 0.0384 | 49 | 0.261 |
| 0.0392 | 50 | 0.2581 |
| 0.0400 | 51 | 0.2184 |
| 0.0408 | 52 | 0.2002 |
| 0.0415 | 53 | 0.2586 |
| 0.0423 | 54 | 0.1532 |
| 0.0431 | 55 | 0.2023 |
| 0.0439 | 56 | 0.2272 |
| 0.0447 | 57 | 0.2207 |
| 0.0455 | 58 | 0.2364 |
| 0.0462 | 59 | 0.2044 |
| 0.0470 | 60 | 0.2387 |
| 0.0478 | 61 | 0.2289 |
| 0.0486 | 62 | 0.1616 |
| 0.0494 | 63 | 0.1753 |
| 0.0502 | 64 | 0.1803 |
| 0.0510 | 65 | 0.2033 |
| 0.0517 | 66 | 0.2061 |
| 0.0525 | 67 | 0.2128 |
| 0.0533 | 68 | 0.2046 |
| 0.0541 | 69 | 0.1685 |
| 0.0549 | 70 | 0.1985 |
| 0.0557 | 71 | 0.1713 |
| 0.0564 | 72 | 0.21 |
| 0.0572 | 73 | 0.2085 |
| 0.0580 | 74 | 0.2144 |
| 0.0588 | 75 | 0.2099 |
| 0.0596 | 76 | 0.223 |
| 0.0604 | 77 | 0.2342 |
| 0.0611 | 78 | 0.2327 |
| 0.0619 | 79 | 0.1812 |
| 0.0627 | 80 | 0.2068 |
| 0.0635 | 81 | 0.1826 |
| 0.0643 | 82 | 0.1792 |
| 0.0651 | 83 | 0.2363 |
| 0.0658 | 84 | 0.1842 |
| 0.0666 | 85 | 0.1673 |
| 0.0674 | 86 | 0.2068 |
| 0.0682 | 87 | 0.2386 |
| 0.0690 | 88 | 0.1905 |
| 0.0698 | 89 | 0.22 |
| 0.0705 | 90 | 0.2351 |
| 0.0713 | 91 | 0.2444 |
| 0.0721 | 92 | 0.1984 |
| 0.0729 | 93 | 0.1823 |
| 0.0737 | 94 | 0.201 |
| 0.0745 | 95 | 0.1548 |
| 0.0752 | 96 | 0.1824 |
| 0.0760 | 97 | 0.2315 |
| 0.0768 | 98 | 0.2042 |
| 0.0776 | 99 | 0.1579 |
| 0.0784 | 100 | 0.1906 |
| 0.0792 | 101 | 0.2058 |
| 0.0800 | 102 | 0.2094 |
| 0.0807 | 103 | 0.2149 |
| 0.0815 | 104 | 0.2138 |
| 0.0823 | 105 | 0.1932 |
| 0.0831 | 106 | 0.1874 |
| 0.0839 | 107 | 0.1945 |
| 0.0847 | 108 | 0.1705 |
| 0.0854 | 109 | 0.1832 |
| 0.0862 | 110 | 0.2075 |
| 0.0870 | 111 | 0.1586 |
| 0.0878 | 112 | 0.139 |
| 0.0886 | 113 | 0.1496 |
| 0.0894 | 114 | 0.1843 |
| 0.0901 | 115 | 0.2377 |
| 0.0909 | 116 | 0.1998 |
| 0.0917 | 117 | 0.1491 |
| 0.0925 | 118 | 0.1763 |
| 0.0933 | 119 | 0.128 |
| 0.0941 | 120 | 0.1595 |
| 0.0948 | 121 | 0.1816 |
| 0.0956 | 122 | 0.2252 |
| 0.0964 | 123 | 0.1829 |
| 0.0972 | 124 | 0.1505 |
| 0.0980 | 125 | 0.1726 |
| 0.0988 | 126 | 0.2009 |
| 0.0995 | 127 | 0.2219 |
| 0.1003 | 128 | 0.1384 |
| 0.1011 | 129 | 0.1243 |
| 0.1019 | 130 | 0.2139 |
| 0.1027 | 131 | 0.1677 |
| 0.1035 | 132 | 0.1957 |
| 0.1043 | 133 | 0.1683 |
| 0.1050 | 134 | 0.168 |
| 0.1058 | 135 | 0.2021 |
| 0.1066 | 136 | 0.2112 |
| 0.1074 | 137 | 0.2093 |
| 0.1082 | 138 | 0.2279 |
| 0.1090 | 139 | 0.2001 |
| 0.1097 | 140 | 0.179 |
| 0.1105 | 141 | 0.1954 |
| 0.1113 | 142 | 0.172 |
| 0.1121 | 143 | 0.1969 |
| 0.1129 | 144 | 0.1561 |
| 0.1137 | 145 | 0.1802 |
| 0.1144 | 146 | 0.1885 |
| 0.1152 | 147 | 0.1438 |
| 0.1160 | 148 | 0.1791 |
| 0.1168 | 149 | 0.1905 |
| 0.1176 | 150 | 0.2506 |
| 0.1184 | 151 | 0.2024 |
| 0.1191 | 152 | 0.2059 |
| 0.1199 | 153 | 0.2393 |
| 0.1207 | 154 | 0.1531 |
| 0.1215 | 155 | 0.1888 |
| 0.1223 | 156 | 0.1831 |
| 0.1231 | 157 | 0.1378 |
| 0.1238 | 158 | 0.1553 |
| 0.1246 | 159 | 0.2004 |
| 0.1254 | 160 | 0.2071 |
| 0.1262 | 161 | 0.1909 |
| 0.1270 | 162 | 0.1763 |
| 0.1278 | 163 | 0.1914 |
| 0.1286 | 164 | 0.1365 |
| 0.1293 | 165 | 0.2272 |
| 0.1301 | 166 | 0.1484 |
| 0.1309 | 167 | 0.2181 |
| 0.1317 | 168 | 0.2386 |
| 0.1325 | 169 | 0.2005 |
| 0.1333 | 170 | 0.1757 |
| 0.1340 | 171 | 0.1679 |
| 0.1348 | 172 | 0.1707 |
| 0.1356 | 173 | 0.1448 |
| 0.1364 | 174 | 0.1703 |
| 0.1372 | 175 | 0.1739 |
| 0.1380 | 176 | 0.1376 |
| 0.1387 | 177 | 0.1906 |
| 0.1395 | 178 | 0.2542 |
| 0.1403 | 179 | 0.1438 |
| 0.1411 | 180 | 0.1786 |
| 0.1419 | 181 | 0.1838 |
| 0.1427 | 182 | 0.1592 |
| 0.1434 | 183 | 0.1991 |
| 0.1442 | 184 | 0.1702 |
| 0.1450 | 185 | 0.1787 |
| 0.1458 | 186 | 0.1631 |
| 0.1466 | 187 | 0.2697 |
| 0.1474 | 188 | 0.1654 |
| 0.1481 | 189 | 0.2037 |
| 0.1489 | 190 | 0.1751 |
| 0.1497 | 191 | 0.212 |
| 0.1505 | 192 | 0.1531 |
| 0.1513 | 193 | 0.1802 |
| 0.1521 | 194 | 0.1421 |
| 0.1529 | 195 | 0.236 |
| 0.1536 | 196 | 0.1702 |
| 0.1544 | 197 | 0.1869 |
| 0.1552 | 198 | 0.1796 |
| 0.1560 | 199 | 0.1537 |
| 0.1568 | 200 | 0.1646 |
| 0.1576 | 201 | 0.1603 |
| 0.1583 | 202 | 0.1662 |
| 0.1591 | 203 | 0.1323 |
| 0.1599 | 204 | 0.1672 |
| 0.1607 | 205 | 0.2217 |
| 0.1615 | 206 | 0.144 |
| 0.1623 | 207 | 0.1889 |
| 0.1630 | 208 | 0.159 |
| 0.1638 | 209 | 0.1298 |
| 0.1646 | 210 | 0.1245 |
| 0.1654 | 211 | 0.1815 |
| 0.1662 | 212 | 0.1771 |
| 0.1670 | 213 | 0.1441 |
| 0.1677 | 214 | 0.1834 |
| 0.1685 | 215 | 0.1997 |
| 0.1693 | 216 | 0.203 |
| 0.1701 | 217 | 0.1986 |
| 0.1709 | 218 | 0.1965 |
| 0.1717 | 219 | 0.1682 |
| 0.1724 | 220 | 0.1485 |
| 0.1732 | 221 | 0.1531 |
| 0.1740 | 222 | 0.16 |
| 0.1748 | 223 | 0.1554 |
| 0.1756 | 224 | 0.1705 |
| 0.1764 | 225 | 0.1771 |
| 0.1772 | 226 | 0.1507 |
| 0.1779 | 227 | 0.1623 |
| 0.1787 | 228 | 0.1527 |
| 0.1795 | 229 | 0.1332 |
| 0.1803 | 230 | 0.1556 |
| 0.1811 | 231 | 0.1504 |
| 0.1819 | 232 | 0.1581 |
| 0.1826 | 233 | 0.15 |
| 0.1834 | 234 | 0.2012 |
| 0.1842 | 235 | 0.1587 |
| 0.1850 | 236 | 0.2141 |
| 0.1858 | 237 | 0.1431 |
| 0.1866 | 238 | 0.1092 |
| 0.1873 | 239 | 0.1688 |
| 0.1881 | 240 | 0.2185 |
| 0.1889 | 241 | 0.2071 |
| 0.1897 | 242 | 0.1575 |
| 0.1905 | 243 | 0.1251 |
| 0.1913 | 244 | 0.1692 |
| 0.1920 | 245 | 0.1746 |
| 0.1928 | 246 | 0.2024 |
| 0.1936 | 247 | 0.2074 |
| 0.1944 | 248 | 0.2422 |
| 0.1952 | 249 | 0.1994 |
| 0.1960 | 250 | 0.1672 |
| 0.1967 | 251 | 0.1474 |
| 0.1975 | 252 | 0.1888 |
| 0.1983 | 253 | 0.2173 |
| 0.1991 | 254 | 0.1448 |
| 0.1999 | 255 | 0.2403 |
| 0.2007 | 256 | 0.1652 |
| 0.2015 | 257 | 0.1929 |
| 0.2022 | 258 | 0.1272 |
| 0.2030 | 259 | 0.193 |
| 0.2038 | 260 | 0.1665 |
| 0.2046 | 261 | 0.1677 |
| 0.2054 | 262 | 0.1558 |
| 0.2062 | 263 | 0.1825 |
| 0.2069 | 264 | 0.1549 |
| 0.2077 | 265 | 0.199 |
| 0.2085 | 266 | 0.1495 |
| 0.2093 | 267 | 0.1478 |
| 0.2101 | 268 | 0.168 |
| 0.2109 | 269 | 0.1015 |
| 0.2116 | 270 | 0.1924 |
| 0.2124 | 271 | 0.1397 |
| 0.2132 | 272 | 0.1449 |
| 0.2140 | 273 | 0.1797 |
| 0.2148 | 274 | 0.1689 |
| 0.2156 | 275 | 0.1738 |
| 0.2163 | 276 | 0.1758 |
| 0.2171 | 277 | 0.1298 |
| 0.2179 | 278 | 0.1889 |
| 0.2187 | 279 | 0.1377 |
| 0.2195 | 280 | 0.1592 |
| 0.2203 | 281 | 0.1506 |
| 0.2210 | 282 | 0.1622 |
| 0.2218 | 283 | 0.1484 |
| 0.2226 | 284 | 0.1493 |
| 0.2234 | 285 | 0.1305 |
| 0.2242 | 286 | 0.1131 |
| 0.2250 | 287 | 0.1466 |
| 0.2257 | 288 | 0.1267 |
| 0.2265 | 289 | 0.1426 |
| 0.2273 | 290 | 0.1649 |
| 0.2281 | 291 | 0.1263 |
| 0.2289 | 292 | 0.2029 |
| 0.2297 | 293 | 0.1845 |
| 0.2305 | 294 | 0.1364 |
| 0.2312 | 295 | 0.1688 |
| 0.2320 | 296 | 0.2093 |
| 0.2328 | 297 | 0.1605 |
| 0.2336 | 298 | 0.1206 |
| 0.2344 | 299 | 0.2165 |
| 0.2352 | 300 | 0.2139 |
| 0.2359 | 301 | 0.1673 |
| 0.2367 | 302 | 0.1455 |
| 0.2375 | 303 | 0.1617 |
| 0.2383 | 304 | 0.1663 |
| 0.2391 | 305 | 0.1649 |
| 0.2399 | 306 | 0.1358 |
| 0.2406 | 307 | 0.1746 |
| 0.2414 | 308 | 0.1664 |
| 0.2422 | 309 | 0.1135 |
| 0.2430 | 310 | 0.1612 |
| 0.2438 | 311 | 0.1529 |
| 0.2446 | 312 | 0.1367 |
| 0.2453 | 313 | 0.1709 |
| 0.2461 | 314 | 0.1757 |
| 0.2469 | 315 | 0.1885 |
| 0.2477 | 316 | 0.1792 |
| 0.2485 | 317 | 0.1195 |
| 0.2493 | 318 | 0.1451 |
| 0.2500 | 319 | 0.1684 |
| 0.2508 | 320 | 0.1299 |
| 0.2516 | 321 | 0.1867 |
| 0.2524 | 322 | 0.1899 |
| 0.2532 | 323 | 0.1329 |
| 0.2540 | 324 | 0.1403 |
| 0.2548 | 325 | 0.1862 |
| 0.2555 | 326 | 0.1407 |
| 0.2563 | 327 | 0.1756 |
| 0.2571 | 328 | 0.1465 |
| 0.2579 | 329 | 0.1638 |
| 0.2587 | 330 | 0.1506 |
| 0.2595 | 331 | 0.1431 |
| 0.2602 | 332 | 0.1975 |
| 0.2610 | 333 | 0.1678 |
| 0.2618 | 334 | 0.1695 |
| 0.2626 | 335 | 0.1905 |
| 0.2634 | 336 | 0.1754 |
| 0.2642 | 337 | 0.145 |
| 0.2649 | 338 | 0.1787 |
| 0.2657 | 339 | 0.1464 |
| 0.2665 | 340 | 0.1598 |
| 0.2673 | 341 | 0.1159 |
| 0.2681 | 342 | 0.1573 |
| 0.2689 | 343 | 0.2009 |
| 0.2696 | 344 | 0.2046 |
| 0.2704 | 345 | 0.1523 |
| 0.2712 | 346 | 0.1293 |
| 0.2720 | 347 | 0.1614 |
| 0.2728 | 348 | 0.1538 |
| 0.2736 | 349 | 0.1418 |
| 0.2743 | 350 | 0.158 |
| 0.2751 | 351 | 0.1443 |
| 0.2759 | 352 | 0.1437 |
| 0.2767 | 353 | 0.1506 |
| 0.2775 | 354 | 0.1452 |
| 0.2783 | 355 | 0.1637 |
| 0.2791 | 356 | 0.1015 |
| 0.2798 | 357 | 0.1531 |
| 0.2806 | 358 | 0.162 |
| 0.2814 | 359 | 0.1166 |
| 0.2822 | 360 | 0.1968 |
| 0.2830 | 361 | 0.1828 |
| 0.2838 | 362 | 0.1281 |
| 0.2845 | 363 | 0.1738 |
| 0.2853 | 364 | 0.1785 |
| 0.2861 | 365 | 0.1475 |
| 0.2869 | 366 | 0.179 |
| 0.2877 | 367 | 0.1322 |
| 0.2885 | 368 | 0.234 |
| 0.2892 | 369 | 0.1465 |
| 0.2900 | 370 | 0.125 |
| 0.2908 | 371 | 0.1945 |
| 0.2916 | 372 | 0.1728 |
| 0.2924 | 373 | 0.1246 |
| 0.2932 | 374 | 0.1662 |
| 0.2939 | 375 | 0.1881 |
| 0.2947 | 376 | 0.1409 |
| 0.2955 | 377 | 0.188 |
| 0.2963 | 378 | 0.1482 |
| 0.2971 | 379 | 0.1451 |
| 0.2979 | 380 | 0.1562 |
| 0.2986 | 381 | 0.1606 |
| 0.2994 | 382 | 0.1437 |
| 0.3002 | 383 | 0.1271 |
| 0.3010 | 384 | 0.1796 |
| 0.3018 | 385 | 0.14 |
| 0.3026 | 386 | 0.1645 |
| 0.3034 | 387 | 0.1589 |
| 0.3041 | 388 | 0.1668 |
| 0.3049 | 389 | 0.1176 |
| 0.3057 | 390 | 0.1651 |
| 0.3065 | 391 | 0.1425 |
| 0.3073 | 392 | 0.194 |
| 0.3081 | 393 | 0.13 |
| 0.3088 | 394 | 0.1302 |
| 0.3096 | 395 | 0.1224 |
| 0.3104 | 396 | 0.1249 |
| 0.3112 | 397 | 0.1821 |
| 0.3120 | 398 | 0.1551 |
| 0.3128 | 399 | 0.1444 |
| 0.3135 | 400 | 0.1841 |
| 0.3143 | 401 | 0.1276 |
| 0.3151 | 402 | 0.1733 |
| 0.3159 | 403 | 0.1595 |
| 0.3167 | 404 | 0.2037 |
| 0.3175 | 405 | 0.1601 |
| 0.3182 | 406 | 0.1501 |
| 0.3190 | 407 | 0.1467 |
| 0.3198 | 408 | 0.1194 |
| 0.3206 | 409 | 0.1532 |
| 0.3214 | 410 | 0.1292 |
| 0.3222 | 411 | 0.1576 |
| 0.3229 | 412 | 0.1431 |
| 0.3237 | 413 | 0.151 |
| 0.3245 | 414 | 0.1024 |
| 0.3253 | 415 | 0.1696 |
| 0.3261 | 416 | 0.129 |
| 0.3269 | 417 | 0.1934 |
| 0.3277 | 418 | 0.2072 |
| 0.3284 | 419 | 0.1387 |
| 0.3292 | 420 | 0.146 |
| 0.3300 | 421 | 0.1325 |
| 0.3308 | 422 | 0.1555 |
| 0.3316 | 423 | 0.1281 |
| 0.3324 | 424 | 0.1869 |
| 0.3331 | 425 | 0.1802 |
| 0.3339 | 426 | 0.1774 |
| 0.3347 | 427 | 0.1495 |
| 0.3355 | 428 | 0.1022 |
| 0.3363 | 429 | 0.1546 |
| 0.3371 | 430 | 0.1512 |
| 0.3378 | 431 | 0.1734 |
| 0.3386 | 432 | 0.1285 |
| 0.3394 | 433 | 0.1562 |
| 0.3402 | 434 | 0.1437 |
| 0.3410 | 435 | 0.1485 |
| 0.3418 | 436 | 0.1443 |
| 0.3425 | 437 | 0.1304 |
| 0.3433 | 438 | 0.1479 |
| 0.3441 | 439 | 0.1544 |
| 0.3449 | 440 | 0.1947 |
| 0.3457 | 441 | 0.1685 |
| 0.3465 | 442 | 0.1715 |
| 0.3472 | 443 | 0.1269 |
| 0.3480 | 444 | 0.1739 |
| 0.3488 | 445 | 0.1798 |
| 0.3496 | 446 | 0.1329 |
| 0.3504 | 447 | 0.1737 |
| 0.3512 | 448 | 0.1197 |
| 0.3519 | 449 | 0.1326 |
| 0.3527 | 450 | 0.131 |
| 0.3535 | 451 | 0.1498 |
| 0.3543 | 452 | 0.1836 |
| 0.3551 | 453 | 0.115 |
| 0.3559 | 454 | 0.1766 |
| 0.3567 | 455 | 0.1289 |
| 0.3574 | 456 | 0.1359 |
| 0.3582 | 457 | 0.1245 |
| 0.3590 | 458 | 0.1793 |
| 0.3598 | 459 | 0.1615 |
| 0.3606 | 460 | 0.1122 |
| 0.3614 | 461 | 0.1767 |
| 0.3621 | 462 | 0.1464 |
| 0.3629 | 463 | 0.1377 |
| 0.3637 | 464 | 0.1341 |
| 0.3645 | 465 | 0.1511 |
| 0.3653 | 466 | 0.1444 |
| 0.3661 | 467 | 0.1407 |
| 0.3668 | 468 | 0.1602 |
| 0.3676 | 469 | 0.1352 |
| 0.3684 | 470 | 0.1203 |
| 0.3692 | 471 | 0.1367 |
| 0.3700 | 472 | 0.1554 |
| 0.3708 | 473 | 0.1006 |
| 0.3715 | 474 | 0.1499 |
| 0.3723 | 475 | 0.1324 |
| 0.3731 | 476 | 0.1654 |
| 0.3739 | 477 | 0.1509 |
| 0.3747 | 478 | 0.1237 |
| 0.3755 | 479 | 0.1298 |
| 0.3762 | 480 | 0.1403 |
| 0.3770 | 481 | 0.1314 |
| 0.3778 | 482 | 0.1704 |
| 0.3786 | 483 | 0.1285 |
| 0.3794 | 484 | 0.1896 |
| 0.3802 | 485 | 0.1358 |
| 0.3810 | 486 | 0.1065 |
| 0.3817 | 487 | 0.1382 |
| 0.3825 | 488 | 0.1372 |
| 0.3833 | 489 | 0.1215 |
| 0.3841 | 490 | 0.2131 |
| 0.3849 | 491 | 0.1512 |
| 0.3857 | 492 | 0.1323 |
| 0.3864 | 493 | 0.1398 |
| 0.3872 | 494 | 0.151 |
| 0.3880 | 495 | 0.1297 |
| 0.3888 | 496 | 0.1852 |
| 0.3896 | 497 | 0.1044 |
| 0.3904 | 498 | 0.1185 |
| 0.3911 | 499 | 0.1724 |
| 0.3919 | 500 | 0.097 |
| 0.3927 | 501 | 0.1486 |
| 0.3935 | 502 | 0.1124 |
| 0.3943 | 503 | 0.1264 |
| 0.3951 | 504 | 0.0993 |
| 0.3958 | 505 | 0.1369 |
| 0.3966 | 506 | 0.1587 |
| 0.3974 | 507 | 0.1455 |
| 0.3982 | 508 | 0.1236 |
| 0.3990 | 509 | 0.1547 |
| 0.3998 | 510 | 0.1286 |
| 0.4005 | 511 | 0.1257 |
| 0.4013 | 512 | 0.1452 |
| 0.4021 | 513 | 0.1595 |
| 0.4029 | 514 | 0.1479 |
| 0.4037 | 515 | 0.166 |
| 0.4045 | 516 | 0.1623 |
| 0.4053 | 517 | 0.136 |
| 0.4060 | 518 | 0.149 |
| 0.4068 | 519 | 0.1496 |
| 0.4076 | 520 | 0.1154 |
| 0.4084 | 521 | 0.1493 |
| 0.4092 | 522 | 0.113 |
| 0.4100 | 523 | 0.137 |
| 0.4107 | 524 | 0.2077 |
| 0.4115 | 525 | 0.112 |
| 0.4123 | 526 | 0.1491 |
| 0.4131 | 527 | 0.1608 |
| 0.4139 | 528 | 0.1446 |
| 0.4147 | 529 | 0.1188 |
| 0.4154 | 530 | 0.137 |
| 0.4162 | 531 | 0.1072 |
| 0.4170 | 532 | 0.088 |
| 0.4178 | 533 | 0.1182 |
| 0.4186 | 534 | 0.2556 |
| 0.4194 | 535 | 0.1907 |
| 0.4201 | 536 | 0.1156 |
| 0.4209 | 537 | 0.1676 |
| 0.4217 | 538 | 0.1236 |
| 0.4225 | 539 | 0.1009 |
| 0.4233 | 540 | 0.1567 |
| 0.4241 | 541 | 0.2222 |
| 0.4248 | 542 | 0.148 |
| 0.4256 | 543 | 0.1182 |
| 0.4264 | 544 | 0.1267 |
| 0.4272 | 545 | 0.127 |
| 0.4280 | 546 | 0.1372 |
| 0.4288 | 547 | 0.1299 |
| 0.4296 | 548 | 0.1711 |
| 0.4303 | 549 | 0.1608 |
| 0.4311 | 550 | 0.1278 |
| 0.4319 | 551 | 0.106 |
| 0.4327 | 552 | 0.1494 |
| 0.4335 | 553 | 0.1093 |
| 0.4343 | 554 | 0.1833 |
| 0.4350 | 555 | 0.1876 |
| 0.4358 | 556 | 0.1774 |
| 0.4366 | 557 | 0.1443 |
| 0.4374 | 558 | 0.1351 |
| 0.4382 | 559 | 0.1094 |
| 0.4390 | 560 | 0.1485 |
| 0.4397 | 561 | 0.1156 |
| 0.4405 | 562 | 0.1324 |
| 0.4413 | 563 | 0.1314 |
| 0.4421 | 564 | 0.1601 |
| 0.4429 | 565 | 0.1434 |
| 0.4437 | 566 | 0.1785 |
| 0.4444 | 567 | 0.1044 |
| 0.4452 | 568 | 0.1123 |
| 0.4460 | 569 | 0.1235 |
| 0.4468 | 570 | 0.1384 |
| 0.4476 | 571 | 0.1357 |
| 0.4484 | 572 | 0.1357 |
| 0.4491 | 573 | 0.1276 |
| 0.4499 | 574 | 0.1554 |
| 0.4507 | 575 | 0.1235 |
| 0.4515 | 576 | 0.1319 |
| 0.4523 | 577 | 0.1862 |
| 0.4531 | 578 | 0.1523 |
| 0.4539 | 579 | 0.1224 |
| 0.4546 | 580 | 0.1629 |
| 0.4554 | 581 | 0.1113 |
| 0.4562 | 582 | 0.1261 |
| 0.4570 | 583 | 0.1246 |
| 0.4578 | 584 | 0.1461 |
| 0.4586 | 585 | 0.1831 |
| 0.4593 | 586 | 0.138 |
| 0.4601 | 587 | 0.1206 |
| 0.4609 | 588 | 0.1269 |
| 0.4617 | 589 | 0.1512 |
| 0.4625 | 590 | 0.1131 |
| 0.4633 | 591 | 0.1206 |
| 0.4640 | 592 | 0.1555 |
| 0.4648 | 593 | 0.1404 |
| 0.4656 | 594 | 0.101 |
| 0.4664 | 595 | 0.0881 |
| 0.4672 | 596 | 0.1793 |
| 0.4680 | 597 | 0.0995 |
| 0.4687 | 598 | 0.1369 |
| 0.4695 | 599 | 0.141 |
| 0.4703 | 600 | 0.1494 |
| 0.4711 | 601 | 0.1824 |
| 0.4719 | 602 | 0.1671 |
| 0.4727 | 603 | 0.1805 |
| 0.4734 | 604 | 0.1475 |
| 0.4742 | 605 | 0.1128 |
| 0.4750 | 606 | 0.1748 |
| 0.4758 | 607 | 0.1564 |
| 0.4766 | 608 | 0.0922 |
| 0.4774 | 609 | 0.1008 |
| 0.4782 | 610 | 0.1324 |
| 0.4789 | 611 | 0.1022 |
| 0.4797 | 612 | 0.1604 |
| 0.4805 | 613 | 0.145 |
| 0.4813 | 614 | 0.1621 |
| 0.4821 | 615 | 0.15 |
| 0.4829 | 616 | 0.1092 |
| 0.4836 | 617 | 0.1239 |
| 0.4844 | 618 | 0.1352 |
| 0.4852 | 619 | 0.1098 |
| 0.4860 | 620 | 0.1341 |
| 0.4868 | 621 | 0.1538 |
| 0.4876 | 622 | 0.1146 |
| 0.4883 | 623 | 0.1498 |
| 0.4891 | 624 | 0.1358 |
| 0.4899 | 625 | 0.1571 |
| 0.4907 | 626 | 0.1508 |
| 0.4915 | 627 | 0.1424 |
| 0.4923 | 628 | 0.1731 |
| 0.4930 | 629 | 0.1398 |
| 0.4938 | 630 | 0.1234 |
| 0.4946 | 631 | 0.1409 |
| 0.4954 | 632 | 0.136 |
| 0.4962 | 633 | 0.1294 |
| 0.4970 | 634 | 0.1612 |
| 0.4977 | 635 | 0.1597 |
| 0.4985 | 636 | 0.1685 |
| 0.4993 | 637 | 0.1723 |
| 0.5001 | 638 | 0.1643 |
| 0.5009 | 639 | 0.1831 |
| 0.5017 | 640 | 0.0791 |
| 0.5024 | 641 | 0.1109 |
| 0.5032 | 642 | 0.1189 |
| 0.5040 | 643 | 0.1484 |
| 0.5048 | 644 | 0.1399 |
| 0.5056 | 645 | 0.1519 |
| 0.5064 | 646 | 0.1182 |
| 0.5072 | 647 | 0.1969 |
| 0.5079 | 648 | 0.1729 |
| 0.5087 | 649 | 0.1119 |
| 0.5095 | 650 | 0.099 |
| 0.5103 | 651 | 0.1265 |
| 0.5111 | 652 | 0.1068 |
| 0.5119 | 653 | 0.173 |
| 0.5126 | 654 | 0.1059 |
| 0.5134 | 655 | 0.1622 |
| 0.5142 | 656 | 0.1787 |
| 0.5150 | 657 | 0.2004 |
| 0.5158 | 658 | 0.1282 |
| 0.5166 | 659 | 0.1218 |
| 0.5173 | 660 | 0.1457 |
| 0.5181 | 661 | 0.0966 |
| 0.5189 | 662 | 0.1101 |
| 0.5197 | 663 | 0.1581 |
| 0.5205 | 664 | 0.1162 |
| 0.5213 | 665 | 0.1724 |
| 0.5220 | 666 | 0.1455 |
| 0.5228 | 667 | 0.1586 |
| 0.5236 | 668 | 0.1283 |
| 0.5244 | 669 | 0.1475 |
| 0.5252 | 670 | 0.1136 |
| 0.5260 | 671 | 0.1461 |
| 0.5267 | 672 | 0.1789 |
| 0.5275 | 673 | 0.1617 |
| 0.5283 | 674 | 0.1344 |
| 0.5291 | 675 | 0.1603 |
| 0.5299 | 676 | 0.1529 |
| 0.5307 | 677 | 0.1135 |
| 0.5315 | 678 | 0.1312 |
| 0.5322 | 679 | 0.1493 |
| 0.5330 | 680 | 0.158 |
| 0.5338 | 681 | 0.1032 |
| 0.5346 | 682 | 0.1082 |
| 0.5354 | 683 | 0.1043 |
| 0.5362 | 684 | 0.1127 |
| 0.5369 | 685 | 0.105 |
| 0.5377 | 686 | 0.1703 |
| 0.5385 | 687 | 0.1805 |
| 0.5393 | 688 | 0.1098 |
| 0.5401 | 689 | 0.1161 |
| 0.5409 | 690 | 0.107 |
| 0.5416 | 691 | 0.1619 |
| 0.5424 | 692 | 0.1076 |
| 0.5432 | 693 | 0.1248 |
| 0.5440 | 694 | 0.117 |
| 0.5448 | 695 | 0.1158 |
| 0.5456 | 696 | 0.1665 |
| 0.5463 | 697 | 0.1261 |
| 0.5471 | 698 | 0.1074 |
| 0.5479 | 699 | 0.1018 |
| 0.5487 | 700 | 0.1425 |
| 0.5495 | 701 | 0.1119 |
| 0.5503 | 702 | 0.1608 |
| 0.5510 | 703 | 0.1732 |
| 0.5518 | 704 | 0.1324 |
| 0.5526 | 705 | 0.1151 |
| 0.5534 | 706 | 0.1368 |
| 0.5542 | 707 | 0.1507 |
| 0.5550 | 708 | 0.1703 |
| 0.5558 | 709 | 0.1286 |
| 0.5565 | 710 | 0.1305 |
| 0.5573 | 711 | 0.1771 |
| 0.5581 | 712 | 0.1106 |
| 0.5589 | 713 | 0.1431 |
| 0.5597 | 714 | 0.1381 |
| 0.5605 | 715 | 0.1388 |
| 0.5612 | 716 | 0.1536 |
| 0.5620 | 717 | 0.1843 |
| 0.5628 | 718 | 0.1695 |
| 0.5636 | 719 | 0.1179 |
| 0.5644 | 720 | 0.1113 |
| 0.5652 | 721 | 0.0922 |
| 0.5659 | 722 | 0.1341 |
| 0.5667 | 723 | 0.1129 |
| 0.5675 | 724 | 0.1344 |
| 0.5683 | 725 | 0.1571 |
| 0.5691 | 726 | 0.1257 |
| 0.5699 | 727 | 0.126 |
| 0.5706 | 728 | 0.1706 |
| 0.5714 | 729 | 0.1245 |
| 0.5722 | 730 | 0.1703 |
| 0.5730 | 731 | 0.1304 |
| 0.5738 | 732 | 0.1552 |
| 0.5746 | 733 | 0.1036 |
| 0.5753 | 734 | 0.1269 |
| 0.5761 | 735 | 0.1355 |
| 0.5769 | 736 | 0.1153 |
| 0.5777 | 737 | 0.0923 |
| 0.5785 | 738 | 0.1359 |
| 0.5793 | 739 | 0.1495 |
| 0.5801 | 740 | 0.1818 |
| 0.5808 | 741 | 0.1325 |
| 0.5816 | 742 | 0.1755 |
| 0.5824 | 743 | 0.1443 |
| 0.5832 | 744 | 0.1255 |
| 0.5840 | 745 | 0.1248 |
| 0.5848 | 746 | 0.1161 |
| 0.5855 | 747 | 0.1513 |
| 0.5863 | 748 | 0.1117 |
| 0.5871 | 749 | 0.156 |
| 0.5879 | 750 | 0.1238 |
| 0.5887 | 751 | 0.1318 |
| 0.5895 | 752 | 0.1406 |
| 0.5902 | 753 | 0.1065 |
| 0.5910 | 754 | 0.1227 |
| 0.5918 | 755 | 0.1444 |
| 0.5926 | 756 | 0.1059 |
| 0.5934 | 757 | 0.1307 |
| 0.5942 | 758 | 0.1253 |
| 0.5949 | 759 | 0.0993 |
| 0.5957 | 760 | 0.1243 |
| 0.5965 | 761 | 0.1326 |
| 0.5973 | 762 | 0.1638 |
| 0.5981 | 763 | 0.1423 |
| 0.5989 | 764 | 0.1804 |
| 0.5996 | 765 | 0.1176 |
| 0.6004 | 766 | 0.1022 |
| 0.6012 | 767 | 0.1451 |
| 0.6020 | 768 | 0.1497 |
| 0.6028 | 769 | 0.1407 |
| 0.6036 | 770 | 0.1235 |
| 0.6044 | 771 | 0.1017 |
| 0.6051 | 772 | 0.1705 |
| 0.6059 | 773 | 0.1385 |
| 0.6067 | 774 | 0.1194 |
| 0.6075 | 775 | 0.1029 |
| 0.6083 | 776 | 0.139 |
| 0.6091 | 777 | 0.1298 |
| 0.6098 | 778 | 0.1878 |
| 0.6106 | 779 | 0.1353 |
| 0.6114 | 780 | 0.1413 |
| 0.6122 | 781 | 0.1129 |
| 0.6130 | 782 | 0.1296 |
| 0.6138 | 783 | 0.1532 |
| 0.6145 | 784 | 0.1769 |
| 0.6153 | 785 | 0.1235 |
| 0.6161 | 786 | 0.1059 |
| 0.6169 | 787 | 0.1224 |
| 0.6177 | 788 | 0.1591 |
| 0.6185 | 789 | 0.1127 |
| 0.6192 | 790 | 0.1519 |
| 0.6200 | 791 | 0.1473 |
| 0.6208 | 792 | 0.0953 |
| 0.6216 | 793 | 0.1302 |
| 0.6224 | 794 | 0.149 |
| 0.6232 | 795 | 0.1053 |
| 0.6239 | 796 | 0.1712 |
| 0.6247 | 797 | 0.1342 |
| 0.6255 | 798 | 0.1199 |
| 0.6263 | 799 | 0.1099 |
| 0.6271 | 800 | 0.1545 |
| 0.6279 | 801 | 0.1158 |
| 0.6286 | 802 | 0.1541 |
| 0.6294 | 803 | 0.1234 |
| 0.6302 | 804 | 0.1451 |
| 0.6310 | 805 | 0.1069 |
| 0.6318 | 806 | 0.1282 |
| 0.6326 | 807 | 0.1589 |
| 0.6334 | 808 | 0.1358 |
| 0.6341 | 809 | 0.1515 |
| 0.6349 | 810 | 0.1334 |
| 0.6357 | 811 | 0.1232 |
| 0.6365 | 812 | 0.1612 |
| 0.6373 | 813 | 0.1379 |
| 0.6381 | 814 | 0.1347 |
| 0.6388 | 815 | 0.1588 |
| 0.6396 | 816 | 0.1173 |
| 0.6404 | 817 | 0.1318 |
| 0.6412 | 818 | 0.1541 |
| 0.6420 | 819 | 0.1054 |
| 0.6428 | 820 | 0.1117 |
| 0.6435 | 821 | 0.1684 |
| 0.6443 | 822 | 0.1234 |
| 0.6451 | 823 | 0.1422 |
| 0.6459 | 824 | 0.0979 |
| 0.6467 | 825 | 0.1365 |
| 0.6475 | 826 | 0.1177 |
| 0.6482 | 827 | 0.1656 |
| 0.6490 | 828 | 0.1288 |
| 0.6498 | 829 | 0.1198 |
| 0.6506 | 830 | 0.1546 |
| 0.6514 | 831 | 0.1397 |
| 0.6522 | 832 | 0.1578 |
| 0.6529 | 833 | 0.1736 |
| 0.6537 | 834 | 0.1174 |
| 0.6545 | 835 | 0.1275 |
| 0.6553 | 836 | 0.0971 |
| 0.6561 | 837 | 0.1285 |
| 0.6569 | 838 | 0.1285 |
| 0.6577 | 839 | 0.1563 |
| 0.6584 | 840 | 0.155 |
| 0.6592 | 841 | 0.1398 |
| 0.6600 | 842 | 0.1465 |
| 0.6608 | 843 | 0.1201 |
| 0.6616 | 844 | 0.1278 |
| 0.6624 | 845 | 0.1155 |
| 0.6631 | 846 | 0.0946 |
| 0.6639 | 847 | 0.1152 |
| 0.6647 | 848 | 0.1191 |
| 0.6655 | 849 | 0.1175 |
| 0.6663 | 850 | 0.133 |
| 0.6671 | 851 | 0.1134 |
| 0.6678 | 852 | 0.1664 |
| 0.6686 | 853 | 0.1803 |
| 0.6694 | 854 | 0.1155 |
| 0.6702 | 855 | 0.1188 |
| 0.6710 | 856 | 0.1283 |
| 0.6718 | 857 | 0.0995 |
| 0.6725 | 858 | 0.1438 |
| 0.6733 | 859 | 0.1105 |
| 0.6741 | 860 | 0.1114 |
| 0.6749 | 861 | 0.089 |
| 0.6757 | 862 | 0.1249 |
| 0.6765 | 863 | 0.1194 |
| 0.6772 | 864 | 0.1591 |
| 0.6780 | 865 | 0.128 |
| 0.6788 | 866 | 0.0787 |
| 0.6796 | 867 | 0.13 |
| 0.6804 | 868 | 0.0992 |
| 0.6812 | 869 | 0.1229 |
| 0.6820 | 870 | 0.095 |
| 0.6827 | 871 | 0.1234 |
| 0.6835 | 872 | 0.1201 |
| 0.6843 | 873 | 0.1069 |
| 0.6851 | 874 | 0.1282 |
| 0.6859 | 875 | 0.1602 |
| 0.6867 | 876 | 0.1 |
| 0.6874 | 877 | 0.1437 |
| 0.6882 | 878 | 0.1167 |
| 0.6890 | 879 | 0.1841 |
| 0.6898 | 880 | 0.1011 |
| 0.6906 | 881 | 0.1264 |
| 0.6914 | 882 | 0.1249 |
| 0.6921 | 883 | 0.1261 |
| 0.6929 | 884 | 0.1608 |
| 0.6937 | 885 | 0.1398 |
| 0.6945 | 886 | 0.15 |
| 0.6953 | 887 | 0.1562 |
| 0.6961 | 888 | 0.1092 |
| 0.6968 | 889 | 0.1311 |
| 0.6976 | 890 | 0.1564 |
| 0.6984 | 891 | 0.1224 |
| 0.6992 | 892 | 0.1126 |
| 0.7000 | 893 | 0.0974 |
| 0.7008 | 894 | 0.1638 |
| 0.7015 | 895 | 0.118 |
| 0.7023 | 896 | 0.1156 |
| 0.7031 | 897 | 0.1141 |
| 0.7039 | 898 | 0.1756 |
| 0.7047 | 899 | 0.1165 |
| 0.7055 | 900 | 0.142 |
| 0.7063 | 901 | 0.1705 |
| 0.7070 | 902 | 0.1311 |
| 0.7078 | 903 | 0.1045 |
| 0.7086 | 904 | 0.1034 |
| 0.7094 | 905 | 0.1205 |
| 0.7102 | 906 | 0.1448 |
| 0.7110 | 907 | 0.1318 |
| 0.7117 | 908 | 0.1369 |
| 0.7125 | 909 | 0.1427 |
| 0.7133 | 910 | 0.1218 |
| 0.7141 | 911 | 0.103 |
| 0.7149 | 912 | 0.1147 |
| 0.7157 | 913 | 0.1297 |
| 0.7164 | 914 | 0.1089 |
| 0.7172 | 915 | 0.1371 |
| 0.7180 | 916 | 0.1182 |
| 0.7188 | 917 | 0.1273 |
| 0.7196 | 918 | 0.1238 |
| 0.7204 | 919 | 0.144 |
| 0.7211 | 920 | 0.0859 |
| 0.7219 | 921 | 0.0939 |
| 0.7227 | 922 | 0.0999 |
| 0.7235 | 923 | 0.1143 |
| 0.7243 | 924 | 0.1251 |
| 0.7251 | 925 | 0.107 |
| 0.7258 | 926 | 0.1077 |
| 0.7266 | 927 | 0.138 |
| 0.7274 | 928 | 0.155 |
| 0.7282 | 929 | 0.0977 |
| 0.7290 | 930 | 0.1003 |
| 0.7298 | 931 | 0.1382 |
| 0.7306 | 932 | 0.1006 |
| 0.7313 | 933 | 0.1027 |
| 0.7321 | 934 | 0.1124 |
| 0.7329 | 935 | 0.1813 |
| 0.7337 | 936 | 0.1159 |
| 0.7345 | 937 | 0.0791 |
| 0.7353 | 938 | 0.1435 |
| 0.7360 | 939 | 0.1288 |
| 0.7368 | 940 | 0.1078 |
| 0.7376 | 941 | 0.127 |
| 0.7384 | 942 | 0.1211 |
| 0.7392 | 943 | 0.1442 |
| 0.7400 | 944 | 0.1668 |
| 0.7407 | 945 | 0.1679 |
| 0.7415 | 946 | 0.1168 |
| 0.7423 | 947 | 0.1626 |
| 0.7431 | 948 | 0.1538 |
| 0.7439 | 949 | 0.0938 |
| 0.7447 | 950 | 0.1657 |
| 0.7454 | 951 | 0.1303 |
| 0.7462 | 952 | 0.098 |
| 0.7470 | 953 | 0.1014 |
| 0.7478 | 954 | 0.1153 |
| 0.7486 | 955 | 0.1192 |
| 0.7494 | 956 | 0.1418 |
| 0.7501 | 957 | 0.1206 |
| 0.7509 | 958 | 0.109 |
| 0.7517 | 959 | 0.1 |
| 0.7525 | 960 | 0.115 |
| 0.7533 | 961 | 0.1099 |
| 0.7541 | 962 | 0.1252 |
| 0.7549 | 963 | 0.0938 |
| 0.7556 | 964 | 0.1704 |
| 0.7564 | 965 | 0.1313 |
| 0.7572 | 966 | 0.1342 |
| 0.7580 | 967 | 0.1648 |
| 0.7588 | 968 | 0.107 |
| 0.7596 | 969 | 0.1177 |
| 0.7603 | 970 | 0.1528 |
| 0.7611 | 971 | 0.1577 |
| 0.7619 | 972 | 0.1109 |
| 0.7627 | 973 | 0.1336 |
| 0.7635 | 974 | 0.1544 |
| 0.7643 | 975 | 0.1304 |
| 0.7650 | 976 | 0.1083 |
| 0.7658 | 977 | 0.1017 |
| 0.7666 | 978 | 0.1492 |
| 0.7674 | 979 | 0.0846 |
| 0.7682 | 980 | 0.1179 |
| 0.7690 | 981 | 0.1634 |
| 0.7697 | 982 | 0.0893 |
| 0.7705 | 983 | 0.1357 |
| 0.7713 | 984 | 0.1757 |
| 0.7721 | 985 | 0.1112 |
| 0.7729 | 986 | 0.1258 |
| 0.7737 | 987 | 0.123 |
| 0.7744 | 988 | 0.1354 |
| 0.7752 | 989 | 0.0855 |
| 0.7760 | 990 | 0.1167 |
| 0.7768 | 991 | 0.1131 |
| 0.7776 | 992 | 0.1222 |
| 0.7784 | 993 | 0.1447 |
| 0.7791 | 994 | 0.1122 |
| 0.7799 | 995 | 0.1508 |
| 0.7807 | 996 | 0.1484 |
| 0.7815 | 997 | 0.0985 |
| 0.7823 | 998 | 0.1686 |
| 0.7831 | 999 | 0.1509 |
| 0.7839 | 1000 | 0.1356 |
| 0.7846 | 1001 | 0.1114 |
| 0.7854 | 1002 | 0.1098 |
| 0.7862 | 1003 | 0.1643 |
| 0.7870 | 1004 | 0.1784 |
| 0.7878 | 1005 | 0.1038 |
| 0.7886 | 1006 | 0.1362 |
| 0.7893 | 1007 | 0.1289 |
| 0.7901 | 1008 | 0.1188 |
| 0.7909 | 1009 | 0.1065 |
| 0.7917 | 1010 | 0.1195 |
| 0.7925 | 1011 | 0.1142 |
| 0.7933 | 1012 | 0.0801 |
| 0.7940 | 1013 | 0.1427 |
| 0.7948 | 1014 | 0.2034 |
| 0.7956 | 1015 | 0.1508 |
| 0.7964 | 1016 | 0.0888 |
| 0.7972 | 1017 | 0.0847 |
| 0.7980 | 1018 | 0.1007 |
| 0.7987 | 1019 | 0.1122 |
| 0.7995 | 1020 | 0.1215 |
| 0.8003 | 1021 | 0.1529 |
| 0.8011 | 1022 | 0.1095 |
| 0.8019 | 1023 | 0.1364 |
| 0.8027 | 1024 | 0.0978 |
| 0.8034 | 1025 | 0.1606 |
| 0.8042 | 1026 | 0.1131 |
| 0.8050 | 1027 | 0.0861 |
| 0.8058 | 1028 | 0.1523 |
| 0.8066 | 1029 | 0.1444 |
| 0.8074 | 1030 | 0.1255 |
| 0.8082 | 1031 | 0.1418 |
| 0.8089 | 1032 | 0.1007 |
| 0.8097 | 1033 | 0.1042 |
| 0.8105 | 1034 | 0.1423 |
| 0.8113 | 1035 | 0.1137 |
| 0.8121 | 1036 | 0.1314 |
| 0.8129 | 1037 | 0.1572 |
| 0.8136 | 1038 | 0.1188 |
| 0.8144 | 1039 | 0.0916 |
| 0.8152 | 1040 | 0.1043 |
| 0.8160 | 1041 | 0.1333 |
| 0.8168 | 1042 | 0.1299 |
| 0.8176 | 1043 | 0.1404 |
| 0.8183 | 1044 | 0.1209 |
| 0.8191 | 1045 | 0.0973 |
| 0.8199 | 1046 | 0.1359 |
| 0.8207 | 1047 | 0.1194 |
| 0.8215 | 1048 | 0.2011 |
| 0.8223 | 1049 | 0.1306 |
| 0.8230 | 1050 | 0.1073 |
| 0.8238 | 1051 | 0.1154 |
| 0.8246 | 1052 | 0.1224 |
| 0.8254 | 1053 | 0.1045 |
| 0.8262 | 1054 | 0.1067 |
| 0.8270 | 1055 | 0.1086 |
| 0.8277 | 1056 | 0.0923 |
| 0.8285 | 1057 | 0.1228 |
| 0.8293 | 1058 | 0.1474 |
| 0.8301 | 1059 | 0.0949 |
| 0.8309 | 1060 | 0.1259 |
| 0.8317 | 1061 | 0.1152 |
| 0.8325 | 1062 | 0.0937 |
| 0.8332 | 1063 | 0.1602 |
| 0.8340 | 1064 | 0.1165 |
| 0.8348 | 1065 | 0.1036 |
| 0.8356 | 1066 | 0.1665 |
| 0.8364 | 1067 | 0.1163 |
| 0.8372 | 1068 | 0.1124 |
| 0.8379 | 1069 | 0.1093 |
| 0.8387 | 1070 | 0.1015 |
| 0.8395 | 1071 | 0.1602 |
| 0.8403 | 1072 | 0.0913 |
| 0.8411 | 1073 | 0.1327 |
| 0.8419 | 1074 | 0.1149 |
| 0.8426 | 1075 | 0.1137 |
| 0.8434 | 1076 | 0.1197 |
| 0.8442 | 1077 | 0.1335 |
| 0.8450 | 1078 | 0.1366 |
| 0.8458 | 1079 | 0.1265 |
| 0.8466 | 1080 | 0.0921 |
| 0.8473 | 1081 | 0.1339 |
| 0.8481 | 1082 | 0.1155 |
| 0.8489 | 1083 | 0.103 |
| 0.8497 | 1084 | 0.1302 |
| 0.8505 | 1085 | 0.1311 |
| 0.8513 | 1086 | 0.1275 |
| 0.8520 | 1087 | 0.1585 |
| 0.8528 | 1088 | 0.0961 |
| 0.8536 | 1089 | 0.1222 |
| 0.8544 | 1090 | 0.0887 |
| 0.8552 | 1091 | 0.1599 |
| 0.8560 | 1092 | 0.0909 |
| 0.8568 | 1093 | 0.1566 |
| 0.8575 | 1094 | 0.1201 |
| 0.8583 | 1095 | 0.0786 |
| 0.8591 | 1096 | 0.1383 |
| 0.8599 | 1097 | 0.1593 |
| 0.8607 | 1098 | 0.1582 |
| 0.8615 | 1099 | 0.1474 |
| 0.8622 | 1100 | 0.0924 |
| 0.8630 | 1101 | 0.1379 |
| 0.8638 | 1102 | 0.1324 |
| 0.8646 | 1103 | 0.1139 |
| 0.8654 | 1104 | 0.0941 |
| 0.8662 | 1105 | 0.1107 |
| 0.8669 | 1106 | 0.1183 |
| 0.8677 | 1107 | 0.1024 |
| 0.8685 | 1108 | 0.1346 |
| 0.8693 | 1109 | 0.131 |
| 0.8701 | 1110 | 0.1244 |
| 0.8709 | 1111 | 0.1423 |
| 0.8716 | 1112 | 0.1604 |
| 0.8724 | 1113 | 0.146 |
| 0.8732 | 1114 | 0.1398 |
| 0.8740 | 1115 | 0.1393 |
| 0.8748 | 1116 | 0.1643 |
| 0.8756 | 1117 | 0.1006 |
| 0.8763 | 1118 | 0.0956 |
| 0.8771 | 1119 | 0.1304 |
| 0.8779 | 1120 | 0.1151 |
| 0.8787 | 1121 | 0.161 |
| 0.8795 | 1122 | 0.0871 |
| 0.8803 | 1123 | 0.1028 |
| 0.8811 | 1124 | 0.1715 |
| 0.8818 | 1125 | 0.1674 |
| 0.8826 | 1126 | 0.1073 |
| 0.8834 | 1127 | 0.0867 |
| 0.8842 | 1128 | 0.1117 |
| 0.8850 | 1129 | 0.1333 |
| 0.8858 | 1130 | 0.126 |
| 0.8865 | 1131 | 0.0853 |
| 0.8873 | 1132 | 0.1152 |
| 0.8881 | 1133 | 0.1467 |
| 0.8889 | 1134 | 0.1643 |
| 0.8897 | 1135 | 0.1117 |
| 0.8905 | 1136 | 0.0909 |
| 0.8912 | 1137 | 0.1645 |
| 0.8920 | 1138 | 0.1359 |
| 0.8928 | 1139 | 0.1204 |
| 0.8936 | 1140 | 0.1574 |
| 0.8944 | 1141 | 0.1187 |
| 0.8952 | 1142 | 0.1588 |
| 0.8959 | 1143 | 0.1419 |
| 0.8967 | 1144 | 0.1109 |
| 0.8975 | 1145 | 0.1048 |
| 0.8983 | 1146 | 0.1232 |
| 0.8991 | 1147 | 0.1159 |
| 0.8999 | 1148 | 0.1442 |
| 0.9006 | 1149 | 0.1345 |
| 0.9014 | 1150 | 0.0893 |
| 0.9022 | 1151 | 0.1033 |
| 0.9030 | 1152 | 0.1133 |
| 0.9038 | 1153 | 0.2009 |
| 0.9046 | 1154 | 0.1669 |
| 0.9053 | 1155 | 0.1095 |
| 0.9061 | 1156 | 0.1099 |
| 0.9069 | 1157 | 0.0893 |
| 0.9077 | 1158 | 0.137 |
| 0.9085 | 1159 | 0.1346 |
| 0.9093 | 1160 | 0.1135 |
| 0.9101 | 1161 | 0.1003 |
| 0.9108 | 1162 | 0.1224 |
| 0.9116 | 1163 | 0.098 |
| 0.9124 | 1164 | 0.1353 |
| 0.9132 | 1165 | 0.1481 |
| 0.9140 | 1166 | 0.1168 |
| 0.9148 | 1167 | 0.0794 |
| 0.9155 | 1168 | 0.0979 |
| 0.9163 | 1169 | 0.1093 |
| 0.9171 | 1170 | 0.1022 |
| 0.9179 | 1171 | 0.1498 |
| 0.9187 | 1172 | 0.1596 |
| 0.9195 | 1173 | 0.1657 |
| 0.9202 | 1174 | 0.1195 |
| 0.9210 | 1175 | 0.1278 |
| 0.9218 | 1176 | 0.1307 |
| 0.9226 | 1177 | 0.1071 |
| 0.9234 | 1178 | 0.0969 |
| 0.9242 | 1179 | 0.1192 |
| 0.9249 | 1180 | 0.1166 |
| 0.9257 | 1181 | 0.1221 |
| 0.9265 | 1182 | 0.1179 |
| 0.9273 | 1183 | 0.1414 |
| 0.9281 | 1184 | 0.1247 |
| 0.9289 | 1185 | 0.1148 |
| 0.9296 | 1186 | 0.1211 |
| 0.9304 | 1187 | 0.1373 |
| 0.9312 | 1188 | 0.1105 |
| 0.9320 | 1189 | 0.0911 |
| 0.9328 | 1190 | 0.1205 |
| 0.9336 | 1191 | 0.1479 |
| 0.9344 | 1192 | 0.115 |
| 0.9351 | 1193 | 0.0951 |
| 0.9359 | 1194 | 0.1501 |
| 0.9367 | 1195 | 0.1069 |
| 0.9375 | 1196 | 0.1091 |
| 0.9383 | 1197 | 0.0988 |
| 0.9391 | 1198 | 0.1278 |
| 0.9398 | 1199 | 0.1221 |
| 0.9406 | 1200 | 0.1418 |
| 0.9414 | 1201 | 0.1354 |
| 0.9422 | 1202 | 0.1435 |
| 0.9430 | 1203 | 0.101 |
| 0.9438 | 1204 | 0.1119 |
| 0.9445 | 1205 | 0.1566 |
| 0.9453 | 1206 | 0.1238 |
| 0.9461 | 1207 | 0.1008 |
| 0.9469 | 1208 | 0.1126 |
| 0.9477 | 1209 | 0.0897 |
| 0.9485 | 1210 | 0.1486 |
| 0.9492 | 1211 | 0.0976 |
| 0.9500 | 1212 | 0.124 |
| 0.9508 | 1213 | 0.1034 |
| 0.9516 | 1214 | 0.1229 |
| 0.9524 | 1215 | 0.1301 |
| 0.9532 | 1216 | 0.1363 |
| 0.9539 | 1217 | 0.1161 |
| 0.9547 | 1218 | 0.1199 |
| 0.9555 | 1219 | 0.0815 |
| 0.9563 | 1220 | 0.1034 |
| 0.9571 | 1221 | 0.1554 |
| 0.9579 | 1222 | 0.1266 |
| 0.9587 | 1223 | 0.1153 |
| 0.9594 | 1224 | 0.1129 |
| 0.9602 | 1225 | 0.1228 |
| 0.9610 | 1226 | 0.1268 |
| 0.9618 | 1227 | 0.1515 |
| 0.9626 | 1228 | 0.0885 |
| 0.9634 | 1229 | 0.1142 |
| 0.9641 | 1230 | 0.187 |
| 0.9649 | 1231 | 0.0836 |
| 0.9657 | 1232 | 0.0967 |
| 0.9665 | 1233 | 0.1516 |
| 0.9673 | 1234 | 0.0581 |
| 0.9681 | 1235 | 0.0847 |
| 0.9688 | 1236 | 0.1105 |
| 0.9696 | 1237 | 0.0958 |
| 0.9704 | 1238 | 0.1238 |
| 0.9712 | 1239 | 0.1076 |
| 0.9720 | 1240 | 0.1137 |
| 0.9728 | 1241 | 0.1236 |
| 0.9735 | 1242 | 0.129 |
| 0.9743 | 1243 | 0.1113 |
| 0.9751 | 1244 | 0.1466 |
| 0.9759 | 1245 | 0.1593 |
| 0.9767 | 1246 | 0.1151 |
| 0.9775 | 1247 | 0.153 |
| 0.9782 | 1248 | 0.1564 |
| 0.9790 | 1249 | 0.1208 |
| 0.9798 | 1250 | 0.0925 |
| 0.9806 | 1251 | 0.1146 |
| 0.9814 | 1252 | 0.1043 |
| 0.9822 | 1253 | 0.0926 |
| 0.9830 | 1254 | 0.1442 |
| 0.9837 | 1255 | 0.134 |
| 0.9845 | 1256 | 0.0841 |
| 0.9853 | 1257 | 0.1256 |
| 0.9861 | 1258 | 0.12 |
| 0.9869 | 1259 | 0.0815 |
| 0.9877 | 1260 | 0.1298 |
| 0.9884 | 1261 | 0.1569 |
| 0.9892 | 1262 | 0.1296 |
| 0.9900 | 1263 | 0.1418 |
| 0.9908 | 1264 | 0.1204 |
| 0.9916 | 1265 | 0.1207 |
| 0.9924 | 1266 | 0.1116 |
| 0.9931 | 1267 | 0.0807 |
| 0.9939 | 1268 | 0.1082 |
| 0.9947 | 1269 | 0.1213 |
| 0.9955 | 1270 | 0.1156 |
| 0.9963 | 1271 | 0.1517 |
| 0.9971 | 1272 | 0.1238 |
| 0.9978 | 1273 | 0.1313 |
| 0.9986 | 1274 | 0.131 |
| 0.9994 | 1275 | 0.1584 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| 0.3292 | 420 | 0.146 |
| 0.3300 | 421 | 0.1325 |
| 0.3308 | 422 | 0.1555 |
| 0.3316 | 423 | 0.1281 |
| 0.3324 | 424 | 0.1869 |
| 0.3331 | 425 | 0.1802 |
| 0.3339 | 426 | 0.1774 |
| 0.3347 | 427 | 0.1495 |
| 0.3355 | 428 | 0.1022 |
| 0.3363 | 429 | 0.1546 |
| 0.3371 | 430 | 0.1512 |
| 0.3378 | 431 | 0.1734 |
| 0.3386 | 432 | 0.1285 |
| 0.3394 | 433 | 0.1562 |
| 0.3402 | 434 | 0.1437 |
| 0.3410 | 435 | 0.1485 |
| 0.3418 | 436 | 0.1443 |
| 0.3425 | 437 | 0.1304 |
| 0.3433 | 438 | 0.1479 |
| 0.3441 | 439 | 0.1544 |
| 0.3449 | 440 | 0.1947 |
| 0.3457 | 441 | 0.1685 |
| 0.3465 | 442 | 0.1715 |
| 0.3472 | 443 | 0.1269 |
| 0.3480 | 444 | 0.1739 |
| 0.3488 | 445 | 0.1798 |
| 0.3496 | 446 | 0.1329 |
| 0.3504 | 447 | 0.1737 |
| 0.3512 | 448 | 0.1197 |
| 0.3519 | 449 | 0.1326 |
| 0.3527 | 450 | 0.131 |
| 0.3535 | 451 | 0.1498 |
| 0.3543 | 452 | 0.1836 |
| 0.3551 | 453 | 0.115 |
| 0.3559 | 454 | 0.1766 |
| 0.3567 | 455 | 0.1289 |
| 0.3574 | 456 | 0.1359 |
| 0.3582 | 457 | 0.1245 |
| 0.3590 | 458 | 0.1793 |
| 0.3598 | 459 | 0.1615 |
| 0.3606 | 460 | 0.1122 |
| 0.3614 | 461 | 0.1767 |
| 0.3621 | 462 | 0.1464 |
| 0.3629 | 463 | 0.1377 |
| 0.3637 | 464 | 0.1341 |
| 0.3645 | 465 | 0.1511 |
| 0.3653 | 466 | 0.1444 |
| 0.3661 | 467 | 0.1407 |
| 0.3668 | 468 | 0.1602 |
| 0.3676 | 469 | 0.1352 |
| 0.3684 | 470 | 0.1203 |
| 0.3692 | 471 | 0.1367 |
| 0.3700 | 472 | 0.1554 |
| 0.3708 | 473 | 0.1006 |
| 0.3715 | 474 | 0.1499 |
| 0.3723 | 475 | 0.1324 |
| 0.3731 | 476 | 0.1654 |
| 0.3739 | 477 | 0.1509 |
| 0.3747 | 478 | 0.1237 |
| 0.3755 | 479 | 0.1298 |
| 0.3762 | 480 | 0.1403 |
| 0.3770 | 481 | 0.1314 |
| 0.3778 | 482 | 0.1704 |
| 0.3786 | 483 | 0.1285 |
| 0.3794 | 484 | 0.1896 |
| 0.3802 | 485 | 0.1358 |
| 0.3810 | 486 | 0.1065 |
| 0.3817 | 487 | 0.1382 |
| 0.3825 | 488 | 0.1372 |
| 0.3833 | 489 | 0.1215 |
| 0.3841 | 490 | 0.2131 |
| 0.3849 | 491 | 0.1512 |
| 0.3857 | 492 | 0.1323 |
| 0.3864 | 493 | 0.1398 |
| 0.3872 | 494 | 0.151 |
| 0.3880 | 495 | 0.1297 |
| 0.3888 | 496 | 0.1852 |
| 0.3896 | 497 | 0.1044 |
| 0.3904 | 498 | 0.1185 |
| 0.3911 | 499 | 0.1724 |
| 0.3919 | 500 | 0.097 |
| 0.3927 | 501 | 0.1486 |
| 0.3935 | 502 | 0.1124 |
| 0.3943 | 503 | 0.1264 |
| 0.3951 | 504 | 0.0993 |
| 0.3958 | 505 | 0.1369 |
| 0.3966 | 506 | 0.1587 |
| 0.3974 | 507 | 0.1455 |
| 0.3982 | 508 | 0.1236 |
| 0.3990 | 509 | 0.1547 |
| 0.3998 | 510 | 0.1286 |
| 0.4005 | 511 | 0.1257 |
| 0.4013 | 512 | 0.1452 |
| 0.4021 | 513 | 0.1595 |
| 0.4029 | 514 | 0.1479 |
| 0.4037 | 515 | 0.166 |
| 0.4045 | 516 | 0.1623 |
| 0.4053 | 517 | 0.136 |
| 0.4060 | 518 | 0.149 |
| 0.4068 | 519 | 0.1496 |
| 0.4076 | 520 | 0.1154 |
| 0.4084 | 521 | 0.1493 |
| 0.4092 | 522 | 0.113 |
| 0.4100 | 523 | 0.137 |
| 0.4107 | 524 | 0.2077 |
| 0.4115 | 525 | 0.112 |
| 0.4123 | 526 | 0.1491 |
| 0.4131 | 527 | 0.1608 |
| 0.4139 | 528 | 0.1446 |
| 0.4147 | 529 | 0.1188 |
| 0.4154 | 530 | 0.137 |
| 0.4162 | 531 | 0.1072 |
| 0.4170 | 532 | 0.088 |
| 0.4178 | 533 | 0.1182 |
| 0.4186 | 534 | 0.2556 |
| 0.4194 | 535 | 0.1907 |
| 0.4201 | 536 | 0.1156 |
| 0.4209 | 537 | 0.1676 |
| 0.4217 | 538 | 0.1236 |
| 0.4225 | 539 | 0.1009 |
| 0.4233 | 540 | 0.1567 |
| 0.4241 | 541 | 0.2222 |
| 0.4248 | 542 | 0.148 |
| 0.4256 | 543 | 0.1182 |
| 0.4264 | 544 | 0.1267 |
| 0.4272 | 545 | 0.127 |
| 0.4280 | 546 | 0.1372 |
| 0.4288 | 547 | 0.1299 |
| 0.4296 | 548 | 0.1711 |
| 0.4303 | 549 | 0.1608 |
| 0.4311 | 550 | 0.1278 |
| 0.4319 | 551 | 0.106 |
| 0.4327 | 552 | 0.1494 |
| 0.4335 | 553 | 0.1093 |
| 0.4343 | 554 | 0.1833 |
| 0.4350 | 555 | 0.1876 |
| 0.4358 | 556 | 0.1774 |
| 0.4366 | 557 | 0.1443 |
| 0.4374 | 558 | 0.1351 |
| 0.4382 | 559 | 0.1094 |
| 0.4390 | 560 | 0.1485 |
| 0.4397 | 561 | 0.1156 |
| 0.4405 | 562 | 0.1324 |
| 0.4413 | 563 | 0.1314 |
| 0.4421 | 564 | 0.1601 |
| 0.4429 | 565 | 0.1434 |
| 0.4437 | 566 | 0.1785 |
| 0.4444 | 567 | 0.1044 |
| 0.4452 | 568 | 0.1123 |
| 0.4460 | 569 | 0.1235 |
| 0.4468 | 570 | 0.1384 |
| 0.4476 | 571 | 0.1357 |
| 0.4484 | 572 | 0.1357 |
| 0.4491 | 573 | 0.1276 |
| 0.4499 | 574 | 0.1554 |
| 0.4507 | 575 | 0.1235 |
| 0.4515 | 576 | 0.1319 |
| 0.4523 | 577 | 0.1862 |
| 0.4531 | 578 | 0.1523 |
| 0.4539 | 579 | 0.1224 |
| 0.4546 | 580 | 0.1629 |
| 0.4554 | 581 | 0.1113 |
| 0.4562 | 582 | 0.1261 |
| 0.4570 | 583 | 0.1246 |
| 0.4578 | 584 | 0.1461 |
| 0.4586 | 585 | 0.1831 |
| 0.4593 | 586 | 0.138 |
| 0.4601 | 587 | 0.1206 |
| 0.4609 | 588 | 0.1269 |
| 0.4617 | 589 | 0.1512 |
| 0.4625 | 590 | 0.1131 |
| 0.4633 | 591 | 0.1206 |
| 0.4640 | 592 | 0.1555 |
| 0.4648 | 593 | 0.1404 |
| 0.4656 | 594 | 0.101 |
| 0.4664 | 595 | 0.0881 |
| 0.4672 | 596 | 0.1793 |
| 0.4680 | 597 | 0.0995 |
| 0.4687 | 598 | 0.1369 |
| 0.4695 | 599 | 0.141 |
| 0.4703 | 600 | 0.1494 |
| 0.4711 | 601 | 0.1824 |
| 0.4719 | 602 | 0.1671 |
| 0.4727 | 603 | 0.1805 |
| 0.4734 | 604 | 0.1475 |
| 0.4742 | 605 | 0.1128 |
| 0.4750 | 606 | 0.1748 |
| 0.4758 | 607 | 0.1564 |
| 0.4766 | 608 | 0.0922 |
| 0.4774 | 609 | 0.1008 |
| 0.4782 | 610 | 0.1324 |
| 0.4789 | 611 | 0.1022 |
| 0.4797 | 612 | 0.1604 |
| 0.4805 | 613 | 0.145 |
| 0.4813 | 614 | 0.1621 |
| 0.4821 | 615 | 0.15 |
| 0.4829 | 616 | 0.1092 |
| 0.4836 | 617 | 0.1239 |
| 0.4844 | 618 | 0.1352 |
| 0.4852 | 619 | 0.1098 |
| 0.4860 | 620 | 0.1341 |
| 0.4868 | 621 | 0.1538 |
| 0.4876 | 622 | 0.1146 |
| 0.4883 | 623 | 0.1498 |
| 0.4891 | 624 | 0.1358 |
| 0.4899 | 625 | 0.1571 |
| 0.4907 | 626 | 0.1508 |
| 0.4915 | 627 | 0.1424 |
| 0.4923 | 628 | 0.1731 |
| 0.4930 | 629 | 0.1398 |
| 0.4938 | 630 | 0.1234 |
| 0.4946 | 631 | 0.1409 |
| 0.4954 | 632 | 0.136 |
| 0.4962 | 633 | 0.1294 |
| 0.4970 | 634 | 0.1612 |
| 0.4977 | 635 | 0.1597 |
| 0.4985 | 636 | 0.1685 |
| 0.4993 | 637 | 0.1723 |
| 0.5001 | 638 | 0.1643 |
| 0.5009 | 639 | 0.1831 |
| 0.5017 | 640 | 0.0791 |
| 0.5024 | 641 | 0.1109 |
| 0.5032 | 642 | 0.1189 |
| 0.5040 | 643 | 0.1484 |
| 0.5048 | 644 | 0.1399 |
| 0.5056 | 645 | 0.1519 |
| 0.5064 | 646 | 0.1182 |
| 0.5072 | 647 | 0.1969 |
| 0.5079 | 648 | 0.1729 |
| 0.5087 | 649 | 0.1119 |
| 0.5095 | 650 | 0.099 |
| 0.5103 | 651 | 0.1265 |
| 0.5111 | 652 | 0.1068 |
| 0.5119 | 653 | 0.173 |
| 0.5126 | 654 | 0.1059 |
| 0.5134 | 655 | 0.1622 |
| 0.5142 | 656 | 0.1787 |
| 0.5150 | 657 | 0.2004 |
| 0.5158 | 658 | 0.1282 |
| 0.5166 | 659 | 0.1218 |
| 0.5173 | 660 | 0.1457 |
| 0.5181 | 661 | 0.0966 |
| 0.5189 | 662 | 0.1101 |
| 0.5197 | 663 | 0.1581 |
| 0.5205 | 664 | 0.1162 |
| 0.5213 | 665 | 0.1724 |
| 0.5220 | 666 | 0.1455 |
| 0.5228 | 667 | 0.1586 |
| 0.5236 | 668 | 0.1283 |
| 0.5244 | 669 | 0.1475 |
| 0.5252 | 670 | 0.1136 |
| 0.5260 | 671 | 0.1461 |
| 0.5267 | 672 | 0.1789 |
| 0.5275 | 673 | 0.1617 |
| 0.5283 | 674 | 0.1344 |
| 0.5291 | 675 | 0.1603 |
| 0.5299 | 676 | 0.1529 |
| 0.5307 | 677 | 0.1135 |
| 0.5315 | 678 | 0.1312 |
| 0.5322 | 679 | 0.1493 |
| 0.5330 | 680 | 0.158 |
| 0.5338 | 681 | 0.1032 |
| 0.5346 | 682 | 0.1082 |
| 0.5354 | 683 | 0.1043 |
| 0.5362 | 684 | 0.1127 |
| 0.5369 | 685 | 0.105 |
| 0.5377 | 686 | 0.1703 |
| 0.5385 | 687 | 0.1805 |
| 0.5393 | 688 | 0.1098 |
| 0.5401 | 689 | 0.1161 |
| 0.5409 | 690 | 0.107 |
| 0.5416 | 691 | 0.1619 |
| 0.5424 | 692 | 0.1076 |
| 0.5432 | 693 | 0.1248 |
| 0.5440 | 694 | 0.117 |
| 0.5448 | 695 | 0.1158 |
| 0.5456 | 696 | 0.1665 |
| 0.5463 | 697 | 0.1261 |
| 0.5471 | 698 | 0.1074 |
| 0.5479 | 699 | 0.1018 |
| 0.5487 | 700 | 0.1425 |
| 0.5495 | 701 | 0.1119 |
| 0.5503 | 702 | 0.1608 |
| 0.5510 | 703 | 0.1732 |
| 0.5518 | 704 | 0.1324 |
| 0.5526 | 705 | 0.1151 |
| 0.5534 | 706 | 0.1368 |
| 0.5542 | 707 | 0.1507 |
| 0.5550 | 708 | 0.1703 |
| 0.5558 | 709 | 0.1286 |
| 0.5565 | 710 | 0.1305 |
| 0.5573 | 711 | 0.1771 |
| 0.5581 | 712 | 0.1106 |
| 0.5589 | 713 | 0.1431 |
| 0.5597 | 714 | 0.1381 |
| 0.5605 | 715 | 0.1388 |
| 0.5612 | 716 | 0.1536 |
| 0.5620 | 717 | 0.1843 |
| 0.5628 | 718 | 0.1695 |
| 0.5636 | 719 | 0.1179 |
| 0.5644 | 720 | 0.1113 |
| 0.5652 | 721 | 0.0922 |
| 0.5659 | 722 | 0.1341 |
| 0.5667 | 723 | 0.1129 |
| 0.5675 | 724 | 0.1344 |
| 0.5683 | 725 | 0.1571 |
| 0.5691 | 726 | 0.1257 |
| 0.5699 | 727 | 0.126 |
| 0.5706 | 728 | 0.1706 |
| 0.5714 | 729 | 0.1245 |
| 0.5722 | 730 | 0.1703 |
| 0.5730 | 731 | 0.1304 |
| 0.5738 | 732 | 0.1552 |
| 0.5746 | 733 | 0.1036 |
| 0.5753 | 734 | 0.1269 |
| 0.5761 | 735 | 0.1355 |
| 0.5769 | 736 | 0.1153 |
| 0.5777 | 737 | 0.0923 |
| 0.5785 | 738 | 0.1359 |
| 0.5793 | 739 | 0.1495 |
| 0.5801 | 740 | 0.1818 |
| 0.5808 | 741 | 0.1325 |
| 0.5816 | 742 | 0.1755 |
| 0.5824 | 743 | 0.1443 |
| 0.5832 | 744 | 0.1255 |
| 0.5840 | 745 | 0.1248 |
| 0.5848 | 746 | 0.1161 |
| 0.5855 | 747 | 0.1513 |
| 0.5863 | 748 | 0.1117 |
| 0.5871 | 749 | 0.156 |
| 0.5879 | 750 | 0.1238 |
| 0.5887 | 751 | 0.1318 |
| 0.5895 | 752 | 0.1406 |
| 0.5902 | 753 | 0.1065 |
| 0.5910 | 754 | 0.1227 |
| 0.5918 | 755 | 0.1444 |
| 0.5926 | 756 | 0.1059 |
| 0.5934 | 757 | 0.1307 |
| 0.5942 | 758 | 0.1253 |
| 0.5949 | 759 | 0.0993 |
| 0.5957 | 760 | 0.1243 |
| 0.5965 | 761 | 0.1326 |
| 0.5973 | 762 | 0.1638 |
| 0.5981 | 763 | 0.1423 |
| 0.5989 | 764 | 0.1804 |
| 0.5996 | 765 | 0.1176 |
| 0.6004 | 766 | 0.1022 |
| 0.6012 | 767 | 0.1451 |
| 0.6020 | 768 | 0.1497 |
| 0.6028 | 769 | 0.1407 |
| 0.6036 | 770 | 0.1235 |
| 0.6044 | 771 | 0.1017 |
| 0.6051 | 772 | 0.1705 |
| 0.6059 | 773 | 0.1385 |
| 0.6067 | 774 | 0.1194 |
| 0.6075 | 775 | 0.1029 |
| 0.6083 | 776 | 0.139 |
| 0.6091 | 777 | 0.1298 |
| 0.6098 | 778 | 0.1878 |
| 0.6106 | 779 | 0.1353 |
| 0.6114 | 780 | 0.1413 |
| 0.6122 | 781 | 0.1129 |
| 0.6130 | 782 | 0.1296 |
| 0.6138 | 783 | 0.1532 |
| 0.6145 | 784 | 0.1769 |
| 0.6153 | 785 | 0.1235 |
| 0.6161 | 786 | 0.1059 |
| 0.6169 | 787 | 0.1224 |
| 0.6177 | 788 | 0.1591 |
| 0.6185 | 789 | 0.1127 |
| 0.6192 | 790 | 0.1519 |
| 0.6200 | 791 | 0.1473 |
| 0.6208 | 792 | 0.0953 |
| 0.6216 | 793 | 0.1302 |
| 0.6224 | 794 | 0.149 |
| 0.6232 | 795 | 0.1053 |
| 0.6239 | 796 | 0.1712 |
| 0.6247 | 797 | 0.1342 |
| 0.6255 | 798 | 0.1199 |
| 0.6263 | 799 | 0.1099 |
| 0.6271 | 800 | 0.1545 |
| 0.6279 | 801 | 0.1158 |
| 0.6286 | 802 | 0.1541 |
| 0.6294 | 803 | 0.1234 |
| 0.6302 | 804 | 0.1451 |
| 0.6310 | 805 | 0.1069 |
| 0.6318 | 806 | 0.1282 |
| 0.6326 | 807 | 0.1589 |
| 0.6334 | 808 | 0.1358 |
| 0.6341 | 809 | 0.1515 |
| 0.6349 | 810 | 0.1334 |
| 0.6357 | 811 | 0.1232 |
| 0.6365 | 812 | 0.1612 |
| 0.6373 | 813 | 0.1379 |
| 0.6381 | 814 | 0.1347 |
| 0.6388 | 815 | 0.1588 |
| 0.6396 | 816 | 0.1173 |
| 0.6404 | 817 | 0.1318 |
| 0.6412 | 818 | 0.1541 |
| 0.6420 | 819 | 0.1054 |
| 0.6428 | 820 | 0.1117 |
| 0.6435 | 821 | 0.1684 |
| 0.6443 | 822 | 0.1234 |
| 0.6451 | 823 | 0.1422 |
| 0.6459 | 824 | 0.0979 |
| 0.6467 | 825 | 0.1365 |
| 0.6475 | 826 | 0.1177 |
| 0.6482 | 827 | 0.1656 |
| 0.6490 | 828 | 0.1288 |
| 0.6498 | 829 | 0.1198 |
| 0.6506 | 830 | 0.1546 |
| 0.6514 | 831 | 0.1397 |
| 0.6522 | 832 | 0.1578 |
| 0.6529 | 833 | 0.1736 |
| 0.6537 | 834 | 0.1174 |
| 0.6545 | 835 | 0.1275 |
| 0.6553 | 836 | 0.0971 |
| 0.6561 | 837 | 0.1285 |
| 0.6569 | 838 | 0.1285 |
| 0.6577 | 839 | 0.1563 |
| 0.6584 | 840 | 0.155 |
| 0.6592 | 841 | 0.1398 |
| 0.6600 | 842 | 0.1465 |
| 0.6608 | 843 | 0.1201 |
| 0.6616 | 844 | 0.1278 |
| 0.6624 | 845 | 0.1155 |
| 0.6631 | 846 | 0.0946 |
| 0.6639 | 847 | 0.1152 |
| 0.6647 | 848 | 0.1191 |
| 0.6655 | 849 | 0.1175 |
| 0.6663 | 850 | 0.133 |
| 0.6671 | 851 | 0.1134 |
| 0.6678 | 852 | 0.1664 |
| 0.6686 | 853 | 0.1803 |
| 0.6694 | 854 | 0.1155 |
| 0.6702 | 855 | 0.1188 |
| 0.6710 | 856 | 0.1283 |
| 0.6718 | 857 | 0.0995 |
| 0.6725 | 858 | 0.1438 |
| 0.6733 | 859 | 0.1105 |
| 0.6741 | 860 | 0.1114 |
| 0.6749 | 861 | 0.089 |
| 0.6757 | 862 | 0.1249 |
| 0.6765 | 863 | 0.1194 |
| 0.6772 | 864 | 0.1591 |
| 0.6780 | 865 | 0.128 |
| 0.6788 | 866 | 0.0787 |
| 0.6796 | 867 | 0.13 |
| 0.6804 | 868 | 0.0992 |
| 0.6812 | 869 | 0.1229 |
| 0.6820 | 870 | 0.095 |
| 0.6827 | 871 | 0.1234 |
| 0.6835 | 872 | 0.1201 |
| 0.6843 | 873 | 0.1069 |
| 0.6851 | 874 | 0.1282 |
| 0.6859 | 875 | 0.1602 |
| 0.6867 | 876 | 0.1 |
| 0.6874 | 877 | 0.1437 |
| 0.6882 | 878 | 0.1167 |
| 0.6890 | 879 | 0.1841 |
| 0.6898 | 880 | 0.1011 |
| 0.6906 | 881 | 0.1264 |
| 0.6914 | 882 | 0.1249 |
| 0.6921 | 883 | 0.1261 |
| 0.6929 | 884 | 0.1608 |
| 0.6937 | 885 | 0.1398 |
| 0.6945 | 886 | 0.15 |
| 0.6953 | 887 | 0.1562 |
| 0.6961 | 888 | 0.1092 |
| 0.6968 | 889 | 0.1311 |
| 0.6976 | 890 | 0.1564 |
| 0.6984 | 891 | 0.1224 |
| 0.6992 | 892 | 0.1126 |
| 0.7000 | 893 | 0.0974 |
| 0.7008 | 894 | 0.1638 |
| 0.7015 | 895 | 0.118 |
| 0.7023 | 896 | 0.1156 |
| 0.7031 | 897 | 0.1141 |
| 0.7039 | 898 | 0.1756 |
| 0.7047 | 899 | 0.1165 |
| 0.7055 | 900 | 0.142 |
| 0.7063 | 901 | 0.1705 |
| 0.7070 | 902 | 0.1311 |
| 0.7078 | 903 | 0.1045 |
| 0.7086 | 904 | 0.1034 |
| 0.7094 | 905 | 0.1205 |
| 0.7102 | 906 | 0.1448 |
| 0.7110 | 907 | 0.1318 |
| 0.7117 | 908 | 0.1369 |
| 0.7125 | 909 | 0.1427 |
| 0.7133 | 910 | 0.1218 |
| 0.7141 | 911 | 0.103 |
| 0.7149 | 912 | 0.1147 |
| 0.7157 | 913 | 0.1297 |
| 0.7164 | 914 | 0.1089 |
| 0.7172 | 915 | 0.1371 |
| 0.7180 | 916 | 0.1182 |
| 0.7188 | 917 | 0.1273 |
| 0.7196 | 918 | 0.1238 |
| 0.7204 | 919 | 0.144 |
| 0.7211 | 920 | 0.0859 |
| 0.7219 | 921 | 0.0939 |
| 0.7227 | 922 | 0.0999 |
| 0.7235 | 923 | 0.1143 |
| 0.7243 | 924 | 0.1251 |
| 0.7251 | 925 | 0.107 |
| 0.7258 | 926 | 0.1077 |
| 0.7266 | 927 | 0.138 |
| 0.7274 | 928 | 0.155 |
| 0.7282 | 929 | 0.0977 |
| 0.7290 | 930 | 0.1003 |
| 0.7298 | 931 | 0.1382 |
| 0.7306 | 932 | 0.1006 |
| 0.7313 | 933 | 0.1027 |
| 0.7321 | 934 | 0.1124 |
| 0.7329 | 935 | 0.1813 |
| 0.7337 | 936 | 0.1159 |
| 0.7345 | 937 | 0.0791 |
| 0.7353 | 938 | 0.1435 |
| 0.7360 | 939 | 0.1288 |
| 0.7368 | 940 | 0.1078 |
| 0.7376 | 941 | 0.127 |
| 0.7384 | 942 | 0.1211 |
| 0.7392 | 943 | 0.1442 |
| 0.7400 | 944 | 0.1668 |
| 0.7407 | 945 | 0.1679 |
| 0.7415 | 946 | 0.1168 |
| 0.7423 | 947 | 0.1626 |
| 0.7431 | 948 | 0.1538 |
| 0.7439 | 949 | 0.0938 |
| 0.7447 | 950 | 0.1657 |
| 0.7454 | 951 | 0.1303 |
| 0.7462 | 952 | 0.098 |
| 0.7470 | 953 | 0.1014 |
| 0.7478 | 954 | 0.1153 |
| 0.7486 | 955 | 0.1192 |
| 0.7494 | 956 | 0.1418 |
| 0.7501 | 957 | 0.1206 |
| 0.7509 | 958 | 0.109 |
| 0.7517 | 959 | 0.1 |
| 0.7525 | 960 | 0.115 |
| 0.7533 | 961 | 0.1099 |
| 0.7541 | 962 | 0.1252 |
| 0.7549 | 963 | 0.0938 |
| 0.7556 | 964 | 0.1704 |
| 0.7564 | 965 | 0.1313 |
| 0.7572 | 966 | 0.1342 |
| 0.7580 | 967 | 0.1648 |
| 0.7588 | 968 | 0.107 |
| 0.7596 | 969 | 0.1177 |
| 0.7603 | 970 | 0.1528 |
| 0.7611 | 971 | 0.1577 |
| 0.7619 | 972 | 0.1109 |
| 0.7627 | 973 | 0.1336 |
| 0.7635 | 974 | 0.1544 |
| 0.7643 | 975 | 0.1304 |
| 0.7650 | 976 | 0.1083 |
| 0.7658 | 977 | 0.1017 |
| 0.7666 | 978 | 0.1492 |
| 0.7674 | 979 | 0.0846 |
| 0.7682 | 980 | 0.1179 |
| 0.7690 | 981 | 0.1634 |
| 0.7697 | 982 | 0.0893 |
| 0.7705 | 983 | 0.1357 |
| 0.7713 | 984 | 0.1757 |
| 0.7721 | 985 | 0.1112 |
| 0.7729 | 986 | 0.1258 |
| 0.7737 | 987 | 0.123 |
| 0.7744 | 988 | 0.1354 |
| 0.7752 | 989 | 0.0855 |
| 0.7760 | 990 | 0.1167 |
| 0.7768 | 991 | 0.1131 |
| 0.7776 | 992 | 0.1222 |
| 0.7784 | 993 | 0.1447 |
| 0.7791 | 994 | 0.1122 |
| 0.7799 | 995 | 0.1508 |
| 0.7807 | 996 | 0.1484 |
| 0.7815 | 997 | 0.0985 |
| 0.7823 | 998 | 0.1686 |
| 0.7831 | 999 | 0.1509 |
| 0.7839 | 1000 | 0.1356 |
| 0.7846 | 1001 | 0.1114 |
| 0.7854 | 1002 | 0.1098 |
| 0.7862 | 1003 | 0.1643 |
| 0.7870 | 1004 | 0.1784 |
| 0.7878 | 1005 | 0.1038 |
| 0.7886 | 1006 | 0.1362 |
| 0.7893 | 1007 | 0.1289 |
| 0.7901 | 1008 | 0.1188 |
| 0.7909 | 1009 | 0.1065 |
| 0.7917 | 1010 | 0.1195 |
| 0.7925 | 1011 | 0.1142 |
| 0.7933 | 1012 | 0.0801 |
| 0.7940 | 1013 | 0.1427 |
| 0.7948 | 1014 | 0.2034 |
| 0.7956 | 1015 | 0.1508 |
| 0.7964 | 1016 | 0.0888 |
| 0.7972 | 1017 | 0.0847 |
| 0.7980 | 1018 | 0.1007 |
| 0.7987 | 1019 | 0.1122 |
| 0.7995 | 1020 | 0.1215 |
| 0.8003 | 1021 | 0.1529 |
| 0.8011 | 1022 | 0.1095 |
| 0.8019 | 1023 | 0.1364 |
| 0.8027 | 1024 | 0.0978 |
| 0.8034 | 1025 | 0.1606 |
| 0.8042 | 1026 | 0.1131 |
| 0.8050 | 1027 | 0.0861 |
| 0.8058 | 1028 | 0.1523 |
| 0.8066 | 1029 | 0.1444 |
| 0.8074 | 1030 | 0.1255 |
| 0.8082 | 1031 | 0.1418 |
| 0.8089 | 1032 | 0.1007 |
| 0.8097 | 1033 | 0.1042 |
| 0.8105 | 1034 | 0.1423 |
| 0.8113 | 1035 | 0.1137 |
| 0.8121 | 1036 | 0.1314 |
| 0.8129 | 1037 | 0.1572 |
| 0.8136 | 1038 | 0.1188 |
| 0.8144 | 1039 | 0.0916 |
| 0.8152 | 1040 | 0.1043 |
| 0.8160 | 1041 | 0.1333 |
| 0.8168 | 1042 | 0.1299 |
| 0.8176 | 1043 | 0.1404 |
| 0.8183 | 1044 | 0.1209 |
| 0.8191 | 1045 | 0.0973 |
| 0.8199 | 1046 | 0.1359 |
| 0.8207 | 1047 | 0.1194 |
| 0.8215 | 1048 | 0.2011 |
| 0.8223 | 1049 | 0.1306 |
| 0.8230 | 1050 | 0.1073 |
| 0.8238 | 1051 | 0.1154 |
| 0.8246 | 1052 | 0.1224 |
| 0.8254 | 1053 | 0.1045 |
| 0.8262 | 1054 | 0.1067 |
| 0.8270 | 1055 | 0.1086 |
| 0.8277 | 1056 | 0.0923 |
| 0.8285 | 1057 | 0.1228 |
| 0.8293 | 1058 | 0.1474 |
| 0.8301 | 1059 | 0.0949 |
| 0.8309 | 1060 | 0.1259 |
| 0.8317 | 1061 | 0.1152 |
| 0.8325 | 1062 | 0.0937 |
| 0.8332 | 1063 | 0.1602 |
| 0.8340 | 1064 | 0.1165 |
| 0.8348 | 1065 | 0.1036 |
| 0.8356 | 1066 | 0.1665 |
| 0.8364 | 1067 | 0.1163 |
| 0.8372 | 1068 | 0.1124 |
| 0.8379 | 1069 | 0.1093 |
| 0.8387 | 1070 | 0.1015 |
| 0.8395 | 1071 | 0.1602 |
| 0.8403 | 1072 | 0.0913 |
| 0.8411 | 1073 | 0.1327 |
| 0.8419 | 1074 | 0.1149 |
| 0.8426 | 1075 | 0.1137 |
| 0.8434 | 1076 | 0.1197 |
| 0.8442 | 1077 | 0.1335 |
| 0.8450 | 1078 | 0.1366 |
| 0.8458 | 1079 | 0.1265 |
| 0.8466 | 1080 | 0.0921 |
| 0.8473 | 1081 | 0.1339 |
| 0.8481 | 1082 | 0.1155 |
| 0.8489 | 1083 | 0.103 |
| 0.8497 | 1084 | 0.1302 |
| 0.8505 | 1085 | 0.1311 |
| 0.8513 | 1086 | 0.1275 |
| 0.8520 | 1087 | 0.1585 |
| 0.8528 | 1088 | 0.0961 |
| 0.8536 | 1089 | 0.1222 |
| 0.8544 | 1090 | 0.0887 |
| 0.8552 | 1091 | 0.1599 |
| 0.8560 | 1092 | 0.0909 |
| 0.8568 | 1093 | 0.1566 |
| 0.8575 | 1094 | 0.1201 |
| 0.8583 | 1095 | 0.0786 |
| 0.8591 | 1096 | 0.1383 |
| 0.8599 | 1097 | 0.1593 |
| 0.8607 | 1098 | 0.1582 |
| 0.8615 | 1099 | 0.1474 |
| 0.8622 | 1100 | 0.0924 |
| 0.8630 | 1101 | 0.1379 |
| 0.8638 | 1102 | 0.1324 |
| 0.8646 | 1103 | 0.1139 |
| 0.8654 | 1104 | 0.0941 |
| 0.8662 | 1105 | 0.1107 |
| 0.8669 | 1106 | 0.1183 |
| 0.8677 | 1107 | 0.1024 |
| 0.8685 | 1108 | 0.1346 |
| 0.8693 | 1109 | 0.131 |
| 0.8701 | 1110 | 0.1244 |
| 0.8709 | 1111 | 0.1423 |
| 0.8716 | 1112 | 0.1604 |
| 0.8724 | 1113 | 0.146 |
| 0.8732 | 1114 | 0.1398 |
| 0.8740 | 1115 | 0.1393 |
| 0.8748 | 1116 | 0.1643 |
| 0.8756 | 1117 | 0.1006 |
| 0.8763 | 1118 | 0.0956 |
| 0.8771 | 1119 | 0.1304 |
| 0.8779 | 1120 | 0.1151 |
| 0.8787 | 1121 | 0.161 |
| 0.8795 | 1122 | 0.0871 |
| 0.8803 | 1123 | 0.1028 |
| 0.8811 | 1124 | 0.1715 |
| 0.8818 | 1125 | 0.1674 |
| 0.8826 | 1126 | 0.1073 |
| 0.8834 | 1127 | 0.0867 |
| 0.8842 | 1128 | 0.1117 |
| 0.8850 | 1129 | 0.1333 |
| 0.8858 | 1130 | 0.126 |
| 0.8865 | 1131 | 0.0853 |
| 0.8873 | 1132 | 0.1152 |
| 0.8881 | 1133 | 0.1467 |
| 0.8889 | 1134 | 0.1643 |
| 0.8897 | 1135 | 0.1117 |
| 0.8905 | 1136 | 0.0909 |
| 0.8912 | 1137 | 0.1645 |
| 0.8920 | 1138 | 0.1359 |
| 0.8928 | 1139 | 0.1204 |
| 0.8936 | 1140 | 0.1574 |
| 0.8944 | 1141 | 0.1187 |
| 0.8952 | 1142 | 0.1588 |
| 0.8959 | 1143 | 0.1419 |
| 0.8967 | 1144 | 0.1109 |
| 0.8975 | 1145 | 0.1048 |
| 0.8983 | 1146 | 0.1232 |
| 0.8991 | 1147 | 0.1159 |
| 0.8999 | 1148 | 0.1442 |
| 0.9006 | 1149 | 0.1345 |
| 0.9014 | 1150 | 0.0893 |
| 0.9022 | 1151 | 0.1033 |
| 0.9030 | 1152 | 0.1133 |
| 0.9038 | 1153 | 0.2009 |
| 0.9046 | 1154 | 0.1669 |
| 0.9053 | 1155 | 0.1095 |
| 0.9061 | 1156 | 0.1099 |
| 0.9069 | 1157 | 0.0893 |
| 0.9077 | 1158 | 0.137 |
| 0.9085 | 1159 | 0.1346 |
| 0.9093 | 1160 | 0.1135 |
| 0.9101 | 1161 | 0.1003 |
| 0.9108 | 1162 | 0.1224 |
| 0.9116 | 1163 | 0.098 |
| 0.9124 | 1164 | 0.1353 |
| 0.9132 | 1165 | 0.1481 |
| 0.9140 | 1166 | 0.1168 |
| 0.9148 | 1167 | 0.0794 |
| 0.9155 | 1168 | 0.0979 |
| 0.9163 | 1169 | 0.1093 |
| 0.9171 | 1170 | 0.1022 |
| 0.9179 | 1171 | 0.1498 |
| 0.9187 | 1172 | 0.1596 |
| 0.9195 | 1173 | 0.1657 |
| 0.9202 | 1174 | 0.1195 |
| 0.9210 | 1175 | 0.1278 |
| 0.9218 | 1176 | 0.1307 |
| 0.9226 | 1177 | 0.1071 |
| 0.9234 | 1178 | 0.0969 |
| 0.9242 | 1179 | 0.1192 |
| 0.9249 | 1180 | 0.1166 |
| 0.9257 | 1181 | 0.1221 |
| 0.9265 | 1182 | 0.1179 |
| 0.9273 | 1183 | 0.1414 |
| 0.9281 | 1184 | 0.1247 |
| 0.9289 | 1185 | 0.1148 |
| 0.9296 | 1186 | 0.1211 |
| 0.9304 | 1187 | 0.1373 |
| 0.9312 | 1188 | 0.1105 |
| 0.9320 | 1189 | 0.0911 |
| 0.9328 | 1190 | 0.1205 |
| 0.9336 | 1191 | 0.1479 |
| 0.9344 | 1192 | 0.115 |
| 0.9351 | 1193 | 0.0951 |
| 0.9359 | 1194 | 0.1501 |
| 0.9367 | 1195 | 0.1069 |
| 0.9375 | 1196 | 0.1091 |
| 0.9383 | 1197 | 0.0988 |
| 0.9391 | 1198 | 0.1278 |
| 0.9398 | 1199 | 0.1221 |
| 0.9406 | 1200 | 0.1418 |
| 0.9414 | 1201 | 0.1354 |
| 0.9422 | 1202 | 0.1435 |
| 0.9430 | 1203 | 0.101 |
| 0.9438 | 1204 | 0.1119 |
| 0.9445 | 1205 | 0.1566 |
| 0.9453 | 1206 | 0.1238 |
| 0.9461 | 1207 | 0.1008 |
| 0.9469 | 1208 | 0.1126 |
| 0.9477 | 1209 | 0.0897 |
| 0.9485 | 1210 | 0.1486 |
| 0.9492 | 1211 | 0.0976 |
| 0.9500 | 1212 | 0.124 |
| 0.9508 | 1213 | 0.1034 |
| 0.9516 | 1214 | 0.1229 |
| 0.9524 | 1215 | 0.1301 |
| 0.9532 | 1216 | 0.1363 |
| 0.9539 | 1217 | 0.1161 |
| 0.9547 | 1218 | 0.1199 |
| 0.9555 | 1219 | 0.0815 |
| 0.9563 | 1220 | 0.1034 |
| 0.9571 | 1221 | 0.1554 |
| 0.9579 | 1222 | 0.1266 |
| 0.9587 | 1223 | 0.1153 |
| 0.9594 | 1224 | 0.1129 |
| 0.9602 | 1225 | 0.1228 |
| 0.9610 | 1226 | 0.1268 |
| 0.9618 | 1227 | 0.1515 |
| 0.9626 | 1228 | 0.0885 |
| 0.9634 | 1229 | 0.1142 |
| 0.9641 | 1230 | 0.187 |
| 0.9649 | 1231 | 0.0836 |
| 0.9657 | 1232 | 0.0967 |
| 0.9665 | 1233 | 0.1516 |
| 0.9673 | 1234 | 0.0581 |
| 0.9681 | 1235 | 0.0847 |
| 0.9688 | 1236 | 0.1105 |
| 0.9696 | 1237 | 0.0958 |
| 0.9704 | 1238 | 0.1238 |
| 0.9712 | 1239 | 0.1076 |
| 0.9720 | 1240 | 0.1137 |
| 0.9728 | 1241 | 0.1236 |
| 0.9735 | 1242 | 0.129 |
| 0.9743 | 1243 | 0.1113 |
| 0.9751 | 1244 | 0.1466 |
| 0.9759 | 1245 | 0.1593 |
| 0.9767 | 1246 | 0.1151 |
| 0.9775 | 1247 | 0.153 |
| 0.9782 | 1248 | 0.1564 |
| 0.9790 | 1249 | 0.1208 |
| 0.9798 | 1250 | 0.0925 |
| 0.9806 | 1251 | 0.1146 |
| 0.9814 | 1252 | 0.1043 |
| 0.9822 | 1253 | 0.0926 |
| 0.9830 | 1254 | 0.1442 |
| 0.9837 | 1255 | 0.134 |
| 0.9845 | 1256 | 0.0841 |
| 0.9853 | 1257 | 0.1256 |
| 0.9861 | 1258 | 0.12 |
| 0.9869 | 1259 | 0.0815 |
| 0.9877 | 1260 | 0.1298 |
| 0.9884 | 1261 | 0.1569 |
| 0.9892 | 1262 | 0.1296 |
| 0.9900 | 1263 | 0.1418 |
| 0.9908 | 1264 | 0.1204 |
| 0.9916 | 1265 | 0.1207 |
| 0.9924 | 1266 | 0.1116 |
| 0.9931 | 1267 | 0.0807 |
| 0.9939 | 1268 | 0.1082 |
| 0.9947 | 1269 | 0.1213 |
| 0.9955 | 1270 | 0.1156 |
| 0.9963 | 1271 | 0.1517 |
| 0.9971 | 1272 | 0.1238 |
| 0.9978 | 1273 | 0.1313 |
| 0.9986 | 1274 | 0.131 |
| 0.9994 | 1275 | 0.1584 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
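For readers unfamiliar with this loss: the following is a minimal, illustrative sketch of finetuning a Sentence Transformers model with `MultipleNegativesRankingLoss` via the classic `model.fit` API. The model name and training pairs are placeholders, not the actual training setup behind this card (whose metadata names `Alibaba-NLP/gte-multilingual-base` as the base model).
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base model for illustration only; loading the card's actual base
# model (Alibaba-NLP/gte-multilingual-base) may additionally require
# trust_remote_code=True.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each example is an (anchor, positive) pair; with this loss, the other
# in-batch positives serve as negatives for each anchor.
train_examples = [
    InputExample(texts=["What is the capital of France?", "Paris is the capital of France."]),
    InputExample(texts=["Who wrote Hamlet?", "Hamlet is a tragedy written by William Shakespeare."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=1)
```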
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Alibaba-NLP/gte-multilingual-base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:816532", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "김택용이 스타크래프트2에서 첫 승리를 거둔 시기는 언제인가?", "sentences": ["2008년 11월 22일, 김택용은 클럽데이 온라인 MSL 결승전에서 허영무에게 선승을 내준 후 내리 3연승, 3:1 쾌승을 거두며 자신의 세 번째 MSL 우승을 달성하였다. 이를 통해 김택용은 프로토스 최초 개인리그 3회 우승자 및 역대 네 번째 금배지(MSL 3회 우승의 상징) 획득자가 되었다.", "김택용은 새로 개막한 SK플래닛 프로리그 시즌2에서 스타크래프트: 브루드 워, 스타크래프트 Ⅱ를 병행해서 출전했다. 스타크래프트 브루드워 실력은 여전히 건재하지만, 스타Ⅱ에서는 스타크래프트 브루드워에서의 실력을 내지 못했다. 2012년 8월까지 택뱅리쌍 일원 중에서 김택용만 유일하게 스타Ⅱ에서의 승리를 하지 못했다. (0승 6패) 더군다나 2012년 봄까지만 해도 스타Ⅱ를 완전히 이해하지 못한듯한 플레이를 보이고 있었지만, 김택용은 2012년 여름이 되어서 스타Ⅱ를 서서히 실력을 쌓고 있었다. 기존의 스타크래프트 브루드워 스타리그가 스타크래프트 Ⅱ로 종목 전환한 뒤에 열린 첫 예선에 참가했으나, 스타Ⅱ의 부족한 실력을 여실히 들어내면서 1:2로 신예선수에게 지며 예선탈락하였다. 또한 GSL 선수들과 맞붙은 WCS 예선에서 프나틱의 장재호를 만나 무기력하게 0:2로 패배하여 탈락하였고, WCG 2012 예선에서도 백동준에게 0:2로 패배해 스타Ⅱ 종목으로 열린 경기에서 모두 패배하였다. 김택용은 스타2리그 뿐만아니라 스타1리그에서도 2010년 여름부터 3년째 스타리그에 이름을 올리지 못했다. 2012년 8월 12일 마침내 염보성을 상대로 어렵게 프로리그 스타2 종목에서 처음으로 승리를 거두었다(1승 6패). 결국 부진을 극복하지 못한 채 2012년 8월 케스파 랭킹 22위로까지 떨어지고 말았다. 하지만 그 후 2012년 8월 18일 김정우 마저 김택용의 스타2 승리 제물이 되었다. 엘리전까지 가는 혈전 끝에 스타Ⅱ에서 두각을 돋보이는 김정우를 격파하였고, 2012년 9월 2일 SK플래닛 스타 프로리그 시즌2 준플레이오프 2차전에서 다시 한번 염보성을 스타Ⅱ로 격파하면서 조금씩 기세를 올렸다.", "이소룡의 아버지는 유명한 광둥 경극 배우였으며, 아버지의 뒤를 이어 아주 어린 나이부터 영화를 접하게 되었고, 생후 3개월에 《금문녀》라는 영화로 데뷔하였다. 그가 18세가 되었을 때 이미 그는 스무 편의 영화에 출연한 상태였다."]}, {"source_sentence": "페니스가 없는 여성의 심리적 반응은 어떠한가?", "sentences": ["PIRA는 무장해제위원회(Decommingsioning Commission)에 의해 2005년 10월 무장투쟁을 포기했음을 확인받았으며, 우익 민주연합당(DUP)를 제외한 정당들도 이를 인정했다. 단, DUP에서는 증거가 없다며 무장투쟁포기사실을 인정하지 않았는데, 이는 DUP가 PIRA를 통해서 존재할 수 있기 때문이다. 그 실례로 북아일랜드의 수도 벨파스트에서 발행하는 일간지에선 PIRA 지도자 오닐이 무장투쟁을 포기하자, 민주연합당 지도자 이언 페이즐리(Ian Paisley)가 \"가지마! 난 네가 필요해!\"라고 말하는 내용의 풍자만화를 실었다.", "성적 만족을 위해서라면 정신적인 사랑 없이 육체적 결합이 가능하다고 주장하였다. 정분이 없이도 성교가 가능하며 성관계는 일종의 오락 내지는 친밀행위에 지나지 않는다고 보았다. 그러나 이는 보수적인 유학자들 외에도 남성 지식인과 기독교계열의 반발을 불러왔다.", "첫째는 \"자신에게 페니스가 없는\"것을 강하게 자각하고, 완전하게 페니스가 없는 존재로 받아들일 것이다. 이것은 열등감을 가진 여자를 만든다. 이 경우 무기력한 인간이 되어버린다고 한다. 둘째는 \"자신은 페니스가 언젠가 나오고, 나는 남자\"라고 믿고, 남성적인 성격을 갖출 경우이다. 세 번째는 성기라는 대상을 선망할 때 성기를 \"페니스 → 아이\"라는 상징으로 생각하고, 아이를 손에 넣는 길을 선택하는 경우이다."]}, {"source_sentence": "신탁청은 언제 해체되었는가?", "sentences": ["신탁통치령(信託統治領, ) 혹은 신탁통치 지역(信託統治 地域)은 국제 연맹 위임통치령의 후신으로 제2차 세계 대전의 종전과 함께 국제 연맹이 유엔으로 대체됨에 따라 생겨났다.다음 11개 지역이 신탁통치령이었다. 1994년 10월 팔라우 독립을 마지막으로 신탁통치령은 소멸되었다.", "히가시코게 역()은 일본 돗토리현 야즈 군 야즈 정에 위치한 서일본 여객철도 인비 선의 철도역이다. 단선 승강장 1면 1선의 구조를 갖춘 지상역이다.", "신탁청은 1994년 12월 31일 해체될 때까지 15,102개의 기업체를 매각하고 4358개의 기업체를 재사유화했으며, 호텔, 식당, 약국 및 서점 등 소규모 사업장 25,030개를 사유화하고 46,552건의 부동산을 매각해 총 91,042건의 사유화를 기록했다. 이를 통해 666억 마르크의 매각수익을 올리고, 2111억 마르크의 투자와 150만 개의 일자리를 보장받았다. 초기에 추산되었던 기업가치가 약 6000억 마르크였던 것에 비하면 1/10 수준밖에 되지 않은 턱없이 낮은 매각수익이다. 사유화된 15,000여 기업 중 구동독인들에 의한 매입은― 주로 경영자기업인수(MBO) 혹은 종업원기업인수(EBO) ― 6%에 지나지않았고, 외국인 투자자 매입도 사유화 전체 기업 중 9% 정도로 나타났다."]}, {"source_sentence": "석신산의 탈수 반응 생성물은 무엇인가요?", "sentences": ["석신산은 푸마르산으로 산화되거나 다이에틸석시네이트(diethylsuccinate, (CHCOCHCH))와 같은 다이에스터로 전환될 수 있다. 이러한 다이에틸 에스터(diethyl ester)는 스토브 축합(Stobbe condensation) 반응의 기질이다. 석신산의 탈수는 석신산 무수물을 생성한다. 석신산은 1,4-뷰테인다이올, 말레산 무수물, 석신이미드, 2-피롤리디논 및 테트라하이드로푸란을 유도하는데 사용될 수 있다.", "2006년 ‘동의대 5·3 동지회’ 회원 등은 “동의대 사건 이후 경찰 조사 과정에서 고문 등 인권침해가 있었다”며 진실·화해를 위한 과거사 정리 위원회(이하 진실화해위)에 진실규명을 신청하였다. 
이로 인해 진실화해위 소위원회는 “구타 등 인권침해가 있어 국가가 사과해야 한다”는 내용의 조사 결과 보고서를 심의·의결, 2010년 1월 19일에 열린 진실화해위 전원위원회에 상정했으나, “진실화해위는 ‘권위주의 통치’ 시기에 일어난 일을 조사 대상으로 삼는데, 동의대 사건은 노태우 정권 시절에 일어난 일이므로 조사 대상 자체가 되지 않는다”며 재적위원 과반수가 이 사건을 각하하기로 의결해 사건이 각하되었다. 다음날인 1월 20일에는 조사하지 않기로 했다고 밝힘으로서, 보고서 내용은 논의조차 되지 못한 것으로 전해졌다.", "저산소 상태에서 석신산의 축적은 활성 산소 생산의 증가에 의한 허혈 재관류 손상(reperfusion injury)과 관련이 있다. 허혈(ischemia) 동안 푸마르산은 퓨린 뉴클레오타이드의 분해 및 말산-아스파르트산 셔틀의 역방향 반응의 일부분으로부터 형성된다. 과도한 푸마르산은 석신산 탈수소효소의 역반응을 통해 석신산의 생산 및 축적을 야기한다. 재관류시 석신산은 신속하게 산화되어 활성산소의 갑작스럽고 광범위한 생성을 초래한다. 활성산소는 세포자살 기작을 촉발시키거나 단백질, 세포막, 세포소기관 등에 산화적 손상을 유발한다. 동물 모델에서 허혈성 석신산 축적의 약리학적 억제는 허혈 재관류 손상을 개선시켰다. 현재 석신산 매개 활성산소 생성의 억제는 약물 치료의 표적으로 조사 중이다."]}, {"source_sentence": "파올로 말디니는 어떤 선수인가요?", "sentences": ["체사레 말디니는 1954년부터 1966년까지 AC 밀란에서 뛰었고, 아들 파올로 말디니는 1985년부터 2009년까지 AC 밀란에서 뛰었으며, 손자 크리스티안 말디니가 2005년 10월 18일 AC 밀란 유스팀에 입단해 3부자가 모두 AC 밀란에서 활약하게 되었다.", "파올로 체사레 말디니 (, 1968년 6월 26일, 이탈리아 밀라노 ~ )는 이탈리아의 은퇴한 축구 선수로, 포지션은 왼쪽 풀백과 센터백이었다. 그는 밀란의 전설적인 수비수 였을 뿐 아니라 역대 최고 수비수로도 불릴 만큼 대단한 선수였다. 현재 밀란의 스포츠 전략 & 개발 디렉터로 활동하고 있다.", "조 주니어(Joe Junior, 본명은 Jose Maria Rodrigues, Jr.(조즈 마리아 로드리게스 주니어), 중문명(中文名)은 羅利期(뤄리지, 나이기), 1947년 7월 22일 ~ )는 영국 국적자 신분의 포르투갈계 영국인 남성으로 중화인민공화국 마카오 특별행정구에서 출생한 중화인민공화국 홍콩 특별행정구의 가수, 작사가, 영화배우, 텔레비전 연기자이다."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,062 |
jeevanions/finetuned_arctic-embedd-l
|
jeevanions
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:3430",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-24T02:07:27Z |
2024-09-24T02:09:06+00:00
| 7 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-l
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3430
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are some illustrative cases that show the implementation of
the AI Bill of Rights?
sentences:
- "SECTION TITLE\nAPPENDIX\nListening to the American People \nThe White House Office\
\ of Science and Technology Policy (OSTP) led a yearlong process to seek and distill\
\ \ninput from people across the country – from impacted communities to industry\
\ stakeholders to \ntechnology developers to other experts across fields and sectors,\
\ as well as policymakers across the Federal \ngovernment – on the issue of algorithmic\
\ and data-driven harms and potential remedies. Through panel \ndiscussions, public\
\ listening sessions, private meetings, a formal request for information, and\
\ input to a \npublicly accessible and widely-publicized email address, people\
\ across the United States spoke up about \nboth the promises and potential harms\
\ of these technologies, and played a central role in shaping the \nBlueprint\
\ for an AI Bill of Rights. \nPanel Discussions to Inform the Blueprint for An\
\ AI Bill of Rights \nOSTP co-hosted a series of six panel discussions in collaboration\
\ with the Center for American Progress,"
- "existing human performance considered as a performance baseline for the algorithm\
\ to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision\
\ possibilities resulting from performance testing \nshould include the possibility\
\ of not deploying the system. \nRisk identification and mitigation. Before deployment,\
\ and in a proactive and ongoing manner, poten\ntial risks of the automated system\
\ should be identified and mitigated. Identified risks should focus on the \n\
potential for meaningful impact on people’s rights, opportunities, or access and\
\ include those to impacted \ncommunities that may not be direct users of the\
\ automated system, risks resulting from purposeful misuse of \nthe system, and\
\ other concerns identified via the consultation process. Assessment and, where\
\ possible, mea\nsurement of the impact of risks should be included and balanced\
\ such that high impact risks receive attention"
- "confidence that their rights, opportunities, and access as well as their expectations\
\ about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE:\
\ \nThis section provides real-life examples of how these guiding principles can\
\ become reality, through laws, policies, and practices. \nIt describes practical\
\ technical and sociotechnical approaches to protecting rights, opportunities,\
\ and access. \nThe examples provided are not critiques or endorsements, but rather\
\ are offered as illustrative cases to help \nprovide a concrete vision for actualizing\
\ the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses\
\ require the cooperation of and collaboration among industry, civil society,\
\ researchers, policymakers, \ntechnologists, and the public. \n14"
- source_sentence: What are the potential impacts of automated systems on data privacy?
sentences:
- "https://arxiv.org/pdf/2305.17493v2 \nSmith, A. et al. (2023) Hallucination or\
\ Confabulation? Neuroanatomy as metaphor in Large Language \nModels. PLOS Digital\
\ Health. \nhttps://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000388\
\ \nSoice, E. et al. (2023) Can large language models democratize access to dual-use\
\ biotechnology? arXiv. \nhttps://arxiv.org/abs/2306.03809 \nSolaiman, I. et al.\
\ (2023) The Gradient of Generative AI Release: Methods and Considerations. arXiv.\
\ \nhttps://arxiv.org/abs/2302.04844 \nStaab, R. et al. (2023) Beyond Memorization:\
\ Violating Privacy via Inference With Large Language \nModels. arXiv. https://arxiv.org/pdf/2310.07298\
\ \nStanford, S. et al. (2023) Whose Opinions Do Language Models Reflect? arXiv.\
\ \nhttps://arxiv.org/pdf/2303.17548 \nStrubell, E. et al. (2019) Energy and Policy\
\ Considerations for Deep Learning in NLP. arXiv. \nhttps://arxiv.org/pdf/1906.02243\
\ \nThe White House (2016) Circular No. A-130, Managing Information as a Strategic\
\ Resource."
- "and data that are considered sensitive are understood to change over time based\
\ on societal norms and context. \n36"
- "yet foreseeable, uses or impacts of automated systems. You should be \nprotected\
\ from inappropriate or irrelevant data use in the design, de\nvelopment, and\
\ deployment of automated systems, and from the \ncompounded harm of its reuse.\
\ Independent evaluation and report\ning that confirms that the system is safe\
\ and effective, including re\nporting of steps taken to mitigate potential harms,\
\ should be per\nformed and the results made public whenever possible. \n15"
- source_sentence: What is the AI Bill of Rights?
sentences:
- "BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR\
\ \nTHE AMERICAN PEOPLE \nOCTOBER 2022"
- "APPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n\
•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n\
•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute,\
\ UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing\
\ scope of technology use in providing for social welfare, including \nin fraud\
\ detection, digital ID systems, and other methods focused on improving efficiency\
\ and reducing cost. \nHowever, various panelists individually cautioned that\
\ these systems may reduce burden for government \nagencies by increasing the\
\ burden and agency of people using and interacting with these technologies. \n\
Additionally, these systems can produce feedback loops and compounded harm, collecting\
\ data from \ncommunities and using it to reinforce inequality. Various panelists\
\ suggested that these harms could be \nmitigated by ensuring community input\
\ at the beginning of the design process, providing ways to opt out of"
- "safe, secure, and resilient; (e) understandable; (f ) responsible and traceable;\
\ (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint\
\ for an AI Bill of Rights is consistent with the Executive Order. \nAffected\
\ agencies across the federal government have released AI use case inventories13\
\ and are implementing \nplans to bring those AI systems into compliance with\
\ the Executive Order or retire them. \nThe law and policy landscape for motor\
\ vehicles shows that strong safety regulations—and \nmeasures to address harms\
\ when they occur—can enhance innovation in the context of com-\nplex technologies.\
\ Cars, like automated digital systems, comprise a complex collection of components.\
\ \nThe National Highway Traffic Safety Administration,14 through its rigorous\
\ standards and independent \nevaluation, helps make sure vehicles on our roads\
\ are safe without limiting manufacturers’ ability to \ninnovate.15 At the same\
\ time, rules of the road are implemented locally to impose contextually appropriate"
- source_sentence: What are the best practices for benchmarking AI system security
and resilience?
sentences:
- "NOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\
\ for automated systems are meant to serve as a blueprint for the development\
\ of additional \ntechnical standards and practices that are tailored for particular\
\ sectors and contexts. \nAn automated system should provide demonstrably clear,\
\ timely, understandable, and accessible notice of use, and \nexplanations as\
\ to how and why a decision was made or an action was taken by the system. These\
\ expectations are \nexplained below. \nProvide clear, timely, understandable,\
\ and accessible notice of use and explanations \nGenerally accessible plain\
\ language documentation. The entity responsible for using the automated \nsystem\
\ should ensure that documentation describing the overall system (including any\
\ human components) is \npublic and easy to find. The documentation should describe,\
\ in plain language, how the system works and how"
- "content performance and impact, and work in collaboration with AI Actors \nexperienced\
\ in user research and experience. \nHuman-AI Configuration \nMG-4.1-004 Implement\
\ active learning techniques to identify instances where the model fails \nor\
\ produces unexpected outputs. \nConfabulation \nMG-4.1-005 \nShare transparency\
\ reports with internal and external stakeholders that detail \nsteps taken to\
\ update the GAI system to enhance transparency and \naccountability. \nHuman-AI\
\ Configuration; Harmful \nBias and Homogenization \nMG-4.1-006 \nTrack dataset\
\ modifications for provenance by monitoring data deletions, \nrectification requests,\
\ and other changes that may impact the verifiability of \ncontent origins. \n\
Information Integrity"
- "33 \nMEASURE 2.7: AI system security and resilience – as identified in the MAP\
\ function – are evaluated and documented. \nAction ID \nSuggested Action \nGAI\
\ Risks \nMS-2.7-001 \nApply established security measures to: Assess likelihood\
\ and magnitude of \nvulnerabilities and threats such as backdoors, compromised\
\ dependencies, data \nbreaches, eavesdropping, man-in-the-middle attacks, reverse\
\ engineering, \nautonomous agents, model theft or exposure of model weights,\
\ AI inference, \nbypass, extraction, and other baseline security concerns. \n\
Data Privacy; Information Integrity; \nInformation Security; Value Chain \nand\
\ Component Integration \nMS-2.7-002 \nBenchmark GAI system security and resilience\
\ related to content provenance \nagainst industry standards and best practices.\
\ Compare GAI system security \nfeatures and content provenance methods against\
\ industry state-of-the-art. \nInformation Integrity; Information \nSecurity \n\
MS-2.7-003 \nConduct user surveys to gather user satisfaction with the AI-generated\
\ content"
- source_sentence: How should risks or trustworthiness characteristics that cannot
be measured be documented?
sentences:
- "MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during\
\ the MAP function are selected for \nimplementation starting with the most significant\
\ AI risks. The risks or trustworthiness characteristics that will not – or cannot\
\ – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI\
\ Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital\
\ content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to\
\ analyze content provenance and detect data \nanomalies, verify the authenticity\
\ of digital signatures, and identify patterns \nassociated with misinformation\
\ or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation\
\ metrics by demographic factors to identify any \ndiscrepancies in how content\
\ provenance mechanisms work across diverse \npopulations. \nInformation Integrity;\
\ Harmful \nBias and Homogenization \nMS-1.1-004 Develop a suite of metrics to\
\ evaluate structured public feedback exercises"
- "AI technology can produce varied outputs in multiple modalities and present many\
\ classes of user \ninterfaces. This leads to a broader set of AI Actors interacting\
\ with GAI systems for widely differing \napplications and contexts of use. These\
\ can include data labeling and preparation, development of GAI \nmodels, content\
\ moderation, code generation and review, text generation and editing, image and\
\ video \ngeneration, summarization, search, and chat. These activities can take\
\ place within organizational \nsettings or in the public domain. \nOrganizations\
\ can restrict AI applications that cause harm, exceed stated risk tolerances,\
\ or that conflict \nwith their tolerances or values. Governance tools and protocols\
\ that are applied to other types of AI \nsystems can be applied to GAI systems.\
\ These plans and actions include: \n• Accessibility and reasonable \naccommodations\
\ \n• AI actor credentials and qualifications \n• Alignment to organizational\
\ values \n• Auditing and assessment \n• Change-management controls"
- "existing human performance considered as a performance baseline for the algorithm\
\ to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision\
\ possibilities resulting from performance testing \nshould include the possibility\
\ of not deploying the system. \nRisk identification and mitigation. Before deployment,\
\ and in a proactive and ongoing manner, poten\ntial risks of the automated system\
\ should be identified and mitigated. Identified risks should focus on the \n\
potential for meaningful impact on people’s rights, opportunities, or access and\
\ include those to impacted \ncommunities that may not be direct users of the\
\ automated system, risks resulting from purposeful misuse of \nthe system, and\
\ other concerns identified via the consultation process. Assessment and, where\
\ possible, mea\nsurement of the impact of risks should be included and balanced\
\ such that high impact risks receive attention"
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.2807017543859649
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4649122807017544
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5350877192982456
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7192982456140351
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2807017543859649
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15497076023391812
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10701754385964912
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0719298245614035
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2807017543859649
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4649122807017544
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5350877192982456
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7192982456140351
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4797086283187805
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.40644667223614606
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.423567506926962
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.2807017543859649
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.4649122807017544
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5350877192982456
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.7192982456140351
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.2807017543859649
name: Dot Precision@1
- type: dot_precision@3
value: 0.15497076023391812
name: Dot Precision@3
- type: dot_precision@5
value: 0.10701754385964912
name: Dot Precision@5
- type: dot_precision@10
value: 0.0719298245614035
name: Dot Precision@10
- type: dot_recall@1
value: 0.2807017543859649
name: Dot Recall@1
- type: dot_recall@3
value: 0.4649122807017544
name: Dot Recall@3
- type: dot_recall@5
value: 0.5350877192982456
name: Dot Recall@5
- type: dot_recall@10
value: 0.7192982456140351
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4797086283187805
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.40644667223614606
name: Dot Mrr@10
- type: dot_map@100
value: 0.423567506926962
name: Dot Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision 9a9e5834d2e89cdd8bb72b64111dde496e4fe78c -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
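The same embeddings can also be produced without the Sentence Transformers wrapper. The sketch below is an assumption-laden illustration rather than an officially documented path: it loads the checkpoint with plain `transformers` and manually applies the CLS-token pooling and L2 normalization configured in modules (1) and (2) above.
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "jeevanions/finetuned_arctic-embedd-l"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # assumption: loads as a plain BertModel

texts = ["What is the AI Bill of Rights?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 1024)

# CLS-token pooling, mirroring the Pooling module (pooling_mode_cls_token=True)
cls_embeddings = token_embeddings[:, 0]

# L2 normalization, mirroring the Normalize module; after this, the dot
# product of two embeddings equals their cosine similarity
embeddings = torch.nn.functional.normalize(cls_embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 1024])
```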
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jeevanions/finetuned_arctic-embedd-l")
# Run inference
sentences = [
'How should risks or trustworthiness characteristics that cannot be measured be documented?',
'MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for \nimplementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to analyze content provenance and detect data \nanomalies, verify the authenticity of digital signatures, and identify patterns \nassociated with misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation metrics by demographic factors to identify any \ndiscrepancies in how content provenance mechanisms work across diverse \npopulations. \nInformation Integrity; Harmful \nBias and Homogenization \nMS-1.1-004 Develop a suite of metrics to evaluate structured public feedback exercises',
'existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\xad\ntial risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\xad\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2807 |
| cosine_accuracy@3 | 0.4649 |
| cosine_accuracy@5 | 0.5351 |
| cosine_accuracy@10 | 0.7193 |
| cosine_precision@1 | 0.2807 |
| cosine_precision@3 | 0.155 |
| cosine_precision@5 | 0.107 |
| cosine_precision@10 | 0.0719 |
| cosine_recall@1 | 0.2807 |
| cosine_recall@3 | 0.4649 |
| cosine_recall@5 | 0.5351 |
| cosine_recall@10 | 0.7193 |
| cosine_ndcg@10 | 0.4797 |
| cosine_mrr@10 | 0.4064 |
| **cosine_map@100** | **0.4236** |
| dot_accuracy@1 | 0.2807 |
| dot_accuracy@3 | 0.4649 |
| dot_accuracy@5 | 0.5351 |
| dot_accuracy@10 | 0.7193 |
| dot_precision@1 | 0.2807 |
| dot_precision@3 | 0.155 |
| dot_precision@5 | 0.107 |
| dot_precision@10 | 0.0719 |
| dot_recall@1 | 0.2807 |
| dot_recall@3 | 0.4649 |
| dot_recall@5 | 0.5351 |
| dot_recall@10 | 0.7193 |
| dot_ndcg@10 | 0.4797 |
| dot_mrr@10 | 0.4064 |
| dot_map@100 | 0.4236 |
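As a rough, hypothetical sketch of how an evaluation like the one above can be reproduced, the snippet below wires up `InformationRetrievalEvaluator` with placeholder queries, corpus passages, and relevance judgments; the actual evaluation data behind the reported numbers is not included in this card.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("jeevanions/finetuned_arctic-embedd-l")

# Illustrative placeholder data: query ids -> query text, doc ids -> passages,
# and query ids -> sets of relevant doc ids.
queries = {"q1": "What is the AI Bill of Rights?"}
corpus = {
    "d1": "BLUEPRINT FOR AN AI BILL OF RIGHTS - MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE",
    "d2": "Unrelated passage about motor vehicle safety regulations.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # computed metrics (e.g. cosine_map@100); the exact return type varies by library version
print(results)
```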
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,430 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.71 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 172.72 tokens</li><li>max: 356 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the key steps to obtain input from stakeholder communities to identify unacceptable use in AI systems?</code> | <code>15 <br>GV-1.3-004 Obtain input from stakeholder communities to identify unacceptable use, in <br>accordance with activities in the AI RMF Map function. <br>CBRN Information or Capabilities; <br>Obscene, Degrading, and/or <br>Abusive Content; Harmful Bias <br>and Homogenization; Dangerous, <br>Violent, or Hateful Content <br>GV-1.3-005 <br>Maintain an updated hierarchy of identified and expected GAI risks connected to <br>contexts of GAI model advancement and use, potentially including specialized risk <br>levels for GAI systems that address issues such as model collapse and algorithmic <br>monoculture. <br>Harmful Bias and Homogenization <br>GV-1.3-006 <br>Reevaluate organizational risk tolerances to account for unacceptable negative risk <br>(such as where significant negative impacts are imminent, severe harms are <br>actually occurring, or large-scale risks could occur); and broad GAI negative risks, <br>including: Immature safety or risk cultures related to AI and GAI design, <br>development and deployment, public information integrity risks, including impacts</code> |
| <code>How can organizations maintain an updated hierarchy of identified and expected GAI risks?</code> | <code>15 <br>GV-1.3-004 Obtain input from stakeholder communities to identify unacceptable use, in <br>accordance with activities in the AI RMF Map function. <br>CBRN Information or Capabilities; <br>Obscene, Degrading, and/or <br>Abusive Content; Harmful Bias <br>and Homogenization; Dangerous, <br>Violent, or Hateful Content <br>GV-1.3-005 <br>Maintain an updated hierarchy of identified and expected GAI risks connected to <br>contexts of GAI model advancement and use, potentially including specialized risk <br>levels for GAI systems that address issues such as model collapse and algorithmic <br>monoculture. <br>Harmful Bias and Homogenization <br>GV-1.3-006 <br>Reevaluate organizational risk tolerances to account for unacceptable negative risk <br>(such as where significant negative impacts are imminent, severe harms are <br>actually occurring, or large-scale risks could occur); and broad GAI negative risks, <br>including: Immature safety or risk cultures related to AI and GAI design, <br>development and deployment, public information integrity risks, including impacts</code> |
| <code>What are some examples of unacceptable uses of AI as identified by stakeholder communities?</code> | <code>15 <br>GV-1.3-004 Obtain input from stakeholder communities to identify unacceptable use, in <br>accordance with activities in the AI RMF Map function. <br>CBRN Information or Capabilities; <br>Obscene, Degrading, and/or <br>Abusive Content; Harmful Bias <br>and Homogenization; Dangerous, <br>Violent, or Hateful Content <br>GV-1.3-005 <br>Maintain an updated hierarchy of identified and expected GAI risks connected to <br>contexts of GAI model advancement and use, potentially including specialized risk <br>levels for GAI systems that address issues such as model collapse and algorithmic <br>monoculture. <br>Harmful Bias and Homogenization <br>GV-1.3-006 <br>Reevaluate organizational risk tolerances to account for unacceptable negative risk <br>(such as where significant negative impacts are imminent, severe harms are <br>actually occurring, or large-scale risks could occur); and broad GAI negative risks, <br>including: Immature safety or risk cultures related to AI and GAI design, <br>development and deployment, public information integrity risks, including impacts</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
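In other words, the run wraps a `MultipleNegativesRankingLoss` (in-batch negatives over the `sentence_0`/`sentence_1` pairs) in a `MatryoshkaLoss`, so the same ranking objective is applied at five truncated embedding sizes with equal weights. A minimal sketch of that setup (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Inner loss: treats each (sentence_0, sentence_1) pair as (anchor, positive)
# and uses the other in-batch pairs as negatives.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer wrapper: re-applies the inner loss at each truncated dimensionality,
# matching the JSON parameters above (equal weights, all dims each step).
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```

Training at several dimensionalities is what lets downstream users truncate the embeddings (for example to 256 dimensions) with a controlled quality trade-off.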
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
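Put together, a run with these non-default values would look roughly like the sketch below. The `output_dir` and the one-row dataset are placeholders; the actual run used the 3,430-pair dataset described above and scored an `InformationRetrievalEvaluator` at each `steps` evaluation rather than an eval dataset.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# One-row placeholder; column names must match the dataset described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What must be documented about unmeasured AI risks?"],
    "sentence_1": ["Risks that cannot be measured are properly documented."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic-embedd-l",  # placeholder
    num_train_epochs=5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # stand-in; the real run used an IR evaluator
    loss=loss,
)
trainer.train()
```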
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
Note: the training loss is logged as 0.0 throughout, which is expected here: with a `per_device_train_batch_size` of 1, `MultipleNegativesRankingLoss` has no in-batch negatives, so each step's softmax runs over a single candidate and the loss collapses to zero. The table also contains two consecutive training passes; the epoch counter restarts at 0.0146 partway through.
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | cosine_map@100 |
|:------:|:-----:|:-------------:|:--------------:|
| 0.0146 | 50 | - | 0.4134 |
| 0.0292 | 100 | - | 0.4134 |
| 0.0437 | 150 | - | 0.4134 |
| 0.0583 | 200 | - | 0.4134 |
| 0.0729 | 250 | - | 0.4134 |
| 0.0875 | 300 | - | 0.4134 |
| 0.1020 | 350 | - | 0.4134 |
| 0.1166 | 400 | - | 0.4134 |
| 0.1312 | 450 | - | 0.4134 |
| 0.1458 | 500 | 0.0 | 0.4134 |
| 0.1603 | 550 | - | 0.4134 |
| 0.1749 | 600 | - | 0.4134 |
| 0.1895 | 650 | - | 0.4134 |
| 0.2041 | 700 | - | 0.4134 |
| 0.2187 | 750 | - | 0.4134 |
| 0.2332 | 800 | - | 0.4134 |
| 0.2478 | 850 | - | 0.4134 |
| 0.2624 | 900 | - | 0.4134 |
| 0.2770 | 950 | - | 0.4134 |
| 0.2915 | 1000 | 0.0 | 0.4134 |
| 0.3061 | 1050 | - | 0.4134 |
| 0.3207 | 1100 | - | 0.4134 |
| 0.3353 | 1150 | - | 0.4134 |
| 0.3499 | 1200 | - | 0.4134 |
| 0.3644 | 1250 | - | 0.4134 |
| 0.3790 | 1300 | - | 0.4134 |
| 0.3936 | 1350 | - | 0.4134 |
| 0.4082 | 1400 | - | 0.4134 |
| 0.4227 | 1450 | - | 0.4134 |
| 0.4373 | 1500 | 0.0 | 0.4134 |
| 0.4519 | 1550 | - | 0.4134 |
| 0.4665 | 1600 | - | 0.4134 |
| 0.4810 | 1650 | - | 0.4134 |
| 0.4956 | 1700 | - | 0.4134 |
| 0.5102 | 1750 | - | 0.4134 |
| 0.5248 | 1800 | - | 0.4134 |
| 0.5394 | 1850 | - | 0.4134 |
| 0.5539 | 1900 | - | 0.4134 |
| 0.5685 | 1950 | - | 0.4134 |
| 0.5831 | 2000 | 0.0 | 0.4135 |
| 0.5977 | 2050 | - | 0.4135 |
| 0.6122 | 2100 | - | 0.4135 |
| 0.6268 | 2150 | - | 0.4135 |
| 0.6414 | 2200 | - | 0.4135 |
| 0.6560 | 2250 | - | 0.4135 |
| 0.6706 | 2300 | - | 0.4135 |
| 0.6851 | 2350 | - | 0.4135 |
| 0.6997 | 2400 | - | 0.4135 |
| 0.7143 | 2450 | - | 0.4134 |
| 0.7289 | 2500 | 0.0 | 0.4134 |
| 0.7434 | 2550 | - | 0.4134 |
| 0.7580 | 2600 | - | 0.4134 |
| 0.7726 | 2650 | - | 0.4134 |
| 0.7872 | 2700 | - | 0.4134 |
| 0.8017 | 2750 | - | 0.4134 |
| 0.8163 | 2800 | - | 0.4134 |
| 0.8309 | 2850 | - | 0.4135 |
| 0.8455 | 2900 | - | 0.4135 |
| 0.8601 | 2950 | - | 0.4135 |
| 0.8746 | 3000 | 0.0 | 0.4135 |
| 0.8892 | 3050 | - | 0.4135 |
| 0.9038 | 3100 | - | 0.4135 |
| 0.9184 | 3150 | - | 0.4135 |
| 0.9329 | 3200 | - | 0.4135 |
| 0.9475 | 3250 | - | 0.4135 |
| 0.9621 | 3300 | - | 0.4135 |
| 0.9767 | 3350 | - | 0.4135 |
| 0.9913 | 3400 | - | 0.4135 |
| 1.0 | 3430 | - | 0.4135 |
| 1.0058 | 3450 | - | 0.4135 |
| 1.0204 | 3500 | 0.0 | 0.4135 |
| 1.0350 | 3550 | - | 0.4135 |
| 1.0496 | 3600 | - | 0.4135 |
| 1.0641 | 3650 | - | 0.4135 |
| 1.0787 | 3700 | - | 0.4135 |
| 1.0933 | 3750 | - | 0.4135 |
| 1.1079 | 3800 | - | 0.4135 |
| 1.1224 | 3850 | - | 0.4135 |
| 1.1370 | 3900 | - | 0.4179 |
| 1.1516 | 3950 | - | 0.4179 |
| 1.1662 | 4000 | 0.0 | 0.4179 |
| 1.1808 | 4050 | - | 0.4179 |
| 1.1953 | 4100 | - | 0.4179 |
| 1.2099 | 4150 | - | 0.4179 |
| 1.2245 | 4200 | - | 0.4179 |
| 1.2391 | 4250 | - | 0.4179 |
| 1.2536 | 4300 | - | 0.4179 |
| 1.2682 | 4350 | - | 0.4179 |
| 1.2828 | 4400 | - | 0.4179 |
| 1.2974 | 4450 | - | 0.4179 |
| 1.3120 | 4500 | 0.0 | 0.4179 |
| 1.3265 | 4550 | - | 0.4179 |
| 1.3411 | 4600 | - | 0.4179 |
| 1.3557 | 4650 | - | 0.4179 |
| 1.3703 | 4700 | - | 0.4179 |
| 1.3848 | 4750 | - | 0.4179 |
| 1.3994 | 4800 | - | 0.4179 |
| 1.4140 | 4850 | - | 0.4179 |
| 1.4286 | 4900 | - | 0.4179 |
| 1.4431 | 4950 | - | 0.4179 |
| 1.4577 | 5000 | 0.0 | 0.4179 |
| 1.4723 | 5050 | - | 0.4179 |
| 1.4869 | 5100 | - | 0.4179 |
| 1.5015 | 5150 | - | 0.4179 |
| 1.5160 | 5200 | - | 0.4179 |
| 1.5306 | 5250 | - | 0.4179 |
| 1.5452 | 5300 | - | 0.4179 |
| 1.5598 | 5350 | - | 0.4179 |
| 1.5743 | 5400 | - | 0.4179 |
| 1.5889 | 5450 | - | 0.4179 |
| 1.6035 | 5500 | 0.0 | 0.4179 |
| 1.6181 | 5550 | - | 0.4179 |
| 1.6327 | 5600 | - | 0.4179 |
| 1.6472 | 5650 | - | 0.4179 |
| 1.6618 | 5700 | - | 0.4179 |
| 1.6764 | 5750 | - | 0.4179 |
| 1.6910 | 5800 | - | 0.4179 |
| 1.7055 | 5850 | - | 0.4179 |
| 1.7201 | 5900 | - | 0.4179 |
| 1.7347 | 5950 | - | 0.4179 |
| 1.7493 | 6000 | 0.0 | 0.4179 |
| 1.7638 | 6050 | - | 0.4179 |
| 1.7784 | 6100 | - | 0.4179 |
| 1.7930 | 6150 | - | 0.4179 |
| 1.8076 | 6200 | - | 0.4179 |
| 1.8222 | 6250 | - | 0.4179 |
| 1.8367 | 6300 | - | 0.4179 |
| 1.8513 | 6350 | - | 0.4179 |
| 1.8659 | 6400 | - | 0.4179 |
| 1.8805 | 6450 | - | 0.4179 |
| 1.8950 | 6500 | 0.0 | 0.4179 |
| 1.9096 | 6550 | - | 0.4179 |
| 1.9242 | 6600 | - | 0.4179 |
| 1.9388 | 6650 | - | 0.4179 |
| 1.9534 | 6700 | - | 0.4179 |
| 1.9679 | 6750 | - | 0.4179 |
| 1.9825 | 6800 | - | 0.4179 |
| 1.9971 | 6850 | - | 0.4179 |
| 2.0 | 6860 | - | 0.4179 |
| 2.0117 | 6900 | - | 0.4179 |
| 2.0262 | 6950 | - | 0.4179 |
| 2.0408 | 7000 | 0.0 | 0.4179 |
| 2.0554 | 7050 | - | 0.4179 |
| 2.0700 | 7100 | - | 0.4179 |
| 2.0845 | 7150 | - | 0.4179 |
| 2.0991 | 7200 | - | 0.4179 |
| 2.1137 | 7250 | - | 0.4179 |
| 2.1283 | 7300 | - | 0.4179 |
| 2.1429 | 7350 | - | 0.4179 |
| 2.1574 | 7400 | - | 0.4179 |
| 2.1720 | 7450 | - | 0.4179 |
| 2.1866 | 7500 | 0.0 | 0.4179 |
| 2.2012 | 7550 | - | 0.4179 |
| 2.2157 | 7600 | - | 0.4179 |
| 2.2303 | 7650 | - | 0.4179 |
| 2.2449 | 7700 | - | 0.4179 |
| 2.2595 | 7750 | - | 0.4179 |
| 2.2741 | 7800 | - | 0.4179 |
| 2.2886 | 7850 | - | 0.4179 |
| 2.3032 | 7900 | - | 0.4179 |
| 2.3178 | 7950 | - | 0.4179 |
| 2.3324 | 8000 | 0.0 | 0.4179 |
| 2.3469 | 8050 | - | 0.4179 |
| 2.3615 | 8100 | - | 0.4179 |
| 2.3761 | 8150 | - | 0.4179 |
| 2.3907 | 8200 | - | 0.4179 |
| 2.4052 | 8250 | - | 0.4179 |
| 2.4198 | 8300 | - | 0.4179 |
| 2.4344 | 8350 | - | 0.4179 |
| 2.4490 | 8400 | - | 0.4179 |
| 2.4636 | 8450 | - | 0.4179 |
| 2.4781 | 8500 | 0.0 | 0.4179 |
| 2.4927 | 8550 | - | 0.4179 |
| 2.5073 | 8600 | - | 0.4179 |
| 2.5219 | 8650 | - | 0.4179 |
| 2.5364 | 8700 | - | 0.4179 |
| 2.5510 | 8750 | - | 0.4179 |
| 2.5656 | 8800 | - | 0.4179 |
| 2.5802 | 8850 | - | 0.4179 |
| 2.5948 | 8900 | - | 0.4179 |
| 2.6093 | 8950 | - | 0.4179 |
| 2.6239 | 9000 | 0.0 | 0.4179 |
| 2.6385 | 9050 | - | 0.4179 |
| 2.6531 | 9100 | - | 0.4179 |
| 2.6676 | 9150 | - | 0.4179 |
| 2.6822 | 9200 | - | 0.4179 |
| 2.6968 | 9250 | - | 0.4223 |
| 2.7114 | 9300 | - | 0.4223 |
| 2.7259 | 9350 | - | 0.4223 |
| 2.7405 | 9400 | - | 0.4223 |
| 2.7551 | 9450 | - | 0.4223 |
| 2.7697 | 9500 | 0.0 | 0.4223 |
| 2.7843 | 9550 | - | 0.4223 |
| 2.7988 | 9600 | - | 0.4223 |
| 2.8134 | 9650 | - | 0.4223 |
| 2.8280 | 9700 | - | 0.4223 |
| 2.8426 | 9750 | - | 0.4223 |
| 2.8571 | 9800 | - | 0.4223 |
| 2.8717 | 9850 | - | 0.4223 |
| 2.8863 | 9900 | - | 0.4223 |
| 2.9009 | 9950 | - | 0.4223 |
| 2.9155 | 10000 | 0.0 | 0.4223 |
| 2.9300 | 10050 | - | 0.4223 |
| 2.9446 | 10100 | - | 0.4223 |
| 2.9592 | 10150 | - | 0.4223 |
| 2.9738 | 10200 | - | 0.4223 |
| 2.9883 | 10250 | - | 0.4223 |
| 3.0 | 10290 | - | 0.4223 |
| 3.0029 | 10300 | - | 0.4223 |
| 3.0175 | 10350 | - | 0.4223 |
| 3.0321 | 10400 | - | 0.4223 |
| 3.0466 | 10450 | - | 0.4223 |
| 3.0612 | 10500 | 0.0 | 0.4223 |
| 3.0758 | 10550 | - | 0.4223 |
| 3.0904 | 10600 | - | 0.4223 |
| 3.1050 | 10650 | - | 0.4223 |
| 3.1195 | 10700 | - | 0.4223 |
| 3.1341 | 10750 | - | 0.4223 |
| 3.1487 | 10800 | - | 0.4223 |
| 3.1633 | 10850 | - | 0.4223 |
| 3.1778 | 10900 | - | 0.4223 |
| 3.1924 | 10950 | - | 0.4223 |
| 3.2070 | 11000 | 0.0 | 0.4223 |
| 3.2216 | 11050 | - | 0.4223 |
| 3.2362 | 11100 | - | 0.4223 |
| 3.2507 | 11150 | - | 0.4223 |
| 3.2653 | 11200 | - | 0.4223 |
| 3.2799 | 11250 | - | 0.4223 |
| 3.2945 | 11300 | - | 0.4223 |
| 3.3090 | 11350 | - | 0.4223 |
| 3.3236 | 11400 | - | 0.4223 |
| 3.3382 | 11450 | - | 0.4223 |
| 3.3528 | 11500 | 0.0 | 0.4223 |
| 3.3673 | 11550 | - | 0.4223 |
| 3.3819 | 11600 | - | 0.4223 |
| 3.3965 | 11650 | - | 0.4223 |
| 3.4111 | 11700 | - | 0.4223 |
| 3.4257 | 11750 | - | 0.4223 |
| 3.4402 | 11800 | - | 0.4223 |
| 3.4548 | 11850 | - | 0.4223 |
| 3.4694 | 11900 | - | 0.4223 |
| 3.4840 | 11950 | - | 0.4223 |
| 3.4985 | 12000 | 0.0 | 0.4223 |
| 3.5131 | 12050 | - | 0.4223 |
| 3.5277 | 12100 | - | 0.4223 |
| 3.5423 | 12150 | - | 0.4223 |
| 3.5569 | 12200 | - | 0.4223 |
| 3.5714 | 12250 | - | 0.4223 |
| 3.5860 | 12300 | - | 0.4223 |
| 3.6006 | 12350 | - | 0.4223 |
| 3.6152 | 12400 | - | 0.4223 |
| 3.6297 | 12450 | - | 0.4223 |
| 3.6443 | 12500 | 0.0 | 0.4223 |
| 3.6589 | 12550 | - | 0.4223 |
| 3.6735 | 12600 | - | 0.4223 |
| 3.6880 | 12650 | - | 0.4223 |
| 3.7026 | 12700 | - | 0.4223 |
| 3.7172 | 12750 | - | 0.4223 |
| 3.7318 | 12800 | - | 0.4223 |
| 3.7464 | 12850 | - | 0.4223 |
| 3.7609 | 12900 | - | 0.4223 |
| 3.7755 | 12950 | - | 0.4223 |
| 3.7901 | 13000 | 0.0 | 0.4223 |
| 3.8047 | 13050 | - | 0.4223 |
| 3.8192 | 13100 | - | 0.4226 |
| 3.8338 | 13150 | - | 0.4226 |
| 3.8484 | 13200 | - | 0.4226 |
| 3.8630 | 13250 | - | 0.4226 |
| 3.8776 | 13300 | - | 0.4226 |
| 3.8921 | 13350 | - | 0.4226 |
| 3.9067 | 13400 | - | 0.4226 |
| 3.9213 | 13450 | - | 0.4226 |
| 3.9359 | 13500 | 0.0 | 0.4226 |
| 3.9504 | 13550 | - | 0.4226 |
| 3.9650 | 13600 | - | 0.4226 |
| 3.9796 | 13650 | - | 0.4226 |
| 3.9942 | 13700 | - | 0.4226 |
| 4.0 | 13720 | - | 0.4226 |
| 4.0087 | 13750 | - | 0.4226 |
| 4.0233 | 13800 | - | 0.4226 |
| 4.0379 | 13850 | - | 0.4226 |
| 4.0525 | 13900 | - | 0.4226 |
| 4.0671 | 13950 | - | 0.4226 |
| 4.0816 | 14000 | 0.0 | 0.4226 |
| 4.0962 | 14050 | - | 0.4226 |
| 4.1108 | 14100 | - | 0.4226 |
| 4.1254 | 14150 | - | 0.4226 |
| 4.1399 | 14200 | - | 0.4226 |
| 4.1545 | 14250 | - | 0.4226 |
| 4.1691 | 14300 | - | 0.4226 |
| 4.1837 | 14350 | - | 0.4226 |
| 4.1983 | 14400 | - | 0.4226 |
| 4.2128 | 14450 | - | 0.4226 |
| 4.2274 | 14500 | 0.0 | 0.4226 |
| 4.2420 | 14550 | - | 0.4226 |
| 4.2566 | 14600 | - | 0.4226 |
| 4.2711 | 14650 | - | 0.4226 |
| 4.2857 | 14700 | - | 0.4226 |
| 4.3003 | 14750 | - | 0.4226 |
| 4.3149 | 14800 | - | 0.4226 |
| 4.3294 | 14850 | - | 0.4226 |
| 4.3440 | 14900 | - | 0.4226 |
| 4.3586 | 14950 | - | 0.4226 |
| 4.3732 | 15000 | 0.0 | 0.4226 |
| 4.3878 | 15050 | - | 0.4226 |
| 4.4023 | 15100 | - | 0.4226 |
| 4.4169 | 15150 | - | 0.4226 |
| 4.4315 | 15200 | - | 0.4226 |
| 4.4461 | 15250 | - | 0.4226 |
| 4.4606 | 15300 | - | 0.4226 |
| 4.4752 | 15350 | - | 0.4226 |
| 4.4898 | 15400 | - | 0.4226 |
| 4.5044 | 15450 | - | 0.4226 |
| 4.5190 | 15500 | 0.0 | 0.4226 |
| 4.5335 | 15550 | - | 0.4226 |
| 4.5481 | 15600 | - | 0.4226 |
| 4.5627 | 15650 | - | 0.4226 |
| 4.5773 | 15700 | - | 0.4226 |
| 4.5918 | 15750 | - | 0.4226 |
| 4.6064 | 15800 | - | 0.4226 |
| 4.6210 | 15850 | - | 0.4226 |
| 4.6356 | 15900 | - | 0.4226 |
| 4.6501 | 15950 | - | 0.4226 |
| 4.6647 | 16000 | 0.0 | 0.4226 |
| 4.6793 | 16050 | - | 0.4226 |
| 4.6939 | 16100 | - | 0.4226 |
| 4.7085 | 16150 | - | 0.4226 |
| 4.7230 | 16200 | - | 0.4226 |
| 4.7376 | 16250 | - | 0.4226 |
| 4.7522 | 16300 | - | 0.4226 |
| 4.7668 | 16350 | - | 0.4226 |
| 4.7813 | 16400 | - | 0.4226 |
| 4.7959 | 16450 | - | 0.4226 |
| 4.8105 | 16500 | 0.0 | 0.4226 |
| 4.8251 | 16550 | - | 0.4226 |
| 4.8397 | 16600 | - | 0.4226 |
| 4.8542 | 16650 | - | 0.4226 |
| 4.8688 | 16700 | - | 0.4226 |
| 4.8834 | 16750 | - | 0.4226 |
| 4.8980 | 16800 | - | 0.4226 |
| 4.9125 | 16850 | - | 0.4226 |
| 4.9271 | 16900 | - | 0.4226 |
| 4.9417 | 16950 | - | 0.4226 |
| 4.9563 | 17000 | 0.0 | 0.4226 |
| 4.9708 | 17050 | - | 0.4226 |
| 4.9854 | 17100 | - | 0.4226 |
| 5.0 | 17150 | - | 0.4226 |
| 0.0146 | 50 | - | 0.4226 |
| 0.0292 | 100 | - | 0.4226 |
| 0.0437 | 150 | - | 0.4226 |
| 0.0583 | 200 | - | 0.4226 |
| 0.0729 | 250 | - | 0.4226 |
| 0.0875 | 300 | - | 0.4226 |
| 0.1020 | 350 | - | 0.4226 |
| 0.1166 | 400 | - | 0.4226 |
| 0.1312 | 450 | - | 0.4226 |
| 0.1458 | 500 | 0.0 | 0.4226 |
| 0.1603 | 550 | - | 0.4226 |
| 0.1749 | 600 | - | 0.4226 |
| 0.1895 | 650 | - | 0.4226 |
| 0.2041 | 700 | - | 0.4226 |
| 0.2187 | 750 | - | 0.4226 |
| 0.2332 | 800 | - | 0.4226 |
| 0.2478 | 850 | - | 0.4226 |
| 0.2624 | 900 | - | 0.4226 |
| 0.2770 | 950 | - | 0.4226 |
| 0.2915 | 1000 | 0.0 | 0.4227 |
| 0.3061 | 1050 | - | 0.4227 |
| 0.3207 | 1100 | - | 0.4227 |
| 0.3353 | 1150 | - | 0.4227 |
| 0.3499 | 1200 | - | 0.4227 |
| 0.3644 | 1250 | - | 0.4227 |
| 0.3790 | 1300 | - | 0.4227 |
| 0.3936 | 1350 | - | 0.4227 |
| 0.4082 | 1400 | - | 0.4227 |
| 0.4227 | 1450 | - | 0.4227 |
| 0.4373 | 1500 | 0.0 | 0.4227 |
| 0.4519 | 1550 | - | 0.4227 |
| 0.4665 | 1600 | - | 0.4227 |
| 0.4810 | 1650 | - | 0.4227 |
| 0.4956 | 1700 | - | 0.4227 |
| 0.5102 | 1750 | - | 0.4227 |
| 0.5248 | 1800 | - | 0.4227 |
| 0.5394 | 1850 | - | 0.4227 |
| 0.5539 | 1900 | - | 0.4227 |
| 0.5685 | 1950 | - | 0.4227 |
| 0.5831 | 2000 | 0.0 | 0.4227 |
| 0.5977 | 2050 | - | 0.4227 |
| 0.6122 | 2100 | - | 0.4227 |
| 0.6268 | 2150 | - | 0.4227 |
| 0.6414 | 2200 | - | 0.4227 |
| 0.6560 | 2250 | - | 0.4227 |
| 0.6706 | 2300 | - | 0.4227 |
| 0.6851 | 2350 | - | 0.4227 |
| 0.6997 | 2400 | - | 0.4227 |
| 0.7143 | 2450 | - | 0.4227 |
| 0.7289 | 2500 | 0.0 | 0.4227 |
| 0.7434 | 2550 | - | 0.4227 |
| 0.7580 | 2600 | - | 0.4227 |
| 0.7726 | 2650 | - | 0.4227 |
| 0.7872 | 2700 | - | 0.4227 |
| 0.8017 | 2750 | - | 0.4227 |
| 0.8163 | 2800 | - | 0.4227 |
| 0.8309 | 2850 | - | 0.4227 |
| 0.8455 | 2900 | - | 0.4227 |
| 0.8601 | 2950 | - | 0.4227 |
| 0.8746 | 3000 | 0.0 | 0.4227 |
| 0.8892 | 3050 | - | 0.4227 |
| 0.9038 | 3100 | - | 0.4227 |
| 0.9184 | 3150 | - | 0.4227 |
| 0.9329 | 3200 | - | 0.4227 |
| 0.9475 | 3250 | - | 0.4227 |
| 0.9621 | 3300 | - | 0.4227 |
| 0.9767 | 3350 | - | 0.4227 |
| 0.9913 | 3400 | - | 0.4227 |
| 1.0 | 3430 | - | 0.4227 |
| 1.0058 | 3450 | - | 0.4227 |
| 1.0204 | 3500 | 0.0 | 0.4227 |
| 1.0350 | 3550 | - | 0.4227 |
| 1.0496 | 3600 | - | 0.4227 |
| 1.0641 | 3650 | - | 0.4227 |
| 1.0787 | 3700 | - | 0.4227 |
| 1.0933 | 3750 | - | 0.4227 |
| 1.1079 | 3800 | - | 0.4227 |
| 1.1224 | 3850 | - | 0.4227 |
| 1.1370 | 3900 | - | 0.4227 |
| 1.1516 | 3950 | - | 0.4227 |
| 1.1662 | 4000 | 0.0 | 0.4227 |
| 1.1808 | 4050 | - | 0.4227 |
| 1.1953 | 4100 | - | 0.4227 |
| 1.2099 | 4150 | - | 0.4231 |
| 1.2245 | 4200 | - | 0.4231 |
| 1.2391 | 4250 | - | 0.4231 |
| 1.2536 | 4300 | - | 0.4231 |
| 1.2682 | 4350 | - | 0.4231 |
| 1.2828 | 4400 | - | 0.4231 |
| 1.2974 | 4450 | - | 0.4231 |
| 1.3120 | 4500 | 0.0 | 0.4231 |
| 1.3265 | 4550 | - | 0.4231 |
| 1.3411 | 4600 | - | 0.4231 |
| 1.3557 | 4650 | - | 0.4232 |
| 1.3703 | 4700 | - | 0.4232 |
| 1.3848 | 4750 | - | 0.4232 |
| 1.3994 | 4800 | - | 0.4232 |
| 1.4140 | 4850 | - | 0.4232 |
| 1.4286 | 4900 | - | 0.4232 |
| 1.4431 | 4950 | - | 0.4232 |
| 1.4577 | 5000 | 0.0 | 0.4232 |
| 1.4723 | 5050 | - | 0.4232 |
| 1.4869 | 5100 | - | 0.4232 |
| 1.5015 | 5150 | - | 0.4232 |
| 1.5160 | 5200 | - | 0.4232 |
| 1.5306 | 5250 | - | 0.4232 |
| 1.5452 | 5300 | - | 0.4233 |
| 1.5598 | 5350 | - | 0.4233 |
| 1.5743 | 5400 | - | 0.4233 |
| 1.5889 | 5450 | - | 0.4233 |
| 1.6035 | 5500 | 0.0 | 0.4233 |
| 1.6181 | 5550 | - | 0.4233 |
| 1.6327 | 5600 | - | 0.4233 |
| 1.6472 | 5650 | - | 0.4233 |
| 1.6618 | 5700 | - | 0.4233 |
| 1.6764 | 5750 | - | 0.4233 |
| 1.6910 | 5800 | - | 0.4233 |
| 1.7055 | 5850 | - | 0.4233 |
| 1.7201 | 5900 | - | 0.4233 |
| 1.7347 | 5950 | - | 0.4233 |
| 1.7493 | 6000 | 0.0 | 0.4233 |
| 1.7638 | 6050 | - | 0.4234 |
| 1.7784 | 6100 | - | 0.4234 |
| 1.7930 | 6150 | - | 0.4234 |
| 1.8076 | 6200 | - | 0.4234 |
| 1.8222 | 6250 | - | 0.4234 |
| 1.8367 | 6300 | - | 0.4234 |
| 1.8513 | 6350 | - | 0.4234 |
| 1.8659 | 6400 | - | 0.4234 |
| 1.8805 | 6450 | - | 0.4234 |
| 1.8950 | 6500 | 0.0 | 0.4234 |
| 1.9096 | 6550 | - | 0.4234 |
| 1.9242 | 6600 | - | 0.4234 |
| 1.9388 | 6650 | - | 0.4234 |
| 1.9534 | 6700 | - | 0.4234 |
| 1.9679 | 6750 | - | 0.4234 |
| 1.9825 | 6800 | - | 0.4234 |
| 1.9971 | 6850 | - | 0.4234 |
| 2.0 | 6860 | - | 0.4234 |
| 2.0117 | 6900 | - | 0.4234 |
| 2.0262 | 6950 | - | 0.4234 |
| 2.0408 | 7000 | 0.0 | 0.4234 |
| 2.0554 | 7050 | - | 0.4234 |
| 2.0700 | 7100 | - | 0.4234 |
| 2.0845 | 7150 | - | 0.4234 |
| 2.0991 | 7200 | - | 0.4234 |
| 2.1137 | 7250 | - | 0.4234 |
| 2.1283 | 7300 | - | 0.4234 |
| 2.1429 | 7350 | - | 0.4234 |
| 2.1574 | 7400 | - | 0.4234 |
| 2.1720 | 7450 | - | 0.4234 |
| 2.1866 | 7500 | 0.0 | 0.4234 |
| 2.2012 | 7550 | - | 0.4234 |
| 2.2157 | 7600 | - | 0.4234 |
| 2.2303 | 7650 | - | 0.4234 |
| 2.2449 | 7700 | - | 0.4234 |
| 2.2595 | 7750 | - | 0.4234 |
| 2.2741 | 7800 | - | 0.4234 |
| 2.2886 | 7850 | - | 0.4234 |
| 2.3032 | 7900 | - | 0.4234 |
| 2.3178 | 7950 | - | 0.4234 |
| 2.3324 | 8000 | 0.0 | 0.4234 |
| 2.3469 | 8050 | - | 0.4234 |
| 2.3615 | 8100 | - | 0.4234 |
| 2.3761 | 8150 | - | 0.4234 |
| 2.3907 | 8200 | - | 0.4234 |
| 2.4052 | 8250 | - | 0.4234 |
| 2.4198 | 8300 | - | 0.4234 |
| 2.4344 | 8350 | - | 0.4234 |
| 2.4490 | 8400 | - | 0.4234 |
| 2.4636 | 8450 | - | 0.4234 |
| 2.4781 | 8500 | 0.0 | 0.4234 |
| 2.4927 | 8550 | - | 0.4234 |
| 2.5073 | 8600 | - | 0.4234 |
| 2.5219 | 8650 | - | 0.4234 |
| 2.5364 | 8700 | - | 0.4234 |
| 2.5510 | 8750 | - | 0.4234 |
| 2.5656 | 8800 | - | 0.4234 |
| 2.5802 | 8850 | - | 0.4234 |
| 2.5948 | 8900 | - | 0.4234 |
| 2.6093 | 8950 | - | 0.4234 |
| 2.6239 | 9000 | 0.0 | 0.4234 |
| 2.6385 | 9050 | - | 0.4234 |
| 2.6531 | 9100 | - | 0.4234 |
| 2.6676 | 9150 | - | 0.4234 |
| 2.6822 | 9200 | - | 0.4234 |
| 2.6968 | 9250 | - | 0.4234 |
| 2.7114 | 9300 | - | 0.4234 |
| 2.7259 | 9350 | - | 0.4234 |
| 2.7405 | 9400 | - | 0.4234 |
| 2.7551 | 9450 | - | 0.4234 |
| 2.7697 | 9500 | 0.0 | 0.4234 |
| 2.7843 | 9550 | - | 0.4234 |
| 2.7988 | 9600 | - | 0.4234 |
| 2.8134 | 9650 | - | 0.4234 |
| 2.8280 | 9700 | - | 0.4234 |
| 2.8426 | 9750 | - | 0.4234 |
| 2.8571 | 9800 | - | 0.4234 |
| 2.8717 | 9850 | - | 0.4234 |
| 2.8863 | 9900 | - | 0.4234 |
| 2.9009 | 9950 | - | 0.4234 |
| 2.9155 | 10000 | 0.0 | 0.4234 |
| 2.9300 | 10050 | - | 0.4234 |
| 2.9446 | 10100 | - | 0.4234 |
| 2.9592 | 10150 | - | 0.4234 |
| 2.9738 | 10200 | - | 0.4234 |
| 2.9883 | 10250 | - | 0.4234 |
| 3.0 | 10290 | - | 0.4234 |
| 3.0029 | 10300 | - | 0.4234 |
| 3.0175 | 10350 | - | 0.4234 |
| 3.0321 | 10400 | - | 0.4234 |
| 3.0466 | 10450 | - | 0.4234 |
| 3.0612 | 10500 | 0.0 | 0.4234 |
| 3.0758 | 10550 | - | 0.4234 |
| 3.0904 | 10600 | - | 0.4234 |
| 3.1050 | 10650 | - | 0.4234 |
| 3.1195 | 10700 | - | 0.4234 |
| 3.1341 | 10750 | - | 0.4234 |
| 3.1487 | 10800 | - | 0.4234 |
| 3.1633 | 10850 | - | 0.4234 |
| 3.1778 | 10900 | - | 0.4234 |
| 3.1924 | 10950 | - | 0.4234 |
| 3.2070 | 11000 | 0.0 | 0.4234 |
| 3.2216 | 11050 | - | 0.4234 |
| 3.2362 | 11100 | - | 0.4234 |
| 3.2507 | 11150 | - | 0.4234 |
| 3.2653 | 11200 | - | 0.4234 |
| 3.2799 | 11250 | - | 0.4234 |
| 3.2945 | 11300 | - | 0.4234 |
| 3.3090 | 11350 | - | 0.4234 |
| 3.3236 | 11400 | - | 0.4234 |
| 3.3382 | 11450 | - | 0.4234 |
| 3.3528 | 11500 | 0.0 | 0.4234 |
| 3.3673 | 11550 | - | 0.4234 |
| 3.3819 | 11600 | - | 0.4234 |
| 3.3965 | 11650 | - | 0.4234 |
| 3.4111 | 11700 | - | 0.4234 |
| 3.4257 | 11750 | - | 0.4234 |
| 3.4402 | 11800 | - | 0.4234 |
| 3.4548 | 11850 | - | 0.4235 |
| 3.4694 | 11900 | - | 0.4235 |
| 3.4840 | 11950 | - | 0.4235 |
| 3.4985 | 12000 | 0.0 | 0.4235 |
| 3.5131 | 12050 | - | 0.4235 |
| 3.5277 | 12100 | - | 0.4235 |
| 3.5423 | 12150 | - | 0.4235 |
| 3.5569 | 12200 | - | 0.4235 |
| 3.5714 | 12250 | - | 0.4235 |
| 3.5860 | 12300 | - | 0.4235 |
| 3.6006 | 12350 | - | 0.4235 |
| 3.6152 | 12400 | - | 0.4235 |
| 3.6297 | 12450 | - | 0.4235 |
| 3.6443 | 12500 | 0.0 | 0.4235 |
| 3.6589 | 12550 | - | 0.4235 |
| 3.6735 | 12600 | - | 0.4235 |
| 3.6880 | 12650 | - | 0.4235 |
| 3.7026 | 12700 | - | 0.4235 |
| 3.7172 | 12750 | - | 0.4235 |
| 3.7318 | 12800 | - | 0.4235 |
| 3.7464 | 12850 | - | 0.4235 |
| 3.7609 | 12900 | - | 0.4235 |
| 3.7755 | 12950 | - | 0.4235 |
| 3.7901 | 13000 | 0.0 | 0.4235 |
| 3.8047 | 13050 | - | 0.4235 |
| 3.8192 | 13100 | - | 0.4235 |
| 3.8338 | 13150 | - | 0.4235 |
| 3.8484 | 13200 | - | 0.4235 |
| 3.8630 | 13250 | - | 0.4235 |
| 3.8776 | 13300 | - | 0.4235 |
| 3.8921 | 13350 | - | 0.4235 |
| 3.9067 | 13400 | - | 0.4235 |
| 3.9213 | 13450 | - | 0.4235 |
| 3.9359 | 13500 | 0.0 | 0.4235 |
| 3.9504 | 13550 | - | 0.4235 |
| 3.9650 | 13600 | - | 0.4235 |
| 3.9796 | 13650 | - | 0.4235 |
| 3.9942 | 13700 | - | 0.4235 |
| 4.0 | 13720 | - | 0.4235 |
| 4.0087 | 13750 | - | 0.4235 |
| 4.0233 | 13800 | - | 0.4235 |
| 4.0379 | 13850 | - | 0.4235 |
| 4.0525 | 13900 | - | 0.4235 |
| 4.0671 | 13950 | - | 0.4235 |
| 4.0816 | 14000 | 0.0 | 0.4236 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.14.4
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| 1.9679 | 6750 | - | 0.4234 |
| 1.9825 | 6800 | - | 0.4234 |
| 1.9971 | 6850 | - | 0.4234 |
| 2.0 | 6860 | - | 0.4234 |
| 2.0117 | 6900 | - | 0.4234 |
| 2.0262 | 6950 | - | 0.4234 |
| 2.0408 | 7000 | 0.0 | 0.4234 |
| 2.0554 | 7050 | - | 0.4234 |
| 2.0700 | 7100 | - | 0.4234 |
| 2.0845 | 7150 | - | 0.4234 |
| 2.0991 | 7200 | - | 0.4234 |
| 2.1137 | 7250 | - | 0.4234 |
| 2.1283 | 7300 | - | 0.4234 |
| 2.1429 | 7350 | - | 0.4234 |
| 2.1574 | 7400 | - | 0.4234 |
| 2.1720 | 7450 | - | 0.4234 |
| 2.1866 | 7500 | 0.0 | 0.4234 |
| 2.2012 | 7550 | - | 0.4234 |
| 2.2157 | 7600 | - | 0.4234 |
| 2.2303 | 7650 | - | 0.4234 |
| 2.2449 | 7700 | - | 0.4234 |
| 2.2595 | 7750 | - | 0.4234 |
| 2.2741 | 7800 | - | 0.4234 |
| 2.2886 | 7850 | - | 0.4234 |
| 2.3032 | 7900 | - | 0.4234 |
| 2.3178 | 7950 | - | 0.4234 |
| 2.3324 | 8000 | 0.0 | 0.4234 |
| 2.3469 | 8050 | - | 0.4234 |
| 2.3615 | 8100 | - | 0.4234 |
| 2.3761 | 8150 | - | 0.4234 |
| 2.3907 | 8200 | - | 0.4234 |
| 2.4052 | 8250 | - | 0.4234 |
| 2.4198 | 8300 | - | 0.4234 |
| 2.4344 | 8350 | - | 0.4234 |
| 2.4490 | 8400 | - | 0.4234 |
| 2.4636 | 8450 | - | 0.4234 |
| 2.4781 | 8500 | 0.0 | 0.4234 |
| 2.4927 | 8550 | - | 0.4234 |
| 2.5073 | 8600 | - | 0.4234 |
| 2.5219 | 8650 | - | 0.4234 |
| 2.5364 | 8700 | - | 0.4234 |
| 2.5510 | 8750 | - | 0.4234 |
| 2.5656 | 8800 | - | 0.4234 |
| 2.5802 | 8850 | - | 0.4234 |
| 2.5948 | 8900 | - | 0.4234 |
| 2.6093 | 8950 | - | 0.4234 |
| 2.6239 | 9000 | 0.0 | 0.4234 |
| 2.6385 | 9050 | - | 0.4234 |
| 2.6531 | 9100 | - | 0.4234 |
| 2.6676 | 9150 | - | 0.4234 |
| 2.6822 | 9200 | - | 0.4234 |
| 2.6968 | 9250 | - | 0.4234 |
| 2.7114 | 9300 | - | 0.4234 |
| 2.7259 | 9350 | - | 0.4234 |
| 2.7405 | 9400 | - | 0.4234 |
| 2.7551 | 9450 | - | 0.4234 |
| 2.7697 | 9500 | 0.0 | 0.4234 |
| 2.7843 | 9550 | - | 0.4234 |
| 2.7988 | 9600 | - | 0.4234 |
| 2.8134 | 9650 | - | 0.4234 |
| 2.8280 | 9700 | - | 0.4234 |
| 2.8426 | 9750 | - | 0.4234 |
| 2.8571 | 9800 | - | 0.4234 |
| 2.8717 | 9850 | - | 0.4234 |
| 2.8863 | 9900 | - | 0.4234 |
| 2.9009 | 9950 | - | 0.4234 |
| 2.9155 | 10000 | 0.0 | 0.4234 |
| 2.9300 | 10050 | - | 0.4234 |
| 2.9446 | 10100 | - | 0.4234 |
| 2.9592 | 10150 | - | 0.4234 |
| 2.9738 | 10200 | - | 0.4234 |
| 2.9883 | 10250 | - | 0.4234 |
| 3.0 | 10290 | - | 0.4234 |
| 3.0029 | 10300 | - | 0.4234 |
| 3.0175 | 10350 | - | 0.4234 |
| 3.0321 | 10400 | - | 0.4234 |
| 3.0466 | 10450 | - | 0.4234 |
| 3.0612 | 10500 | 0.0 | 0.4234 |
| 3.0758 | 10550 | - | 0.4234 |
| 3.0904 | 10600 | - | 0.4234 |
| 3.1050 | 10650 | - | 0.4234 |
| 3.1195 | 10700 | - | 0.4234 |
| 3.1341 | 10750 | - | 0.4234 |
| 3.1487 | 10800 | - | 0.4234 |
| 3.1633 | 10850 | - | 0.4234 |
| 3.1778 | 10900 | - | 0.4234 |
| 3.1924 | 10950 | - | 0.4234 |
| 3.2070 | 11000 | 0.0 | 0.4234 |
| 3.2216 | 11050 | - | 0.4234 |
| 3.2362 | 11100 | - | 0.4234 |
| 3.2507 | 11150 | - | 0.4234 |
| 3.2653 | 11200 | - | 0.4234 |
| 3.2799 | 11250 | - | 0.4234 |
| 3.2945 | 11300 | - | 0.4234 |
| 3.3090 | 11350 | - | 0.4234 |
| 3.3236 | 11400 | - | 0.4234 |
| 3.3382 | 11450 | - | 0.4234 |
| 3.3528 | 11500 | 0.0 | 0.4234 |
| 3.3673 | 11550 | - | 0.4234 |
| 3.3819 | 11600 | - | 0.4234 |
| 3.3965 | 11650 | - | 0.4234 |
| 3.4111 | 11700 | - | 0.4234 |
| 3.4257 | 11750 | - | 0.4234 |
| 3.4402 | 11800 | - | 0.4234 |
| 3.4548 | 11850 | - | 0.4235 |
| 3.4694 | 11900 | - | 0.4235 |
| 3.4840 | 11950 | - | 0.4235 |
| 3.4985 | 12000 | 0.0 | 0.4235 |
| 3.5131 | 12050 | - | 0.4235 |
| 3.5277 | 12100 | - | 0.4235 |
| 3.5423 | 12150 | - | 0.4235 |
| 3.5569 | 12200 | - | 0.4235 |
| 3.5714 | 12250 | - | 0.4235 |
| 3.5860 | 12300 | - | 0.4235 |
| 3.6006 | 12350 | - | 0.4235 |
| 3.6152 | 12400 | - | 0.4235 |
| 3.6297 | 12450 | - | 0.4235 |
| 3.6443 | 12500 | 0.0 | 0.4235 |
| 3.6589 | 12550 | - | 0.4235 |
| 3.6735 | 12600 | - | 0.4235 |
| 3.6880 | 12650 | - | 0.4235 |
| 3.7026 | 12700 | - | 0.4235 |
| 3.7172 | 12750 | - | 0.4235 |
| 3.7318 | 12800 | - | 0.4235 |
| 3.7464 | 12850 | - | 0.4235 |
| 3.7609 | 12900 | - | 0.4235 |
| 3.7755 | 12950 | - | 0.4235 |
| 3.7901 | 13000 | 0.0 | 0.4235 |
| 3.8047 | 13050 | - | 0.4235 |
| 3.8192 | 13100 | - | 0.4235 |
| 3.8338 | 13150 | - | 0.4235 |
| 3.8484 | 13200 | - | 0.4235 |
| 3.8630 | 13250 | - | 0.4235 |
| 3.8776 | 13300 | - | 0.4235 |
| 3.8921 | 13350 | - | 0.4235 |
| 3.9067 | 13400 | - | 0.4235 |
| 3.9213 | 13450 | - | 0.4235 |
| 3.9359 | 13500 | 0.0 | 0.4235 |
| 3.9504 | 13550 | - | 0.4235 |
| 3.9650 | 13600 | - | 0.4235 |
| 3.9796 | 13650 | - | 0.4235 |
| 3.9942 | 13700 | - | 0.4235 |
| 4.0 | 13720 | - | 0.4235 |
| 4.0087 | 13750 | - | 0.4235 |
| 4.0233 | 13800 | - | 0.4235 |
| 4.0379 | 13850 | - | 0.4235 |
| 4.0525 | 13900 | - | 0.4235 |
| 4.0671 | 13950 | - | 0.4235 |
| 4.0816 | 14000 | 0.0 | 0.4236 |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 2.14.4
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "Snowflake/snowflake-arctic-embed-l", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@5", "cosine_recall@10", "cosine_ndcg@10", "cosine_mrr@10", "cosine_map@100", "dot_accuracy@1", "dot_accuracy@3", "dot_accuracy@5", "dot_accuracy@10", "dot_precision@1", "dot_precision@3", "dot_precision@5", "dot_precision@10", "dot_recall@1", "dot_recall@3", "dot_recall@5", "dot_recall@10", "dot_ndcg@10", "dot_mrr@10", "dot_map@100"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:3430", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "What are some illustrative cases that show the implementation of the AI Bill of Rights?", "sentences": ["SECTION TITLE\nAPPENDIX\nListening to the American People \nThe White House Office of Science and Technology Policy (OSTP) led a yearlong process to seek and distill \ninput from people across the country – from impacted communities to industry stakeholders to \ntechnology developers to other experts across fields and sectors, as well as policymakers across the Federal \ngovernment – on the issue of algorithmic and data-driven harms and potential remedies. Through panel \ndiscussions, public listening sessions, private meetings, a formal request for information, and input to a \npublicly accessible and widely-publicized email address, people across the United States spoke up about \nboth the promises and potential harms of these technologies, and played a central role in shaping the \nBlueprint for an AI Bill of Rights. \nPanel Discussions to Inform the Blueprint for An AI Bill of Rights \nOSTP co-hosted a series of six panel discussions in collaboration with the Center for American Progress,", "existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\ntial risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. Assessment and, where possible, mea\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention", "confidence that their rights, opportunities, and access as well as their expectations about technologies are respected. \n3\nHOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: \nThis section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. \nIt describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. 
\nThe examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help \nprovide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these \nprocesses require the cooperation of and collaboration among industry, civil society, researchers, policymakers, \ntechnologists, and the public. \n14"]}, {"source_sentence": "What are the potential impacts of automated systems on data privacy?", "sentences": ["https://arxiv.org/pdf/2305.17493v2 \nSmith, A. et al. (2023) Hallucination or Confabulation? Neuroanatomy as metaphor in Large Language \nModels. PLOS Digital Health. \nhttps://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000388 \nSoice, E. et al. (2023) Can large language models democratize access to dual-use biotechnology? arXiv. \nhttps://arxiv.org/abs/2306.03809 \nSolaiman, I. et al. (2023) The Gradient of Generative AI Release: Methods and Considerations. arXiv. \nhttps://arxiv.org/abs/2302.04844 \nStaab, R. et al. (2023) Beyond Memorization: Violating Privacy via Inference With Large Language \nModels. arXiv. https://arxiv.org/pdf/2310.07298 \nStanford, S. et al. (2023) Whose Opinions Do Language Models Reflect? arXiv. \nhttps://arxiv.org/pdf/2303.17548 \nStrubell, E. et al. (2019) Energy and Policy Considerations for Deep Learning in NLP. arXiv. \nhttps://arxiv.org/pdf/1906.02243 \nThe White House (2016) Circular No. A-130, Managing Information as a Strategic Resource.", "and data that are considered sensitive are understood to change over time based on societal norms and context. \n36", "yet foreseeable, uses or impacts of automated systems. You should be \nprotected from inappropriate or irrelevant data use in the design, de\nvelopment, and deployment of automated systems, and from the \ncompounded harm of its reuse. Independent evaluation and report\ning that confirms that the system is safe and effective, including re\nporting of steps taken to mitigate potential harms, should be per\nformed and the results made public whenever possible. \n15"]}, {"source_sentence": "What is the AI Bill of Rights?", "sentences": ["BLUEPRINT FOR AN \nAI BILL OF \nRIGHTS \nMAKING AUTOMATED \nSYSTEMS WORK FOR \nTHE AMERICAN PEOPLE \nOCTOBER 2022", "APPENDIX\n•\nJulia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance\n•\nDr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center\n•\nJ. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and\nUWA Law School\nPanelists separately described the increasing scope of technology use in providing for social welfare, including \nin fraud detection, digital ID systems, and other methods focused on improving efficiency and reducing cost. \nHowever, various panelists individually cautioned that these systems may reduce burden for government \nagencies by increasing the burden and agency of people using and interacting with these technologies. \nAdditionally, these systems can produce feedback loops and compounded harm, collecting data from \ncommunities and using it to reinforce inequality. Various panelists suggested that these harms could be \nmitigated by ensuring community input at the beginning of the design process, providing ways to opt out of", "safe, secure, and resilient; (e) understandable; (f ) responsible and traceable; (g) regularly monitored; (h) transpar-\nent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. 
\nAffected agencies across the federal government have released AI use case inventories13 and are implementing \nplans to bring those AI systems into compliance with the Executive Order or retire them. \nThe law and policy landscape for motor vehicles shows that strong safety regulations—and \nmeasures to address harms when they occur—can enhance innovation in the context of com-\nplex technologies. Cars, like automated digital systems, comprise a complex collection of components. \nThe National Highway Traffic Safety Administration,14 through its rigorous standards and independent \nevaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to \ninnovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate"]}, {"source_sentence": "What are the best practices for benchmarking AI system security and resilience?", "sentences": ["NOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and \nexplanations as to how and why a decision was made or an action was taken by the system. These expectations are \nexplained below. \nProvide clear, timely, understandable, and accessible notice of use and explanations \nGenerally accessible plain language documentation. The entity responsible for using the automated \nsystem should ensure that documentation describing the overall system (including any human components) is \npublic and easy to find. The documentation should describe, in plain language, how the system works and how", "content performance and impact, and work in collaboration with AI Actors \nexperienced in user research and experience. \nHuman-AI Configuration \nMG-4.1-004 Implement active learning techniques to identify instances where the model fails \nor produces unexpected outputs. \nConfabulation \nMG-4.1-005 \nShare transparency reports with internal and external stakeholders that detail \nsteps taken to update the GAI system to enhance transparency and \naccountability. \nHuman-AI Configuration; Harmful \nBias and Homogenization \nMG-4.1-006 \nTrack dataset modifications for provenance by monitoring data deletions, \nrectification requests, and other changes that may impact the verifiability of \ncontent origins. \nInformation Integrity", "33 \nMEASURE 2.7: AI system security and resilience – as identified in the MAP function – are evaluated and documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-2.7-001 \nApply established security measures to: Assess likelihood and magnitude of \nvulnerabilities and threats such as backdoors, compromised dependencies, data \nbreaches, eavesdropping, man-in-the-middle attacks, reverse engineering, \nautonomous agents, model theft or exposure of model weights, AI inference, \nbypass, extraction, and other baseline security concerns. \nData Privacy; Information Integrity; \nInformation Security; Value Chain \nand Component Integration \nMS-2.7-002 \nBenchmark GAI system security and resilience related to content provenance \nagainst industry standards and best practices. Compare GAI system security \nfeatures and content provenance methods against industry state-of-the-art. 
\nInformation Integrity; Information \nSecurity \nMS-2.7-003 \nConduct user surveys to gather user satisfaction with the AI-generated content"]}, {"source_sentence": "How should risks or trustworthiness characteristics that cannot be measured be documented?", "sentences": ["MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for \nimplementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be \nmeasured are properly documented. \nAction ID \nSuggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate tools designed to analyze content provenance and detect data \nanomalies, verify the authenticity of digital signatures, and identify patterns \nassociated with misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate evaluation metrics by demographic factors to identify any \ndiscrepancies in how content provenance mechanisms work across diverse \npopulations. \nInformation Integrity; Harmful \nBias and Homogenization \nMS-1.1-004 Develop a suite of metrics to evaluate structured public feedback exercises", "AI technology can produce varied outputs in multiple modalities and present many classes of user \ninterfaces. This leads to a broader set of AI Actors interacting with GAI systems for widely differing \napplications and contexts of use. These can include data labeling and preparation, development of GAI \nmodels, content moderation, code generation and review, text generation and editing, image and video \ngeneration, summarization, search, and chat. These activities can take place within organizational \nsettings or in the public domain. \nOrganizations can restrict AI applications that cause harm, exceed stated risk tolerances, or that conflict \nwith their tolerances or values. Governance tools and protocols that are applied to other types of AI \nsystems can be applied to GAI systems. These plans and actions include: \n• Accessibility and reasonable \naccommodations \n• AI actor credentials and qualifications \n• Alignment to organizational values \n• Auditing and assessment \n• Change-management controls", "existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, \nand as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing \nshould include the possibility of not deploying the system. \nRisk identification and mitigation. Before deployment, and in a proactive and ongoing manner, poten\ntial risks of the automated system should be identified and mitigated. Identified risks should focus on the \npotential for meaningful impact on people’s rights, opportunities, or access and include those to impacted \ncommunities that may not be direct users of the automated system, risks resulting from purposeful misuse of \nthe system, and other concerns identified via the consultation process. 
Assessment and, where possible, mea\nsurement of the impact of risks should be included and balanced such that high impact risks receive attention"]}], "model-index": [{"name": "SentenceTransformer based on Snowflake/snowflake-arctic-embed-l", "results": [{"task": {"type": "information-retrieval", "name": "Information Retrieval"}, "dataset": {"name": "Unknown", "type": "unknown"}, "metrics": [{"type": "cosine_accuracy@1", "value": 0.2807017543859649, "name": "Cosine Accuracy@1"}, {"type": "cosine_accuracy@3", "value": 0.4649122807017544, "name": "Cosine Accuracy@3"}, {"type": "cosine_accuracy@5", "value": 0.5350877192982456, "name": "Cosine Accuracy@5"}, {"type": "cosine_accuracy@10", "value": 0.7192982456140351, "name": "Cosine Accuracy@10"}, {"type": "cosine_precision@1", "value": 0.2807017543859649, "name": "Cosine Precision@1"}, {"type": "cosine_precision@3", "value": 0.15497076023391812, "name": "Cosine Precision@3"}, {"type": "cosine_precision@5", "value": 0.10701754385964912, "name": "Cosine Precision@5"}, {"type": "cosine_precision@10", "value": 0.0719298245614035, "name": "Cosine Precision@10"}, {"type": "cosine_recall@1", "value": 0.2807017543859649, "name": "Cosine Recall@1"}, {"type": "cosine_recall@3", "value": 0.4649122807017544, "name": "Cosine Recall@3"}, {"type": "cosine_recall@5", "value": 0.5350877192982456, "name": "Cosine Recall@5"}, {"type": "cosine_recall@10", "value": 0.7192982456140351, "name": "Cosine Recall@10"}, {"type": "cosine_ndcg@10", "value": 0.4797086283187805, "name": "Cosine Ndcg@10"}, {"type": "cosine_mrr@10", "value": 0.40644667223614606, "name": "Cosine Mrr@10"}, {"type": "cosine_map@100", "value": 0.423567506926962, "name": "Cosine Map@100"}, {"type": "dot_accuracy@1", "value": 0.2807017543859649, "name": "Dot Accuracy@1"}, {"type": "dot_accuracy@3", "value": 0.4649122807017544, "name": "Dot Accuracy@3"}, {"type": "dot_accuracy@5", "value": 0.5350877192982456, "name": "Dot Accuracy@5"}, {"type": "dot_accuracy@10", "value": 0.7192982456140351, "name": "Dot Accuracy@10"}, {"type": "dot_precision@1", "value": 0.2807017543859649, "name": "Dot Precision@1"}, {"type": "dot_precision@3", "value": 0.15497076023391812, "name": "Dot Precision@3"}, {"type": "dot_precision@5", "value": 0.10701754385964912, "name": "Dot Precision@5"}, {"type": "dot_precision@10", "value": 0.0719298245614035, "name": "Dot Precision@10"}, {"type": "dot_recall@1", "value": 0.2807017543859649, "name": "Dot Recall@1"}, {"type": "dot_recall@3", "value": 0.4649122807017544, "name": "Dot Recall@3"}, {"type": "dot_recall@5", "value": 0.5350877192982456, "name": "Dot Recall@5"}, {"type": "dot_recall@10", "value": 0.7192982456140351, "name": "Dot Recall@10"}, {"type": "dot_ndcg@10", "value": 0.4797086283187805, "name": "Dot Ndcg@10"}, {"type": "dot_mrr@10", "value": 0.40644667223614606, "name": "Dot Mrr@10"}, {"type": "dot_map@100", "value": 0.423567506926962, "name": "Dot Map@100"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | 45,063 |
Triangle104/Athena-1-7B-Q5_K_S-GGUF
|
Triangle104
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Spestly/Athena-1-7B",
"base_model:quantized:Spestly/Athena-1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-12-26T13:06:53Z |
2024-12-26T13:08:00+00:00
| 1 | 0 |
---
base_model: Spestly/Athena-1-7B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
---
# Triangle104/Athena-1-7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`Spestly/Athena-1-7B`](https://huggingface.co/Spestly/Athena-1-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-1-7B) for more details on the model.
---
Model details:
-
Athena-1 is a fine-tuned, instruction-following large language model derived from Qwen/Qwen2.5-7B-Instruct.
Designed to balance efficiency and performance, Athena 7B provides
powerful text-generation capabilities, making it suitable for a variety
of real-world applications, including conversational AI, content
creation, and structured data processing.
Key Features
🚀 Enhanced Performance
Instruction Following: Fine-tuned for excellent adherence to user prompts and instructions.
Coding and Mathematics: Proficient in solving coding problems and mathematical reasoning.
Lightweight: At 7.62 billion parameters, Athena-1-7B offers powerful performance while maintaining efficiency.
📖 Long-Context Understanding
Context Length: Supports up to 128K tokens, ensuring accurate handling of large documents or conversations.
Token Generation: Can generate up to 8K tokens of output.
🌍 Multilingual Support
Supports 29+ languages, including:
English, Chinese, French, Spanish, Portuguese, German, Italian, Russian
Japanese, Korean, Vietnamese, Thai, Arabic, and more.
📊 Structured Data & Outputs
Structured Data Interpretation: Understands and processes structured formats like tables and JSON.
Structured Output Generation: Generates well-formatted outputs, including JSON and other structured formats.
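As an illustration of the structured-output capability above, here is a minimal hedged sketch (the extraction prompt and field names are assumptions for demonstration, not documented behavior of the model):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Spestly/Athena-1-7B")
# Hypothetical prompt asking the model to reply with a JSON object.
messages = [{
    "role": "user",
    "content": 'Return only JSON with keys "name" and "city": "Anna lives in Oslo."',
}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```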
Model Details
Base Model: Qwen/Qwen2.5-7B-Instruct
Architecture: Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
Parameters: 7.62B total (6.53B non-embedding).
Layers: 28
Attention Heads: 28 for Q, 4 for KV.
Context Length: Up to 131,072 tokens.
Applications
Athena-1 is designed for a broad range of use cases:
Conversational AI: Create natural, human-like chatbot experiences.
Code Generation: Generate, debug, or explain code snippets.
Mathematical Problem Solving: Assist with complex calculations and reasoning.
Document Processing: Summarize or analyze large documents.
Multilingual Applications: Support for diverse languages for translation and global use cases.
Structured Data: Process and generate structured data, including tables and JSON.
Quickstart
Here’s how you can use Athena 7B for quick text generation:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Spestly/Athena-1-7B")
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-7B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Athena-1-7B")
```
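Having loaded the model directly, a generation step can be sketched as follows (hedged: the sampling values are illustrative, not tuned settings from the model authors):
```python
# Format the chat with the tokenizer's template, generate, and decode the reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```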
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/Athena-1-7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`Spestly/Athena-1-7B`](https://huggingface.co/Spestly/Athena-1-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-1-7B) for more details on the model.
---
Model details:
-
Athena-1 is a fine-tuned, instruction-following large language model derived from Qwen/Qwen2.5-7B-Instruct.
Designed to balance efficiency and performance, Athena 7B provides
powerful text-generation capabilities, making it suitable for a variety
of real-world applications, including conversational AI, content
creation, and structured data processing.
Key Features
🚀 Enhanced Performance
Instruction Following: Fine-tuned for excellent adherence to user prompts and instructions.
Coding and Mathematics: Proficient in solving coding problems and mathematical reasoning.
Lightweight: At 7.62 billion parameters, Athena-1-7B offers powerful performance while maintaining efficiency.
📖 Long-Context Understanding
Context Length: Supports up to 128K tokens, ensuring accurate handling of large documents or conversations.
Token Generation: Can generate up to 8K tokens of output.
🌍 Multilingual Support
Supports 29+ languages, including:
English, Chinese, French, Spanish, Portuguese, German, Italian, Russian
Japanese, Korean, Vietnamese, Thai, Arabic, and more.
📊 Structured Data & Outputs
Structured Data Interpretation: Understands and processes structured formats like tables and JSON.
Structured Output Generation: Generates well-formatted outputs, including JSON and other structured formats.
Model Details
Base Model: Qwen/Qwen2.5-7B-Instruct
Architecture: Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias.
Parameters: 7.62B total (6.53B non-embedding).
Layers: 28
Attention Heads: 28 for Q, 4 for KV.
Context Length: Up to 131,072 tokens.
Applications
Athena-1 is designed for a broad range of use cases:
Conversational AI: Create natural, human-like chatbot experiences.
Code Generation: Generate, debug, or explain code snippets.
Mathematical Problem Solving: Assist with complex calculations and reasoning.
Document Processing: Summarize or analyze large documents.
Multilingual Applications: Support for diverse languages for translation and global use cases.
Structured Data: Process and generate structured data, including tables and JSON.
Quickstart
Here’s how you can use Athena 7B for quick text generation:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Spestly/Athena-1-7B")
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Spestly/Athena-1-7B")
model = AutoModelForCausalLM.from_pretrained("Spestly/Athena-1-7B")
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Athena-1-7B-Q5_K_S-GGUF --hf-file athena-1-7b-q5_k_s.gguf -c 2048
```
|
{"base_model": "Spestly/Athena-1-7B", "language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "qwen2", "trl", "llama-cpp", "gguf-my-repo"]}
|
task
|
[
"TRANSLATION"
] | 45,064 |
ksaml/marian-finetuned-kde4-cs2sv
|
ksaml
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-03T17:25:49Z |
2023-03-04T10:13:28+00:00
| 19 | 0 |
---
datasets:
- kde4
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-cs2sv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-cs2sv
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-cs-sv](https://huggingface.co/Helsinki-NLP/opus-mt-cs-sv) on the kde4 dataset.
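A minimal inference sketch (hedged: the example sentence is an arbitrary Czech string chosen only for illustration):
```python
from transformers import pipeline

# Czech -> Swedish translation with this fine-tuned checkpoint.
translator = pipeline("translation", model="ksaml/marian-finetuned-kde4-cs2sv")
print(translator("Soubor nelze uložit.")[0]["translation_text"])
```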
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
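Expressed as code, these settings correspond roughly to the following `Seq2SeqTrainingArguments` sketch (hedged: `output_dir` is an assumption, and the Adam betas and epsilon shown above are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-kde4-cs2sv",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # native AMP mixed precision
)
```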
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-cs2sv
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-cs-sv](https://huggingface.co/Helsinki-NLP/opus-mt-cs-sv) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
{"datasets": ["kde4"], "license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-cs2sv", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,065 |
davidkim205/komt-mistral-7b-v1-dpo
|
davidkim205
|
text-generation
|
[
"peft",
"safetensors",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"llama-2-chat",
"text-generation",
"en",
"ko",
"arxiv:2308.06502",
"arxiv:2308.06259",
"region:us"
] | 2023-11-29T09:21:52Z |
2023-11-29T09:45:13+00:00
| 12 | 8 |
---
language:
- en
- ko
library_name: peft
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-2-chat
inference: false
---
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : The komt-mistral-7b-v1-dpo is a fine-tuned version of komt-mistral-7b-v1 (original model: Mistral-7B-Instruct-v0.1).
## Dataset
* maywell/ko_Ultrafeedback_binarized
- https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer to https://github.com/davidkim205/komt
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]</s>
```
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
from transformers import TextStreamer, GenerationConfig
model='davidkim205/komt-mistral-7b-v1'
peft_model_name = 'davidkim205/komt-mistral-7b-v1-dpo'
config = PeftConfig.from_pretrained(peft_model_name)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
config.base_model_name_or_path =model
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_name)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
streamer = TextStreamer(tokenizer)
def gen(x):
generation_config = GenerationConfig(
temperature=0.8,
top_p=0.8,
top_k=100,
max_new_tokens=1024,
early_stopping=True,
do_sample=True,
)
q = f"[INST]{x} [/INST]"
gened = model.generate(
**tokenizer(
q,
return_tensors='pt',
return_token_type_ids=False
).to('cuda'),
generation_config=generation_config,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
result_str = tokenizer.decode(gened[0])
start_tag = f"[/INST]"
start_index = result_str.find(start_tag)
if start_index != -1:
result_str = result_str[start_index + len(start_tag):].strip()
return result_str
result = gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘')
print('##########')
print(result)
```
output
```
제주도 1박2일 1인 여행 코스
제주도는 한국에서 가장 먼 섬인 동남아시아 최대 섬으로, 멋진 해변, 아름다운 자연경관, 절경 납땜 절벽, 한국 최대 규모의 복합리조트 등 다양한 관광 명소가 풍부하게 있어 1박2일로 혼자 여행하시는 여러분들을 위해 아래와 같은 코스를 제안해 드리겠습니다.
▷ 코스 1 : 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람
- 코스 설명 : 제주 동남쪽 해안의 명소인 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람 순으로 구성된 코스입니다. 아침에 일찍 일어나 일출봉에 도착하여 일출을 감상하고, 아침 식사를 하고 절벽 등반을 즐기며 휴식을 취합니다. 오후에는 일출봉 야간 경관 관람을 즐기며 휴식과 휴식을 취합니다.
▷ 코스 2 : 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠
- 코스 설명 : 제주 남부의 명소인 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠 순으로 구성된 코스입니다. 아침에 일찍 일어나 한라산 케이블카를 타고 높은 고지에 위치한 한라산 정상으로 올라가서 탐험을 즐기며 아침 식사를 합니다. 오후에는 오미자 바위를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
▷ 코스 3 : 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길
- 코스 설명 : 제주 서부의 명소인 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길 순으로 구성된 코스입니다. 아침에 일찍 일어나 대하늘길에서 탐험을 즐기며 아침 식사를 합니다. 오후에는 삼거리를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
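As a rough picture of that evaluation loop, here is a hedged sketch of LLM-as-judge scoring (the judge prompt wording and the 0–5 rubric mapping are assumptions based on the cited papers, not the authors' exact script); the resulting scores are summarized in the table below.
```python
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer: str) -> str:
    # Ask GPT-3.5 to grade an answer on the 0-5 scale used in the results table.
    prompt = (
        "Rate the following answer from 0 (worst) to 5 (best). Reply with a number.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```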
| model | score | average(0~5) | percentage |
|------------------------------------------|---------| ------------ |------------|
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
| **komt-mistral-7b-v1-dpo (open)(ours)** | **142** | **3.83** | **76.75%** |
| null |
Non_BioNLP
|
# komt : korean multi task instruction tuning model

Recently, due to the success of ChatGPT, numerous large language models have emerged in an attempt to catch up with ChatGPT's capabilities.
However, when it comes to Korean language performance, it has been observed that many models still struggle to provide accurate answers or generate Korean text effectively.
This study addresses these challenges by introducing a multi-task instruction technique that leverages supervised datasets from various tasks to create training data for Large Language Models (LLMs).
## Model Details
* **Model Developers** : davidkim(changyeon kim)
* **Repository** : https://github.com/davidkim205/komt
* **Model Architecture** : The komt-mistral-7b-v1-dpo is a fine-tuned version of komt-mistral-7b-v1 (original model: Mistral-7B-Instruct-v0.1).
## Dataset
* maywell/ko_Ultrafeedback_binarized
- https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized
## Hardware and Software
- nvidia driver : 535.54.03
- CUDA Version: 12.2
## Training
Refer to https://github.com/davidkim205/komt
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]</s>
```
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel, PeftConfig
from transformers import TextStreamer, GenerationConfig
model='davidkim205/komt-mistral-7b-v1'
peft_model_name = 'davidkim205/komt-mistral-7b-v1-dpo'
config = PeftConfig.from_pretrained(peft_model_name)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
config.base_model_name_or_path =model
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model_name)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
streamer = TextStreamer(tokenizer)
def gen(x):
generation_config = GenerationConfig(
temperature=0.8,
top_p=0.8,
top_k=100,
max_new_tokens=1024,
early_stopping=True,
do_sample=True,
)
q = f"[INST]{x} [/INST]"
gened = model.generate(
**tokenizer(
q,
return_tensors='pt',
return_token_type_ids=False
).to('cuda'),
generation_config=generation_config,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
result_str = tokenizer.decode(gened[0])
start_tag = f"[/INST]"
start_index = result_str.find(start_tag)
if start_index != -1:
result_str = result_str[start_index + len(start_tag):].strip()
return result_str
result = gen('제주도를 1박2일로 혼자 여행하려고 하는데 여행 코스를 만들어줘')
print('##########')
print(result)
```
output
```
제주도 1박2일 1인 여행 코스
제주도는 한국에서 가장 먼 섬인 동남아시아 최대 섬으로, 멋진 해변, 아름다운 자연경관, 절경 납땜 절벽, 한국 최대 규모의 복합리조트 등 다양한 관광 명소가 풍부하게 있어 1박2일로 혼자 여행하시는 여러분들을 위해 아래와 같은 코스를 제안해 드리겠습니다.
▷ 코스 1 : 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람
- 코스 설명 : 제주 동남쪽 해안의 명소인 성산일출봉, 용눈이절벽, 성산일출봉 야간 경관 관람 순으로 구성된 코스입니다. 아침에 일찍 일어나 일출봉에 도착하여 일출을 감상하고, 아침 식사를 하고 절벽 등반을 즐기며 휴식을 취합니다. 오후에는 일출봉 야간 경관 관람을 즐기며 휴식과 휴식을 취합니다.
▷ 코스 2 : 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠
- 코스 설명 : 제주 남부의 명소인 한라산, 한라산 케이블카, 오미자 바위, 신라 이젠 순으로 구성된 코스입니다. 아침에 일찍 일어나 한라산 케이블카를 타고 높은 고지에 위치한 한라산 정상으로 올라가서 탐험을 즐기며 아침 식사를 합니다. 오후에는 오미자 바위를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
▷ 코스 3 : 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길
- 코스 설명 : 제주 서부의 명소인 대하늘길, 삼거리, 곰돌라비, 칠동굴, 광안절, 칠금절, 해넘이길, 바다지상 길 순으로 구성된 코스입니다. 아침에 일찍 일어나 대하늘길에서 탐험을 즐기며 아침 식사를 합니다. 오후에는 삼거리를 찾아 휴식과 휴식을 취하고, 일출봉 야간 경관 관람을 즐기며 휴식을 취합니다.
```
## Evaluation
For objective model evaluation, we initially used EleutherAI's lm-evaluation-harness but obtained unsatisfactory results. Consequently, we conducted evaluations using ChatGPT, a widely used model, as described in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06502.pdf) and [Three Ways of Using Large Language Models to Evaluate Chat](https://arxiv.org/pdf/2308.06259.pdf) .
| model | score | average(0~5) | percentage |
|------------------------------------------|---------| ------------ |------------|
| gpt-3.5-turbo(close) | 147 | 3.97 | 79.45% |
| naver Cue(close) | 140 | 3.78 | 75.67% |
| clova X(close) | 136 | 3.67 | 73.51% |
| WizardLM-13B-V1.2(open) | 96 | 2.59 | 51.89% |
| Llama-2-7b-chat-hf(open) | 67 | 1.81 | 36.21% |
| Llama-2-13b-chat-hf(open) | 73 | 1.91 | 38.37% |
| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70 | 1.89 | 37.83% |
| kfkas/Llama-2-ko-7b-Chat(open) | 96 | 2.59 | 51.89% |
| beomi/KoAlpaca-Polyglot-12.8B(open) | 100 | 2.70 | 54.05% |
| **komt-llama2-7b-v1 (open)(ours)** | **117** | **3.16** | **63.24%** |
| **komt-llama2-13b-v1 (open)(ours)** | **129** | **3.48** | **69.72%** |
| **komt-llama-30b-v1 (open)(ours)** | **129** | **3.16** | **63.24%** |
| **komt-mistral-7b-v1 (open)(ours)** | **131** | **3.54** | **70.81%** |
| **komt-mistral-7b-v1-dpo (open)(ours)** | **142** | **3.83** | **76.75%** |
|
{"language": ["en", "ko"], "library_name": "peft", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-2", "llama-2-chat"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 45,066 |
openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
|
openthaigpt
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"openthaigpt",
"th",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-18T13:10:35Z |
2023-12-19T15:26:24+00:00
| 2,073 | 2 |
---
language:
- th
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- openthaigpt
- llama
---
# 🇹🇭 OpenThaiGPT 13b 1.0.0-beta Chat with 16 bits in Huggingface's format.
<a href="https://openthaigpt.aieat.or.th/"><img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce" width="200px"></a>
🇹🇭 OpenThaiGPT 13b Version 1.0.0-beta is a 13B-parameter Thai-language LLaMA v2 Chat model, finetuned to follow Thai-translated instructions, with more than 10,000 of the most popular Thai words added to the LLM's vocabulary for faster tokenization and generation.
## Licenses
**Source Code**: Apache Software License 2.0.<br>
**Weight**: Research and **Commercial uses**.<br>
## Codes and Weight
**Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
**Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
**Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
## Sponsors
<img src="https://hf.fast360.xyz/production/uploads/5fcd9c426d942eaf4d1ebd30/42d-GioSs4evIdNuMAaPB.png" width="600px">
## Supports
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## Description
Prompt format is Llama2
```
<s>[INST] <<SYS>>
system_prompt
<</SYS>>
question [/INST]
```
System prompt:
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
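In code, assembling that prompt looks roughly like this (hedged sketch; the helper name is illustrative):
```python
SYSTEM_PROMPT = (
    "You are a question answering assistant. Answer the question as truthful "
    "and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด"
)

def build_prompt(question: str) -> str:
    # Llama2 chat format: system block in <<SYS>>, user question inside [INST].
    return f"<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{question} [/INST]"
```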
## How to use
1. Install vLLM (https://github.com/vllm-project/vllm)
2. Start the API server: `python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus`
3. Run inference (cURL example):
```
curl --request POST \
--url http://localhost:8000/generate \
--header "Content-Type: application/json" \
--data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]","use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3 "stop": "</s>"}'
```
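The same request from Python (hedged sketch using the `requests` library and reusing `build_prompt` from the sketch above; the parameters mirror the cURL example):
```python
import requests

payload = {
    "prompt": build_prompt("อยากลดความอ้วนต้องทำอย่างไร"),
    "use_beam_search": False,
    "temperature": 0.1,
    "max_tokens": 512,
    "top_p": 0.75,
    "top_k": 40,
    "frequency_penalty": 0.3,
    "stop": "</s>",
}
resp = requests.post("http://localhost:8000/generate", json=payload)
print(resp.json())
```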
### Authors
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
<i>Disclaimer: Provided responses are not guaranteed.</i>
| null |
Non_BioNLP
|
# 🇹🇭 OpenThaiGPT 13b 1.0.0-beta Chat with 16 bits in Huggingface's format.
<a href="https://openthaigpt.aieat.or.th/"><img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce" width="200px"></a>
🇹🇭 OpenThaiGPT 13b Version 1.0.0-beta is a 13B-parameter Thai-language LLaMA v2 Chat model, finetuned to follow Thai-translated instructions, with more than 10,000 of the most popular Thai words added to the LLM's vocabulary for faster tokenization and generation.
## Licenses
**Source Code**: Apache Software License 2.0.<br>
**Weight**: Research and **Commercial uses**.<br>
## Codes and Weight
**Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br>
**Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br>
**Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf
## Sponsors
<img src="https://hf.fast360.xyz/production/uploads/5fcd9c426d942eaf4d1ebd30/42d-GioSs4evIdNuMAaPB.png" width="600px">
## Supports
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
## Description
Prompt format is Llama2
```
<s>[INST] <<SYS>>
system_prompt
<</SYS>>
question [/INST]
```
System prompt:
You are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด
## How to use
1. Install vLLM (https://github.com/vllm-project/vllm)
2. Start the API server: `python -m vllm.entrypoints.api_server --model /path/to/model --tensor-parallel-size num_gpus`
3. Run inference (cURL example):
```
curl --request POST \
--url http://localhost:8000/generate \
--header "Content-Type: application/json" \
--data '{"prompt": "<s>[INST] <<SYS>>\nYou are a question answering assistant. Answer the question as truthful and helpful as possible คุณคือผู้ช่วยตอบคำถาม จงตอบคำถามอย่างถูกต้องและมีประโยชน์ที่สุด\n<</SYS>>\n\nอยากลดความอ้วนต้องทำอย่างไร [/INST]","use_beam_search": false, "temperature": 0.1, "max_tokens": 512, "top_p": 0.75, "top_k": 40, "frequency_penalty": 0.3, "stop": "</s>"}'
```
### Authors
* Kobkrit Viriyayudhakorn ([email protected])
* Sumeth Yuenyong ([email protected])
* Thaweewat Rugsujarit ([email protected])
* Jillaphat Jaroenkantasima ([email protected])
* Norapat Buppodom ([email protected])
* Koravich Sangkaew ([email protected])
* Peerawat Rojratchadakorn ([email protected])
* Surapon Nonesung ([email protected])
* Chanon Utupon ([email protected])
* Sadhis Wongprayoon ([email protected])
* Nucharee Thongthungwong ([email protected])
* Chawakorn Phiantham ([email protected])
* Patteera Triamamornwooth ([email protected])
* Nattarika Juntarapaoraya ([email protected])
* Kriangkrai Saetan ([email protected])
* Pitikorn Khlaisamniang ([email protected])
<i>Disclaimer: Provided responses are not guaranteed.</i>
|
{"language": ["th", "en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["openthaigpt", "llama"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,067 |
WhiteAngelss/entity-sentiment-analysis
|
WhiteAngelss
|
token-classification
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-31T13:48:44Z |
2024-08-05T17:25:06+00:00
| 10 | 0 |
---
license: mit
language: tr
tags:
- entity-sentiment-analysis
- text-classification
- sentiment-analysis
datasets:
- ctoraman/atis-ner-turkish
- akdeniz27/bert-base-turkish-cased-ner
metrics:
- accuracy
- f1
model-index:
- name: WhiteAngelss/entity-sentiment-analysis
results:
- task:
type: token-classification
name: Varlık Tanıma (Named Entity Recognition)
dataset:
name: ctoraman/atis-ner-turkish
metrics:
- name: F1
type: f1
value: 0.92
- task:
type: text-classification
name: Duygu Analizi (Sentiment Analysis)
dataset:
name: akdeniz27/bert-base-turkish-cased-ner
metrics:
- name: Doğruluk (Accuracy)
type: accuracy
value: 0.88
---
# WhiteAngelss/entity-sentiment-analysis
## Model Description
This model performs named entity recognition and sentiment analysis on Turkish texts. It was fine-tuned from the `akdeniz27/bert-base-turkish-cased-ner` model and trained on the `ctoraman/atis-ner-turkish` dataset.
## Intended Use
The model can be used to analyze customer reviews, identify entities, and determine the sentiment associated with those entities.
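The snippet below is a usage sketch, not code from the authors: it assumes the checkpoint loads as a standard `transformers` token-classification model, and both the `aggregation_strategy` value and the example sentence are illustrative assumptions.
```python
from transformers import pipeline

# Sketch: load the checkpoint as a token-classification pipeline.
# aggregation_strategy and the example sentence are assumptions, not from the card.
ner = pipeline(
    "token-classification",
    model="WhiteAngelss/entity-sentiment-analysis",
    aggregation_strategy="simple",
)

print(ner("Kargo geç geldi ama ürün çok kaliteli."))
```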
## Limitations and Biases
- Since the model was trained primarily on Turkish texts, it may not perform well in other languages.
- Biases in the training data may affect the model's predictions.
## Training Data
The model was trained using the `ctoraman/atis-ner-turkish` and `akdeniz27/bert-base-turkish-cased-ner` datasets.
## Evaluation Results
The model achieved an F1 score of 0.92 for named entity recognition and an accuracy of 0.88 for sentiment analysis.
| null |
Non_BioNLP
|
---
language: tr
tags:
- entity-sentiment-analysis
- text-classification
- sentiment-analysis
datasets:
- ctoraman/atis-ner-turkish
- akdeniz27/bert-base-turkish-cased-ner
metrics:
- accuracy
- f1
model-index:
- name: WhiteAngelss/entity-sentiment-analysis
results:
- task:
type: token-classification
name: Varlık Tanıma (Named Entity Recognition)
dataset:
name: ctoraman/atis-ner-turkish
metrics:
- name: F1
type: f1
value: 0.92
- task:
type: text-classification
name: Duygu Analizi (Sentiment Analysis)
dataset:
name: akdeniz27/bert-base-turkish-cased-ner
metrics:
- name: Doğruluk (Accuracy)
type: accuracy
value: 0.88
---
# WhiteAngelss/entity-sentiment-analysis
## Model Description
This model performs named entity recognition and sentiment analysis on Turkish texts. It was fine-tuned from the `akdeniz27/bert-base-turkish-cased-ner` model and trained on the `ctoraman/atis-ner-turkish` dataset.
## Intended Use
The model can be used to analyze customer reviews, identify entities, and determine the sentiment associated with those entities.
## Limitations and Biases
- Since the model was trained primarily on Turkish texts, it may not perform well in other languages.
- Biases in the training data may affect the model's predictions.
## Training Data
The model was trained using the `ctoraman/atis-ner-turkish` and `akdeniz27/bert-base-turkish-cased-ner` datasets.
## Evaluation Results
The model achieved an F1 score of 0.92 for named entity recognition and an accuracy of 0.88 for sentiment analysis.
|
{"license": "mit"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,068 |
ali619/distilbert-base-uncased-finetuned-emotion-detector-from-text
|
ali619
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-05T17:17:16Z |
2023-11-05T18:51:25+00:00
| 99 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion-detector-from-text
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9345
name: Accuracy
- type: f1
value: 0.9346813045403889
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-detector-from-text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Accuracy: 0.9345
- F1: 0.9347
## Model description
This model is trained on English tweets and can classify emotions in text.
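As an illustration (not part of the original card), the checkpoint can be queried with the standard `transformers` pipeline; the example input is made up, and the labels follow the six classes of the `emotion` dataset:
```python
from transformers import pipeline

# Sketch: run the fine-tuned checkpoint on a single sentence.
classifier = pipeline(
    "text-classification",
    model="ali619/distilbert-base-uncased-finetuned-emotion-detector-from-text",
)

print(classifier("I can't stop smiling, today turned out wonderfully!"))
# e.g. [{'label': 'joy', 'score': ...}]
```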
## Intended uses & limitations
More information needed
## Training and evaluation data
16,000 train samples
2,000 validation samples
2,000 test samples
## Training procedure
Fine-tuning distilbert-base-uncased
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1038 | 1.0 | 250 | 0.1757 | 0.9325 | 0.9329 |
| 0.094 | 2.0 | 500 | 0.1628 | 0.9345 | 0.9347 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-detector-from-text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1628
- Accuracy: 0.9345
- F1: 0.9347
## Model description
This model is trained on English tweets and can classify emotions in text.
## Intended uses & limitations
More information needed
## Training and evaluation data
16,000 train samples
2,000 validation samples
2,000 test samples
## Training procedure
Fine-tuning distilbert-base-uncased
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1038 | 1.0 | 250 | 0.1757 | 0.9325 | 0.9329 |
| 0.094 | 2.0 | 500 | 0.1628 | 0.9345 | 0.9347 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion-detector-from-text", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9345, "name": "Accuracy"}, {"type": "f1", "value": 0.9346813045403889, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,069 |
TransferGraph/phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:phailyoor/distilbert-base-uncased-finetuned-yahd",
"base_model:adapter:phailyoor/distilbert-base-uncased-finetuned-yahd",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-29T13:40:22Z |
2024-02-29T13:40:24+00:00
| 0 | 0 |
---
base_model: phailyoor/distilbert-base-uncased-finetuned-yahd
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.721
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [phailyoor/distilbert-base-uncased-finetuned-yahd](https://huggingface.co/phailyoor/distilbert-base-uncased-finetuned-yahd) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.721
## Model description
More information needed
## Intended uses & limitations
More information needed
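Since no usage snippet is provided, the sketch below shows the usual way a LoRA adapter like this one is attached to its base model with `peft`; treating the repo as a sequence-classification adapter is an assumption, not something stated by the authors.
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch: load the base classifier, then attach this repo's LoRA adapter.
base_id = "phailyoor/distilbert-base-uncased-finetuned-yahd"
adapter_id = (
    "TransferGraph/phailyoor_distilbert-base-uncased-finetuned-yahd"
    "-finetuned-lora-tweet_eval_hate"
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("I really dislike this.", return_tensors="pt")
logits = model(**inputs).logits  # scores for the tweet_eval hate labels
```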
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.431 | None | 0 |
| 0.7 | 0.6095 | 0 |
| 0.709 | 0.5125 | 1 |
| 0.697 | 0.4633 | 2 |
| 0.721 | 0.4396 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [phailyoor/distilbert-base-uncased-finetuned-yahd](https://huggingface.co/phailyoor/distilbert-base-uncased-finetuned-yahd) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.431 | None | 0 |
| 0.7 | 0.6095 | 0 |
| 0.709 | 0.5125 | 1 |
| 0.697 | 0.4633 | 2 |
| 0.721 | 0.4396 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "phailyoor/distilbert-base-uncased-finetuned-yahd", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "phailyoor_distilbert-base-uncased-finetuned-yahd-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "hate", "split": "validation", "args": "hate"}, "metrics": [{"type": "accuracy", "value": 0.721, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,070 |
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2021-06-23T06:19:18+00:00
| 12 | 1 |
---
tags:
- summarization
widget:
- text: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot
&& pr . Match >= pr . PendingSnapshot }
---
# CodeTrans model for code documentation generation go
Pretrained model for the Go programming language using the T5-large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Go code functions: it works best with tokenized Go functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate a description for a Go function or be fine-tuned on other Go code tasks. It can be used on unparsed and untokenized Go code; however, if the Go code is tokenized, performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation task, the different models achieve the following results on the different programming languages (in BLEU score):
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
| null |
Non_BioNLP
|
# CodeTrans model for code documentation generation go
Pretrained model for the Go programming language using the T5-large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Go code functions: it works best with tokenized Go functions.
## Model description
This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.
## Intended uses & limitations
The model can be used to generate a description for a Go function or be fine-tuned on other Go code tasks. It can be used on unparsed and untokenized Go code; however, if the Go code is tokenized, performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/large_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Training procedure
### Multi-task Pretraining
The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results
For the code documentation task, the different models achieve the following results on the different programming languages (in BLEU score):
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
|
task
|
[
"SUMMARIZATION"
] | 45,071 |
p1atdev/peft_qwen_14b_chat_int4_xlsum
|
p1atdev
|
summarization
|
[
"peft",
"summarization",
"ja",
"dataset:p1atdev/instruction_xlsum_ja",
"region:us"
] | 2023-10-02T14:55:28Z |
2023-10-02T23:46:40+00:00
| 0 | 0 |
---
datasets:
- p1atdev/instruction_xlsum_ja
language:
- ja
library_name: peft
pipeline_tag: summarization
---
| null |
Non_BioNLP
|
{"datasets": ["p1atdev/instruction_xlsum_ja"], "language": ["ja"], "library_name": "peft", "pipeline_tag": "summarization"}
|
task
|
[
"SUMMARIZATION"
] | 45,072 |
|
julien-cpsn/BERT-llm-generic
|
julien-cpsn
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:julien-cpsn/autotrain-data-BERT-llm-generic",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-23T10:04:02Z |
2024-01-23T10:04:17+00:00
| 3 | 0 |
---
datasets:
- julien-cpsn/autotrain-data-BERT-llm-generic
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.09664379805326462
f1_macro: 0.9714796738949586
f1_micro: 0.9743589743589743
f1_weighted: 0.974192594235853
precision_macro: 0.9726759726759727
precision_micro: 0.9743589743589743
precision_weighted: 0.9742847242847243
recall_macro: 0.9706012378426171
recall_micro: 0.9743589743589743
recall_weighted: 0.9743589743589743
accuracy: 0.9743589743589743
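No usage example ships with this card; as a sketch, the checkpoint should load with the standard `transformers` pipeline (the input mirrors the widget text above):
```python
from transformers import pipeline

# Sketch: query the AutoTrain-produced classifier.
classifier = pipeline("text-classification", model="julien-cpsn/BERT-llm-generic")
print(classifier("I love AutoTrain"))
```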
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.09664379805326462
f1_macro: 0.9714796738949586
f1_micro: 0.9743589743589743
f1_weighted: 0.974192594235853
precision_macro: 0.9726759726759727
precision_micro: 0.9743589743589743
precision_weighted: 0.9742847242847243
recall_macro: 0.9706012378426171
recall_micro: 0.9743589743589743
recall_weighted: 0.9743589743589743
accuracy: 0.9743589743589743
|
{"datasets": ["julien-cpsn/autotrain-data-BERT-llm-generic"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,073 |
giotvr/bertimbau_large_plue_mnli_fine_tuned
|
giotvr
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"nli",
"pt",
"dataset:assin2",
"arxiv:1911.02116",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-12-04T23:35:06Z |
2023-12-04T23:44:44+00:00
| 98 | 0 |
---
datasets:
- assin2
language:
- pt
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- nli
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a **[BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) fine-tuned model** on 5K (premise, hypothesis) sentence pairs from
the **PLUE/MNLI (Portuguese translation of the GLUE benchmark's MNLI)** corpus. The original references are:
[Unsupervised Cross-Lingual Representation Learning At Scale](https://arxiv.org/pdf/1911.02116) and [PLUE](https://huggingface.co/datasets/dlb/plue), respectively. This model is suitable for Portuguese.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Giovani Tavares and Felipe Ribas Serras
- **Oriented By:** Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
- **Model type:** Transformer-based text classifier
- **Language(s) (NLP):** Portuguese
- **License:** mit
- **Finetuned from model** [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
- **Paper:** This is an ongoing research. We are currently writing a paper where we fully describe our experiments.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) performs Natural
Language Inference (NLI), which is a text classification task.
<!-- <div id="assin_function">
**Definition 1.** Given a pair of sentences $$(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned models' inference function:
$$
\hat{f}^{(xlmr\_base)} =
\begin{cases}
ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\
PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\
NONE & \text{otherwise}
\end{cases}
$$
</div> -->
The *(premise, hypothesis)* entailment definition used is the same as the one found in Salvatore's paper [1].
Therefore, this fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) classifies pairs of sentences in the form *(premise, hypothesis)* into the classes *entailment*, *neutral* and *contradiction*.
<!-- ## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
## Demo
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model_path = "giotvr/bertimbau_large_plue_mnli_fine_tuned"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis ="A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."
tokenizer = AutoTokenizer.from_pretrained(model_path, use_auth_token=True)
input_pair = tokenizer(premise, hypothesis, return_tensors="pt",padding=True, truncation=True)
model = AutoModelForSequenceClassification.from_pretrained(model_path, use_auth_token=True)
with torch.no_grad():
logits = model(**input_pair).logits
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")
```
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
This model should be used for scientific purposes only. It was not tested for production environments.
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed] -->
## Fine-Tuning Details
### Fine-Tuning Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
---
- **Train Dataset**: [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue) <br>
- **Evaluation Dataset used for Hyperparameter Tuning:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s validation split
- **Test Datasets:**
- [ASSIN](https://huggingface.co/datasets/assin)'s test split
- [ASSIN2](https://huggingface.co/datasets/assin2)'s test split
- [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue/viewer/mnli_matched)'s validation matched split
---
This is a fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) using the [ASSIN2 (Avaliação de Similaridade Semântica e Inferência textual)](https://huggingface.co/datasets/assin2) dataset. [ASSIN2](https://huggingface.co/datasets/assin2) is a corpus annotated with hypothesis/premise Portuguese sentence pairs suitable for detecting textual entailment or a neutral
relationship between the members of such pairs. The corpus is balanced, with 7k *ptbr* (Brazilian Portuguese) sentence pairs.
### Fine-Tuning Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model's fine-tuning procedure can be summarized in three major subsequent tasks:
<ol type="i">
<li>**Data Processing:**</li> [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s *validation* and *train* splits were loaded from the **Hugging Face Hub** and processed afterwards;
<li>**Hyperparameter Tuning:**</li>[BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)'s hyperparameters were chosen with the help of the [Weights & Biases] API to track the results and upload the fine-tuned models;
<li>**Final Model Loading and Testing:**</li>
using the *cross-tests* approach described in [this section](#evaluation), the models' performance was measured using different datasets and metrics.
</ol>
<!-- ##### Column Renaming
The **Hugging Face**'s ```transformers``` module's ```DataCollator``` used by its ```Trainer``` requires that the ```class label``` column of the collated dataset to be called ```label```. [ASSIN](https://huggingface.co/datasets/assin)'s class label column for each hypothesis/premise pair is called ```entailment_judgement```. Therefore, as the first step of the data preprocessing pipeline the column ```entailment_judgement``` was renamed to ```label``` so that the **Hugging Face**'s ```transformers``` module's ```Trainer``` could be used. -->
#### Hyperparameter Tuning
<!-- The model's training hyperparameters were chosen according to the following definition:
<div id="hyperparameter_tuning">
**Definition 2.** Let $Hyperparms= \{i: i \text{ is an hyperparameter of } \hat{f}^{(xlmr\_base)}\}$ and $\hat{f}^{(xlmr\_base)}$ be the model's inference function defined in [Definition 1](#assin_function) :
$$
Hyperparms = \argmax_{hyp}(eval\_acc(\hat{f}^{(xlmr\_base)}_{hyp}, assin\_validation))
$$
</div> -->
The following hyperparameters were tested in order to maximize the evaluation accuracy.
- **Number of Training Epochs:** $(4,5,6)$
- **Per Device Train Batch Size:** $(8,16,32)$
- **Learning Rate:** $(5e-5, 3e-5, 2e-5)$
The hyperparemeter tuning experiments were run and tracked using the [Weights & Biases' API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).
#### Training Hyperparameters
The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:
- **Number of Training Epochs:** $6$
- **Per Device Train Batch Size:** $16$
- **Learning Rate:** $5e-5$
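For reference, these values correspond roughly to the following Hugging Face `TrainingArguments` (a sketch, not the authors' actual training script; the output directory is hypothetical):
```python
from transformers import TrainingArguments

# Sketch of the final fine-tuning configuration reported above.
training_args = TrainingArguments(
    output_dir="bertimbau_large_plue_mnli",  # hypothetical path
    num_train_epochs=6,
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)
```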
## Evaluation
### ASSIN
Testing this model in ASSIN's test split required some translation of the *NONE* and *PARAPHRASE* classes found in it, because such classes are not present in PLUE/MNLI. The *NONE* class was considered *contradiction* or *neutral*, and *PARAPHRASE* was considered *entailment* in both ways: from premise to hypothesis and from hypothesis to premise. More details on such translation can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### ASSIN2
Testing this model in ASSIN2's test split required some translation of the *NONE* class found in it, because such class is not present in PLUE/MNLI. The *NONE* class was considered *contradiction* or *neutral*. More details on such translation can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### PLUE/MNLI
Testing this model in PLUE/MNLI's test set was straightforward as it was fine-tuned in its training set.
More information on how such mapping is performed can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The model's performance metrics for each test dataset are presented separately. Accuracy, F1 score, precision and recall were the metrics used in every evaluation performed. They are reported below. More information on these metrics will be available in our ongoing research paper.
### Results
| test set | accuracy | f1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin |0.72 |0.67 |0.63 |0.73 |
| assin2 |0.87 |0.87 |0.88 |0.87 |
| plue/mnli|0.84 |0.83 |0.84 |0.84 |
## Model Examination
<!-- Relevant interpretability work for the model goes here -->
Some interpretability work is being done in order to understand the model's behavior. Such details will be available in the previously referenced paper.
<!--## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed] -->
<!-- ## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.
**BibTeX:**
```bibtex
@article{tcc_paper,
author = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
title = {Modelos Transformer para Inferência de Linguagem Natural em Português},
pages = {x--y},
year = {2023}
}
``` -->
## References
[1][Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)
<!--[2][Andrade, G. T. (2023) Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)
[3][Andrade, G. T. (2023) Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->
| null |
Non_BioNLP
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a **[BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) fine-tuned model** on 5K (premise, hypothesis) sentence pairs from
the **PLUE/MNLI (Portuguese translation of the GLUE benchmark's MNLI)** corpus. The original references are:
[Unsupervised Cross-Lingual Representation Learning At Scale](https://arxiv.org/pdf/1911.02116) and [PLUE](https://huggingface.co/datasets/dlb/plue), respectively. This model is suitable for Portuguese.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Giovani Tavares and Felipe Ribas Serras
- **Oriented By:** Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
- **Model type:** Transformer-based text classifier
- **Language(s) (NLP):** Portuguese
- **License:** mit
- **Finetuned from model** [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
- **Paper:** This is an ongoing research. We are currently writing a paper where we fully describe our experiments.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
This fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) performs Natural
Language Inference (NLI), which is a text classification task.
<!-- <div id="assin_function">
**Definition 1.** Given a pair of sentences $$(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned models' inference function:
$$
\hat{f}^{(xlmr\_base)} =
\begin{cases}
ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\
PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\
NONE & \text{otherwise}
\end{cases}
$$
</div> -->
The *(premise, hypothesis)* entailment definition used is the same as the one found in Salvatore's paper [1].
Therefore, this fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) classifies pairs of sentences in the form *(premise, hypothesis)* into the classes *entailment*, *neutral* and *contradiction*.
<!-- ## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
## Demo
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
model_path = "giotvr/bertimbau_large_plue_mnli_fine_tuned"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis ="A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."
tokenizer = AutoTokenizer.from_pretrained(model_path, use_auth_token=True)
input_pair = tokenizer(premise, hypothesis, return_tensors="pt",padding=True, truncation=True)
model = AutoModelForSequenceClassification.from_pretrained(model_path, use_auth_token=True)
with torch.no_grad():
logits = model(**input_pair).logits
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")
```
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
This model should be used for scientific purposes only. It was not tested for production environments.
<!-- ## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed] -->
## Fine-Tuning Details
### Fine-Tuning Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
---
- **Train Dataset**: [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue) <br>
- **Evaluation Dataset used for Hyperparameter Tuning:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s validation split
- **Test Datasets:**
- [ASSIN](https://huggingface.co/datasets/assin)'s test split
- [ASSIN2](https://huggingface.co/datasets/assin2)'s test split
- [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue/viewer/mnli_matched)'s validation matched split
---
This is a fine-tuned version of [BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased) using the [ASSIN2 (Avaliação de Similaridade Semântica e Inferência textual)](https://huggingface.co/datasets/assin2) dataset. [ASSIN2](https://huggingface.co/datasets/assin2) is a corpus annotated with hypothesis/premise Portuguese sentence pairs suitable for detecting textual entailment or a neutral
relationship between the members of such pairs. The corpus is balanced, with 7k *ptbr* (Brazilian Portuguese) sentence pairs.
### Fine-Tuning Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model's fine-tuning procedure can be summarized in three major subsequent tasks:
<ol type="i">
<li>**Data Processing:**</li> [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s *validation* and *train* splits were loaded from the **Hugging Face Hub** and processed afterwards;
<li>**Hyperparameter Tuning:**</li>[BERTimbau-base](https://huggingface.co/neuralmind/bert-large-portuguese-cased)'s hyperparameters were chosen with the help of the [Weights & Biases] API to track the results and upload the fine-tuned models;
<li>**Final Model Loading and Testing:**</li>
using the *cross-tests* approach described in [this section](#evaluation), the models' performance was measured using different datasets and metrics.
</ol>
<!-- ##### Column Renaming
The **Hugging Face**'s ```transformers``` module's ```DataCollator``` used by its ```Trainer``` requires that the ```class label``` column of the collated dataset to be called ```label```. [ASSIN](https://huggingface.co/datasets/assin)'s class label column for each hypothesis/premise pair is called ```entailment_judgement```. Therefore, as the first step of the data preprocessing pipeline the column ```entailment_judgement``` was renamed to ```label``` so that the **Hugging Face**'s ```transformers``` module's ```Trainer``` could be used. -->
#### Hyperparameter Tuning
<!-- The model's training hyperparameters were chosen according to the following definition:
<div id="hyperparameter_tuning">
**Definition 2.** Let $Hyperparms= \{i: i \text{ is an hyperparameter of } \hat{f}^{(xlmr\_base)}\}$ and $\hat{f}^{(xlmr\_base)}$ be the model's inference function defined in [Definition 1](#assin_function) :
$$
Hyperparms = \argmax_{hyp}(eval\_acc(\hat{f}^{(xlmr\_base)}_{hyp}, assin\_validation))
$$
</div> -->
The following hyperparameters were tested in order to maximize the evaluation accuracy.
- **Number of Training Epochs:** $(4,5,6)$
- **Per Device Train Batch Size:** $(8,16,32)$
- **Learning Rate:** $(5e-5, 3e-5, 2e-5)$
The hyperparemeter tuning experiments were run and tracked using the [Weights & Biases' API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).
#### Training Hyperparameters
The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:
- **Number of Training Epochs:** $6$
- **Per Device Train Batch Size:** $16$
- **Learning Rate:** $5e-5$
## Evaluation
### ASSIN
Testing this model in ASSIN's test split required some translation of the *NONE* and *PARAPHRASE* classes found in it, because such classes are not present in PLUE/MNLI. The *NONE* class was considered *contradiction* or *neutral*, and *PARAPHRASE* was considered *entailment* in both ways: from premise to hypothesis and from hypothesis to premise. More details on such translation can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### ASSIN2
Testing this model in ASSIN2's test split required some translation of the *NONE* class found in it, because such class is not present in PLUE/MNLI. The *NONE* class was considered *contradiction* or *neutral*. More details on such translation can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### PLUE/MNLI
Testing this model in PLUE/MNLI's test set was straightforward as it was fine-tuned in its training set.
More information on how such mapping is performed can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The model's performance metrics for each test dataset are presented separately. Accuracy, F1 score, precision and recall were the metrics used in every evaluation performed. They are reported below. More information on these metrics will be available in our ongoing research paper.
### Results
| test set | accuracy | f1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin |0.72 |0.67 |0.63 |0.73 |
| assin2 |0.87 |0.87 |0.88 |0.87 |
| plue/mnli|0.84 |0.83 |0.84 |0.84 |
## Model Examination
<!-- Relevant interpretability work for the model goes here -->
Some interpretability work is being done in order to understand the model's behavior. Such details will be available in the previously referenced paper.
<!--## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed] -->
<!-- ## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.
**BibTeX:**
```bibtex
@article{tcc_paper,
author = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
title = {Modelos Transformer para Inferência de Linguagem Natural em Português},
pages = {x--y},
year = {2023}
}
``` -->
## References
[1][Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)
<!--[2][Andrade, G. T. (2023) Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)
[3][Andrade, G. T. (2023) Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->
|
{"datasets": ["assin2"], "language": ["pt"], "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["nli"]}
|
task
|
[
"TEXT_CLASSIFICATION",
"TEXTUAL_ENTAILMENT",
"TRANSLATION"
] | 45,074 |
VaggP/bge-fine-tuned
|
VaggP
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4370",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-13T14:19:00Z |
2024-10-13T14:19:37+00:00
| 10 | 0 |
---
base_model: BAAI/bge-large-en-v1.5
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4370
- loss:CosineSimilarityLoss
widget:
- source_sentence: '
Construct: Recognise a linear graph from its shape
Subject: Finding the Gradient and Intercept of a Line from the Equation
Question: Use a graphing program (e.g. Desmos) to plot the following pairs of
functions.
\[
y=3 \text { and } y=-2
\]
Tom says both functions are linear
Katie says both functions are vertical lines
Who is correct?
Incorrect Answer: Neither is correct
Correct Answer: Only
Tom
'
sentences:
- Believes the coefficent of x in an expanded quadratic comes from multiplying the
two numbers in the brackets
- Does not know the properties of a linear graph
- Misremembers the quadratic formula
- source_sentence: '
Construct: Multiply two decimals together with the same number of decimal places
Subject: Multiplying and Dividing with Decimals
Question: \( 0.6 \times 0.4= \)
Incorrect Answer: \( 2.4 \)
Correct Answer: \( 0.24 \)
'
sentences:
- When asked to solve simultaneous equations, believes they can just find values
that work in one equation
- Believes the solutions of a quadratic equation are the constants in the factorised
form
- When multiplying decimals, divides by the wrong power of 10 when reinserting the
decimal
- source_sentence: '
Construct: Estimate the volume or capacity of an object
Subject: Volume of Prisms
Question: Each of these measurements matches one of these objects. ![An image
of 4 objects and 4 measurements. The objects are an egg cup, a cereal box, a chest
of drawers and a piggy bank. And, the measurements are 87 cm^3, 1013 cm^3, 4172
cm^3 and 197,177 cm^3.]() Which measurement most likely matches the egg cup?
Incorrect Answer: \( 197177 \mathrm{~cm}^{3} \)
Correct Answer: \( 87 \mathrm{~cm}^{3} \)
'
sentences:
- Confuses quadratic and exponential graphs
- Cannot estimate the relative volume order, for different objects
- Does not know how many days are in a leap year
- source_sentence: '
Construct: Carry out division problems involving one negative integer
Subject: Multiplying and Dividing Negative Numbers
Question: \( 12 \div(-4)= \)
Incorrect Answer: \( 3 \)
Correct Answer: \( -3 \)
'
sentences:
- Believes dividing a positive by a negative gives a positive answer
- Believes -a is always smaller than a, ignoring the possibility that a is negative
- Subtracts instead of divides
- source_sentence: '
Construct: Construct frequency tables
Subject: Frequency tables
Question: Dave has recorded the number of pets his classmates have in the frequency
table on the right. \begin{tabular}{|c|c|}
\hline Number of pets & Frequency \\
\hline \( 0 \) & \( 4 \) \\
\hline \( 1 \) & \( 6 \) \\
\hline \( 2 \) & \( 3 \) \\
\hline \( 3 \) & \( 2 \) \\
\hline \( 4 \) & \( 5 \) \\
\hline
\end{tabular} If Dave wanted to work out the total number of pets own by his classmates,
what would be a useful column to include?
Incorrect Answer: Number of pets -
Frequency
Correct Answer: Number of pets \( x \) Frequency
'
sentences:
- Subtracts rather than multiplies when calculating total frequency
- Does not follow the arrows through a function machine, changes the order of the
operations asked.
- 'Believes the intersection in a prime factor venn diagram does not contribute
to the size of the number represented by a circle '
---
# SentenceTransformer based on BAAI/bge-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("VaggP/bge-fine-tuned")
# Run inference
sentences = [
'\nConstruct: Construct frequency tables\nSubject: Frequency tables\nQuestion: Dave has recorded the number of pets his classmates have in the frequency table on the right. \\begin{tabular}{|c|c|}\n\\hline Number of pets & Frequency \\\\\n\\hline \\( 0 \\) & \\( 4 \\) \\\\\n\\hline \\( 1 \\) & \\( 6 \\) \\\\\n\\hline \\( 2 \\) & \\( 3 \\) \\\\\n\\hline \\( 3 \\) & \\( 2 \\) \\\\\n\\hline \\( 4 \\) & \\( 5 \\) \\\\\n\\hline\n\\end{tabular} If Dave wanted to work out the total number of pets own by his classmates, what would be a useful column to include?\nIncorrect Answer: Number of pets -\nFrequency\nCorrect Answer: Number of pets \\( x \\) Frequency\n',
'Subtracts rather than multiplies when calculating total frequency',
'Does not follow the arrows through a function machine, changes the order of the operations asked.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,370 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 38 tokens</li><li>mean: 98.75 tokens</li><li>max: 414 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.91 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|:-----------------|
| <code><br>Construct: Construct a pictogram involving fractions of symbols<br>Subject: Pictogram<br>Question: This pictogram shows the different types of music Bob has in his music collection.<br>Bob has \( 2 \) rave CDs.<br><br>How would he display this on the pictogram? ![A pictogram showing the number of CDs Bob has in his musical collection. Pop has 3 and a half symbols, rock has 2 symbols, blues has 2 and a quarter symbols, jazz has 3 and a quarter symbols and classical has 1 and three-quarter symbols. Each symbol represents 4 CDs.]()<br>Incorrect Answer: ![\( 00 \)]()<br>Correct Answer: ![\( 0 \)]()<br></code> | <code>When interpreting a pictogram, thinks each symbol stands for 1</code> | <code>1.0</code> |
| <code><br>Construct: Use brackets to write function machines as calculations<br>Subject: Writing Expressions<br>Question: Tom and Katie are arguing about the result of this Function Machine:<br>Tom says the output is: \( 3 n-12 \)<br>Katie says the output is: \( 3(n-4) \)<br>Who is correct? ![A function machine with input n and operations subtract 4, multiply by 3]()<br>Incorrect Answer: Only Tom<br>Correct Answer: Both Tom and Katie<br></code> | <code>Does not think a factorised expression is equivalent to its multiplied out form</code> | <code>1.0</code> |
| <code><br>Construct: Interpret linear sections of real life graphs<br>Subject: Real Life Graphs<br>Question: The graph on the right shows the mass of sand in a bucket over time<br><br>What might the horizontal section represent? ![A graph with time (secs) on the horizontal axis and mass (g) on the vertical axis. The graph starts at the origin, travels in a straight line up and right, travels horizontally, then travels in a straight line down and right back to the x-axis, more steeply than the start. ]()<br>Incorrect Answer: Sand is being tipped out<br>Correct Answer: The bucket is full<br></code> | <code>Believes a horizontal line can show a constant rate of change</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
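For context, a schematic version of fine-tuning with this loss in `sentence-transformers` looks as follows; the single example pair is a placeholder for the 4,370 `(sentence_0, sentence_1, label)` rows described above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Schematic fine-tuning with CosineSimilarityLoss; the example pair is a placeholder.
model = SentenceTransformer("BAAI/bge-large-en-v1.5")
train_examples = [
    InputExample(texts=["question text", "misconception text"], label=1.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```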
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.9141 | 500 | 0.0055 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on BAAI/bge-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("VaggP/bge-fine-tuned")
# Run inference
sentences = [
'\nConstruct: Construct frequency tables\nSubject: Frequency tables\nQuestion: Dave has recorded the number of pets his classmates have in the frequency table on the right. \\begin{tabular}{|c|c|}\n\\hline Number of pets & Frequency \\\\\n\\hline \\( 0 \\) & \\( 4 \\) \\\\\n\\hline \\( 1 \\) & \\( 6 \\) \\\\\n\\hline \\( 2 \\) & \\( 3 \\) \\\\\n\\hline \\( 3 \\) & \\( 2 \\) \\\\\n\\hline \\( 4 \\) & \\( 5 \\) \\\\\n\\hline\n\\end{tabular} If Dave wanted to work out the total number of pets own by his classmates, what would be a useful column to include?\nIncorrect Answer: Number of pets -\nFrequency\nCorrect Answer: Number of pets \\( x \\) Frequency\n',
'Subtracts rather than multiplies when calculating total frequency',
'Does not follow the arrows through a function machine, changes the order of the operations asked.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,370 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 38 tokens</li><li>mean: 98.75 tokens</li><li>max: 414 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.91 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------|:-----------------|
| <code><br>Construct: Construct a pictogram involving fractions of symbols<br>Subject: Pictogram<br>Question: This pictogram shows the different types of music Bob has in his music collection.<br>Bob has \( 2 \) rave CDs.<br><br>How would he display this on the pictogram? ![A pictogram showing the number of CDs Bob has in his musical collection. Pop has 3 and a half symbols, rock has 2 symbols, blues has 2 and a quarter symbols, jazz has 3 and a quarter symbols and classical has 1 and three-quarter symbols. Each symbol represents 4 CDs.]()<br>Incorrect Answer: ![\( 00 \)]()<br>Correct Answer: ![\( 0 \)]()<br></code> | <code>When interpreting a pictogram, thinks each symbol stands for 1</code> | <code>1.0</code> |
| <code><br>Construct: Use brackets to write function machines as calculations<br>Subject: Writing Expressions<br>Question: Tom and Katie are arguing about the result of this Function Machine:<br>Tom says the output is: \( 3 n-12 \)<br>Katie says the output is: \( 3(n-4) \)<br>Who is correct? ![A function machine with input n and operations subtract 4, multiply by 3]()<br>Incorrect Answer: Only Tom<br>Correct Answer: Both Tom and Katie<br></code> | <code>Does not think a factorised expression is equivalent to its multiplied out form</code> | <code>1.0</code> |
| <code><br>Construct: Interpret linear sections of real life graphs<br>Subject: Real Life Graphs<br>Question: The graph on the right shows the mass of sand in a bucket over time<br><br>What might the horizontal section represent? ![A graph with time (secs) on the horizontal axis and mass (g) on the vertical axis. The graph starts at the origin, travels in a straight line up and right, travels horizontally, then travels in a straight line down and right back to the x-axis, more steeply than the start. ]()<br>Incorrect Answer: Sand is being tipped out<br>Correct Answer: The bucket is full<br></code> | <code>Believes a horizontal line can show a constant rate of change</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.9141 | 500 | 0.0055 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.0
- Transformers: 4.45.1
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "BAAI/bge-large-en-v1.5", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:4370", "loss:CosineSimilarityLoss"], "widget": [{"source_sentence": "\nConstruct: Recognise a linear graph from its shape\nSubject: Finding the Gradient and Intercept of a Line from the Equation\nQuestion: Use a graphing program (e.g. Desmos) to plot the following pairs of functions.\n\\[\ny=3 \\text { and } y=-2\n\\]\n\nTom says both functions are linear\n\nKatie says both functions are vertical lines\n\nWho is correct?\nIncorrect Answer: Neither is correct\nCorrect Answer: Only\nTom\n", "sentences": ["Believes the coefficent of x in an expanded quadratic comes from multiplying the two numbers in the brackets", "Does not know the properties of a linear graph", "Misremembers the quadratic formula"]}, {"source_sentence": "\nConstruct: Multiply two decimals together with the same number of decimal places\nSubject: Multiplying and Dividing with Decimals\nQuestion: \\( 0.6 \\times 0.4= \\)\nIncorrect Answer: \\( 2.4 \\)\nCorrect Answer: \\( 0.24 \\)\n", "sentences": ["When asked to solve simultaneous equations, believes they can just find values that work in one equation", "Believes the solutions of a quadratic equation are the constants in the factorised form", "When multiplying decimals, divides by the wrong power of 10 when reinserting the decimal"]}, {"source_sentence": "\nConstruct: Estimate the volume or capacity of an object\nSubject: Volume of Prisms\nQuestion: Each of these measurements matches one of these objects. ![An image of 4 objects and 4 measurements. The objects are an egg cup, a cereal box, a chest of drawers and a piggy bank. And, the measurements are 87 cm^3, 1013 cm^3, 4172 cm^3 and 197,177 cm^3.]() Which measurement most likely matches the egg cup?\nIncorrect Answer: \\( 197177 \\mathrm{~cm}^{3} \\)\nCorrect Answer: \\( 87 \\mathrm{~cm}^{3} \\)\n", "sentences": ["Confuses quadratic and exponential graphs", "Cannot estimate the relative volume order, for different objects", "Does not know how many days are in a leap year"]}, {"source_sentence": "\nConstruct: Carry out division problems involving one negative integer\nSubject: Multiplying and Dividing Negative Numbers\nQuestion: \\( 12 \\div(-4)= \\)\nIncorrect Answer: \\( 3 \\)\nCorrect Answer: \\( -3 \\)\n", "sentences": ["Believes dividing a positive by a negative gives a positive answer", "Believes -a is always smaller than a, ignoring the possibility that a is negative", "Subtracts instead of divides"]}, {"source_sentence": "\nConstruct: Construct frequency tables\nSubject: Frequency tables\nQuestion: Dave has recorded the number of pets his classmates have in the frequency table on the right. 
\\begin{tabular}{|c|c|}\n\\hline Number of pets & Frequency \\\\\n\\hline \\( 0 \\) & \\( 4 \\) \\\\\n\\hline \\( 1 \\) & \\( 6 \\) \\\\\n\\hline \\( 2 \\) & \\( 3 \\) \\\\\n\\hline \\( 3 \\) & \\( 2 \\) \\\\\n\\hline \\( 4 \\) & \\( 5 \\) \\\\\n\\hline\n\\end{tabular} If Dave wanted to work out the total number of pets own by his classmates, what would be a useful column to include?\nIncorrect Answer: Number of pets -\nFrequency\nCorrect Answer: Number of pets \\( x \\) Frequency\n", "sentences": ["Subtracts rather than multiplies when calculating total frequency", "Does not follow the arrows through a function machine, changes the order of the operations asked.", "Believes the intersection in a prime factor venn diagram does not contribute to the size of the number represented by a circle "]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,075 |
JustFrederik/nllb-200-distilled-1.3B-ct2
|
JustFrederik
|
translation
|
[
"transformers",
"nllb",
"translation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug",
"bg",
"ca",
"ceb",
"cs",
"cjk",
"ckb",
"crh",
"cy",
"da",
"de",
"dik",
"dyu",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fj",
"fi",
"fon",
"fr",
"fur",
"fuv",
"gaz",
"gd",
"ga",
"gl",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hne",
"hr",
"hu",
"hy",
"ig",
"ilo",
"id",
"is",
"it",
"jv",
"ja",
"kab",
"kac",
"kam",
"kn",
"ks",
"ka",
"kk",
"kbp",
"kea",
"khk",
"km",
"ki",
"rw",
"ky",
"kmb",
"kmr",
"knc",
"kg",
"ko",
"lo",
"lij",
"li",
"ln",
"lt",
"lmo",
"ltg",
"lb",
"lua",
"lg",
"luo",
"lus",
"lvs",
"mag",
"mai",
"ml",
"mar",
"min",
"mk",
"mt",
"mni",
"mos",
"mi",
"my",
"nl",
"nn",
"nb",
"npi",
"nso",
"nus",
"ny",
"oc",
"ory",
"pag",
"pa",
"pap",
"pbt",
"pes",
"plt",
"pl",
"pt",
"prs",
"quy",
"ro",
"rn",
"ru",
"sg",
"sa",
"sat",
"scn",
"shn",
"si",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"st",
"es",
"sc",
"sr",
"ss",
"su",
"sv",
"swh",
"szl",
"ta",
"taq",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"tpi",
"tn",
"ts",
"tk",
"tum",
"tr",
"tw",
"tzm",
"ug",
"uk",
"umb",
"ur",
"uzn",
"vec",
"vi",
"war",
"wo",
"xh",
"ydd",
"yo",
"yue",
"zh",
"zsm",
"zu",
"dataset:flores-200",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | 2023-05-14T21:39:14Z |
2023-05-14T21:52:51+00:00
| 16 | 0 |
---
datasets:
- flores-200
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- als
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ayr
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fj
- fi
- fon
- fr
- fur
- fuv
- gaz
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kk
- kbp
- kea
- khk
- km
- ki
- rw
- ky
- kmb
- kmr
- knc
- kg
- ko
- lo
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- lvs
- mag
- mai
- ml
- mar
- min
- mk
- mt
- mni
- mos
- mi
- my
- nl
- nn
- nb
- npi
- nso
- nus
- ny
- oc
- ory
- pag
- pa
- pap
- pbt
- pes
- plt
- pl
- pt
- prs
- quy
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- sc
- sr
- ss
- su
- sv
- swh
- szl
- ta
- taq
- tt
- te
- tg
- tl
- th
- ti
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uzn
- vec
- vi
- war
- wo
- xh
- ydd
- yo
- yue
- zh
- zsm
- zu
license: cc-by-nc-4.0
metrics:
- bleu
- spbleu
- chrf++
tags:
- nllb
- translation
language_details: ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng,
ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl,
bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn,
bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn,
dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn,
est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn,
fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn,
ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn,
kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn,
kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn,
kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn,
lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn,
mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn,
nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya,
pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn,
ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr,
sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn,
spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn,
szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn,
twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn,
vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans,
zho_Hant, zul_Latn
---
https://huggingface.co/facebook/nllb-200-distilled-1.3B
```
ct2-transformers-converter --model facebook/nllb-200-distilled-1.3B --output_dir converted/nllb-200-distilled-1.3B-ct2
```
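Once converted, the model can be run with CTranslate2 alongside the original Hugging Face tokenizer. The sketch below follows CTranslate2's documented NLLB workflow; the sentence and language codes are placeholders.

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator("converted/nllb-200-distilled-1.3B-ct2", device="cpu")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-1.3B", src_lang="eng_Latn"
)

# Tokenize the source and force the target language as a prefix token
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, how are you?"))
results = translator.translate_batch([source], target_prefix=[["fra_Latn"]])

target = results[0].hypotheses[0][1:]  # drop the leading language token
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```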
| null |
Non_BioNLP
|
https://huggingface.co/facebook/nllb-200-distilled-1.3B
```
ct2-transformers-converter --model facebook/nllb-200-distilled-1.3B --output_dir converted/nllb-200-distilled-1.3B-ct2
```
|
{"datasets": ["flores-200"], "language": ["ace", "acm", "acq", "aeb", "af", "ajp", "ak", "als", "am", "apc", "ar", "ars", "ary", "arz", "as", "ast", "awa", "ayr", "azb", "azj", "ba", "bm", "ban", "be", "bem", "bn", "bho", "bjn", "bo", "bs", "bug", "bg", "ca", "ceb", "cs", "cjk", "ckb", "crh", "cy", "da", "de", "dik", "dyu", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fj", "fi", "fon", "fr", "fur", "fuv", "gaz", "gd", "ga", "gl", "gn", "gu", "ht", "ha", "he", "hi", "hne", "hr", "hu", "hy", "ig", "ilo", "id", "is", "it", "jv", "ja", "kab", "kac", "kam", "kn", "ks", "ka", "kk", "kbp", "kea", "khk", "km", "ki", "rw", "ky", "kmb", "kmr", "knc", "kg", "ko", "lo", "lij", "li", "ln", "lt", "lmo", "ltg", "lb", "lua", "lg", "luo", "lus", "lvs", "mag", "mai", "ml", "mar", "min", "mk", "mt", "mni", "mos", "mi", "my", "nl", "nn", "nb", "npi", "nso", "nus", "ny", "oc", "ory", "pag", "pa", "pap", "pbt", "pes", "plt", "pl", "pt", "prs", "quy", "ro", "rn", "ru", "sg", "sa", "sat", "scn", "shn", "si", "sk", "sl", "sm", "sn", "sd", "so", "st", "es", "sc", "sr", "ss", "su", "sv", "swh", "szl", "ta", "taq", "tt", "te", "tg", "tl", "th", "ti", "tpi", "tn", "ts", "tk", "tum", "tr", "tw", "tzm", "ug", "uk", "umb", "ur", "uzn", "vec", "vi", "war", "wo", "xh", "ydd", "yo", "yue", "zh", "zsm", "zu"], "license": "cc-by-nc-4.0", "metrics": ["bleu", "spbleu", "chrf++"], "tags": ["nllb", "translation"], "language_details": "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn"}
|
task
|
[
"TRANSLATION"
] | 45,076 |
tmnam20/xlm-roberta-base-vtoc-10
|
tmnam20
|
text-classification
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-16T11:33:12Z |
2024-01-16T11:35:02+00:00
| 5 | 0 |
---
base_model: xlm-roberta-base
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-vtoc-10
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- type: accuracy
value: 0.829601310759148
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-vtoc-10
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6232
- Accuracy: 0.8296
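A minimal inference sketch, assuming the label mapping saved with this checkpoint; VTOC is a Vietnamese topic-classification task, so the example sentence is an illustrative Vietnamese headline:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tmnam20/xlm-roberta-base-vtoc-10")

# "The national team won last night's match."
print(classifier("Đội tuyển quốc gia giành chiến thắng trong trận đấu tối qua."))
```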
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5738 | 2.19 | 500 | 0.6383 | 0.8241 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-vtoc-10
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6232
- Accuracy: 0.8296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5738 | 2.19 | 500 | 0.6383 | 0.8241 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "xlm-roberta-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-vtoc-10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/VTOC", "type": "tmnam20/VieGLUE", "config": "vtoc", "split": "validation", "args": "vtoc"}, "metrics": [{"type": "accuracy", "value": 0.829601310759148, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,078 |
immadarkmatter/immadarkmatter_Summarizer
|
immadarkmatter
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:gazeta",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-04-23T23:49:11Z |
2023-04-23T23:56:55+00:00
| 8 | 0 |
---
datasets:
- gazeta
tags:
- generated_from_trainer
model-index:
- name: immadarkmatter_Summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# immadarkmatter_Summarizer
This model is a fine-tuned version of [UrukHan/t5-russian-summarization](https://huggingface.co/UrukHan/t5-russian-summarization) on the gazeta dataset.
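A minimal usage sketch, inferred from the base model's task rather than taken from an official example; the article string is a placeholder for a Russian news story such as those in gazeta:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="immadarkmatter/immadarkmatter_Summarizer")

article = "Текст новостной статьи..."  # placeholder Russian news article
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```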
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# immadarkmatter_Summarizer
This model is a fine-tuned version of [UrukHan/t5-russian-summarization](https://huggingface.co/UrukHan/t5-russian-summarization) on the gazeta dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
{"datasets": ["gazeta"], "tags": ["generated_from_trainer"], "model-index": [{"name": "immadarkmatter_Summarizer", "results": []}]}
|
task
|
[
"SUMMARIZATION"
] | 45,079 |
mrovejaxd/goemotions_bertspanish_finetunig_d
|
mrovejaxd
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-02T13:15:37Z |
2023-05-24T06:05:53+00:00
| 62 | 0 |
---
datasets:
- go_emotions
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: goemotions_bertspanish_finetunig_d
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- type: accuracy
value: 0.5125
name: Accuracy
- type: f1
value: 0.3757437789402451
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_d
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8151
- Accuracy: 0.5125
- F1: 0.3757
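A minimal inference sketch, assuming the label mapping saved with the checkpoint covers the GoEmotions classes; `top_k=None` returns a score for every label rather than only the top one:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mrovejaxd/goemotions_bertspanish_finetunig_d",
    top_k=None,  # score every emotion label
)

print(clf("¡Estoy muy feliz por ti!"))  # "I'm so happy for you!"
```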
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_d
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8151
- Accuracy: 0.5125
- F1: 0.3757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"datasets": ["go_emotions"], "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "goemotions_bertspanish_finetunig_d", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "go_emotions", "type": "go_emotions", "config": "simplified", "split": "test", "args": "simplified"}, "metrics": [{"type": "accuracy", "value": 0.5125, "name": "Accuracy"}, {"type": "f1", "value": 0.3757437789402451, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,080 |
nichonifroa/distilbert-base-uncased-distilled-clinc
|
nichonifroa
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-04T08:28:05Z |
2023-08-04T08:30:00+00:00
| 17 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.94
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.903 | 1.0 | 318 | 0.5766 | 0.7310 |
| 0.4492 | 2.0 | 636 | 0.2856 | 0.8771 |
| 0.2535 | 3.0 | 954 | 0.1800 | 0.9226 |
| 0.1767 | 4.0 | 1272 | 0.1398 | 0.9310 |
| 0.1424 | 5.0 | 1590 | 0.1212 | 0.9335 |
| 0.1245 | 6.0 | 1908 | 0.1118 | 0.9381 |
| 0.1143 | 7.0 | 2226 | 0.1063 | 0.9432 |
| 0.1077 | 8.0 | 2544 | 0.1030 | 0.9426 |
| 0.1041 | 9.0 | 2862 | 0.1012 | 0.9403 |
| 0.1021 | 10.0 | 3180 | 0.1005 | 0.94 |
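The card does not spell out the distillation objective, but a student distilled from a fine-tuned teacher is typically trained on a weighted mix of soft-target KL divergence and hard-target cross-entropy. The sketch below shows that standard loss; the temperature `T` and weight `alpha` are illustrative assumptions, not the values used here.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T**2 so gradient magnitudes stay comparable
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    # Hard targets: ordinary cross-entropy against the gold intent labels
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```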
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.903 | 1.0 | 318 | 0.5766 | 0.7310 |
| 0.4492 | 2.0 | 636 | 0.2856 | 0.8771 |
| 0.2535 | 3.0 | 954 | 0.1800 | 0.9226 |
| 0.1767 | 4.0 | 1272 | 0.1398 | 0.9310 |
| 0.1424 | 5.0 | 1590 | 0.1212 | 0.9335 |
| 0.1245 | 6.0 | 1908 | 0.1118 | 0.9381 |
| 0.1143 | 7.0 | 2226 | 0.1063 | 0.9432 |
| 0.1077 | 8.0 | 2544 | 0.1030 | 0.9426 |
| 0.1041 | 9.0 | 2862 | 0.1012 | 0.9403 |
| 0.1021 | 10.0 | 3180 | 0.1005 | 0.94 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.94, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,081 |
mankowitz/bert-large-uncased_med-ner
|
mankowitz
| null |
[
"Transformers PHP",
"onnx",
"bert",
"region:us"
] | 2024-09-03T10:20:02Z |
2024-09-03T10:35:07+00:00
| 8 | 0 |
---
library_name: Transformers PHP
tags:
- onnx
---
https://huggingface.co/samrawal/bert-large-uncased_med-ner with ONNX weights to be compatible with Transformers PHP
A Named Entity Recognition model for medication entities (`medication name`, `dosage`, `duration`, `frequency`, `reason`).
The model has been trained on the i2b2 (now n2c2) dataset for the 2009 - Medication task. Please visit the n2c2 site to request access to the dataset.
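For a quick sanity check outside PHP, the unconverted source checkpoint can be run with the Python `transformers` pipeline. This is a sketch only, not part of this repo's intended Transformers PHP workflow, and the sample sentence is invented:

```python
from transformers import pipeline

# The original PyTorch checkpoint that this repo's ONNX weights were exported from
ner = pipeline(
    "token-classification",
    model="samrawal/bert-large-uncased_med-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)

print(ner("Take ibuprofen 200mg twice daily for 5 days for back pain."))
```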
---
| null |
BioNLP
|
https://huggingface.co/samrawal/bert-large-uncased_med-ner with ONNX weights to be compatible with Transformers PHP

A Named Entity Recognition model for medication entities (`medication name`, `dosage`, `duration`, `frequency`, `reason`).

The model has been trained on the i2b2 (now n2c2) dataset for the 2009 - Medication task. Please visit the n2c2 site to request access to the dataset.
---
|
{"library_name": "Transformers PHP", "tags": ["onnx"]}
|
task
|
[
"NAMED_ENTITY_RECOGNITION"
] | 45,082 |
espnet/owsm_ctc_v3.2_ft_1B
|
espnet
|
automatic-speech-recognition
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"speech-translation",
"language-identification",
"multilingual",
"dataset:owsm_v3.2_ctc",
"arxiv:2406.09282",
"arxiv:2401.16658",
"arxiv:2309.13876",
"base_model:espnet/owsm_ctc_v3.2_ft_1B",
"base_model:finetune:espnet/owsm_ctc_v3.2_ft_1B",
"license:cc-by-4.0",
"region:us"
] | 2024-09-24T18:25:20Z |
2025-02-06T19:35:26+00:00
| 113 | 3 |
---
base_model:
- espnet/owsm_ctc_v3.2_ft_1B
datasets:
- owsm_v3.2_ctc
language: multilingual
license: cc-by-4.0
tags:
- espnet
- audio
- automatic-speech-recognition
- speech-translation
- language-identification
---
[OWSM-CTC](https://aclanthology.org/2024.acl-long.549/) (Peng et al., ACL 2024) is an encoder-only speech foundation model based on hierarchical multi-task self-conditioned CTC.
This model is trained on 180k hours of public audio data for multilingual speech recognition, any-to-any speech translation, and language identification, which follows the design of the project, [Open Whisper-style Speech Model (OWSM)](https://www.wavlab.org/activities/2024/owsm/).
This model is initialized with [OWSM-CTC v3.1](https://huggingface.co/pyf98/owsm_ctc_v3.1_1B) and then fine-tuned on [v3.2 data](https://arxiv.org/abs/2406.09282) for 225k steps.
To use the pre-trained model, please install `espnet` and `espnet_model_zoo`. The requirements are:
```
librosa
torch
espnet
espnet_model_zoo
```
**The recipe can be found in ESPnet:** https://github.com/espnet/espnet/tree/master/egs2/owsm_ctc_v3.1/s2t1
### Example script for batched inference
`Speech2TextGreedySearch` now provides a unified batched inference method `batch_decode`. It performs CTC greedy decoding for a batch of short-form or long-form audio inputs. If an input is shorter than 30s, it will be padded to 30s; otherwise it will be split into overlapping segments (same as the "long-form ASR/ST" method below).
```python
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
use_flash_attn=False, # set to True for better efficiency if flash attn is installed and dtype is float16 or bfloat16
lang_sym='<eng>',
task_sym='<asr>',
)
res = s2t.batch_decode(
"audio.wav", # a single audio (path or 1-D array/tensor) as input
batch_size=16,
context_len_in_secs=4,
) # res is a single str, i.e., the predicted text without special tokens
res = s2t.batch_decode(
["audio1.wav", "audio2.wav", "audio3.wav"], # a list of audios as input
batch_size=16,
context_len_in_secs=4,
) # res is a list of str
# Please check the code of `batch_decode` for all supported inputs
```
### Example script for short-form ASR/ST/LID
Our models are trained on 16kHz audio with a fixed duration of 30s. When using the pre-trained model, please ensure the input speech is 16kHz and pad or truncate it to 30s.
```python
import librosa
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
# NOTE: OWSM-CTC is trained on 16kHz audio with a fixed 30s duration. Please ensure your input has the correct sample rate; otherwise resample it to 16k before feeding it to the model
speech, rate = librosa.load("xxx.wav", sr=16000)
speech = librosa.util.fix_length(speech, size=(16000 * 30))
res = s2t(speech)[0]
print(res)
```
### Example script for long-form ASR/ST
```python
import soundfile as sf
import torch
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
context_len_in_secs = 4 # left and right context when doing buffered inference
batch_size = 32 # depends on the GPU memory
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device='cuda' if torch.cuda.is_available() else 'cpu',
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
speech, rate = sf.read(
"xxx.wav"
)
text = s2t.decode_long_batched_buffered(
speech,
batch_size=batch_size,
context_len_in_secs=context_len_in_secs,
)
print(text)
```
### Example of CTC forced alignment using `ctc-segmentation`
CTC segmentation can be efficiently applied to audio of an arbitrary length.
```python
import soundfile as sf
from espnet2.bin.s2t_ctc_align import CTCSegmentation
from espnet_model_zoo.downloader import ModelDownloader
# Download model first
d = ModelDownloader()
downloaded = d.download_and_unpack("espnet/owsm_ctc_v3.2_ft_1B")
aligner = CTCSegmentation(
**downloaded,
fs=16000,
ngpu=1,
batch_size=32, # batched parallel decoding; reduce it if your GPU memory is smaller
kaldi_style_text=True,
time_stamps="auto", # "auto" can be more accurate than "fixed" when converting token index to timestamp
lang_sym="<eng>",
task_sym="<asr>",
context_len_in_secs=2, # left and right context in buffered decoding
)
speech, rate = sf.read(
"./test_utils/ctc_align_test.wav"
)
print(f"speech duration: {len(speech) / rate : .2f} seconds")
text = """
utt1 THE SALE OF THE HOTELS
utt2 IS PART OF HOLIDAY'S STRATEGY
utt3 TO SELL OFF ASSETS
utt4 AND CONCENTRATE ON PROPERTY MANAGEMENT
"""
segments = aligner(speech, text)
print(segments)
```
## Citations
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
| null |
Non_BioNLP
|
[OWSM-CTC](https://aclanthology.org/2024.acl-long.549/) (Peng et al., ACL 2024) is an encoder-only speech foundation model based on hierarchical multi-task self-conditioned CTC.
This model is trained on 180k hours of public audio data for multilingual speech recognition, any-to-any speech translation, and language identification, which follows the design of the project, [Open Whisper-style Speech Model (OWSM)](https://www.wavlab.org/activities/2024/owsm/).
This model is initialized with [OWSM-CTC v3.1](https://huggingface.co/pyf98/owsm_ctc_v3.1_1B) and then fine-tuned on [v3.2 data](https://arxiv.org/abs/2406.09282) for 225k steps.
To use the pre-trained model, please install `espnet` and `espnet_model_zoo`. The requirements are:
```
librosa
torch
espnet
espnet_model_zoo
```
**The recipe can be found in ESPnet:** https://github.com/espnet/espnet/tree/master/egs2/owsm_ctc_v3.1/s2t1
### Example script for batched inference
`Speech2TextGreedySearch` now provides a unified batched inference method `batch_decode`. It performs CTC greedy decoding for a batch of short-form or long-form audio inputs. If an input is shorter than 30s, it will be padded to 30s; otherwise it will be split into overlapping segments (same as the "long-form ASR/ST" method below).
```python
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
use_flash_attn=False, # set to True for better efficiency if flash attn is installed and dtype is float16 or bfloat16
lang_sym='<eng>',
task_sym='<asr>',
)
res = s2t.batch_decode(
"audio.wav", # a single audio (path or 1-D array/tensor) as input
batch_size=16,
context_len_in_secs=4,
) # res is a single str, i.e., the predicted text without special tokens
res = s2t.batch_decode(
["audio1.wav", "audio2.wav", "audio3.wav"], # a list of audios as input
batch_size=16,
context_len_in_secs=4,
) # res is a list of str
# Please check the code of `batch_decode` for all supported inputs
```
### Example script for short-form ASR/ST/LID
Our models are trained on 16kHz audio with a fixed duration of 30s. When using the pre-trained model, please ensure the input speech is 16kHz and pad or truncate it to 30s.
```python
import librosa
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device="cuda",
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
# NOTE: OWSM-CTC is trained on 16kHz audio with a fixed 30s duration. Please ensure your input has the correct sample rate; otherwise resample it to 16k before feeding it to the model
speech, rate = librosa.load("xxx.wav", sr=16000)
speech = librosa.util.fix_length(speech, size=(16000 * 30))
res = s2t(speech)[0]
print(res)
```
### Example script for long-form ASR/ST
```python
import soundfile as sf
import torch
from espnet2.bin.s2t_inference_ctc import Speech2TextGreedySearch
context_len_in_secs = 4 # left and right context when doing buffered inference
batch_size = 32 # depends on the GPU memory
s2t = Speech2TextGreedySearch.from_pretrained(
"espnet/owsm_ctc_v3.2_ft_1B",
device='cuda' if torch.cuda.is_available() else 'cpu',
generate_interctc_outputs=False,
lang_sym='<eng>',
task_sym='<asr>',
)
speech, rate = sf.read(
"xxx.wav"
)
text = s2t.decode_long_batched_buffered(
speech,
batch_size=batch_size,
context_len_in_secs=context_len_in_secs,
)
print(text)
```
### Example of CTC forced alignment using `ctc-segmentation`
CTC segmentation can be efficiently applied to audio of an arbitrary length.
```python
import soundfile as sf
from espnet2.bin.s2t_ctc_align import CTCSegmentation
from espnet_model_zoo.downloader import ModelDownloader
# Download model first
d = ModelDownloader()
downloaded = d.download_and_unpack("espnet/owsm_ctc_v3.2_ft_1B")
aligner = CTCSegmentation(
**downloaded,
fs=16000,
ngpu=1,
batch_size=32, # batched parallel decoding; reduce it if your GPU memory is smaller
kaldi_style_text=True,
time_stamps="auto", # "auto" can be more accurate than "fixed" when converting token index to timestamp
lang_sym="<eng>",
task_sym="<asr>",
context_len_in_secs=2, # left and right context in buffered decoding
)
speech, rate = sf.read(
"./test_utils/ctc_align_test.wav"
)
print(f"speech duration: {len(speech) / rate : .2f} seconds")
text = """
utt1 THE SALE OF THE HOTELS
utt2 IS PART OF HOLIDAY'S STRATEGY
utt3 TO SELL OFF ASSETS
utt4 AND CONCENTRATE ON PROPERTY MANAGEMENT
"""
segments = aligner(speech, text)
print(segments)
```
## Citations
#### OWSM-CTC
```BibTex
@inproceedings{owsm-ctc,
title = "{OWSM}-{CTC}: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification",
author = "Peng, Yifan and
Sudo, Yui and
Shakeel, Muhammad and
Watanabe, Shinji",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
year = "2024",
month= {8},
url = "https://aclanthology.org/2024.acl-long.549",
}
```
#### OWSM v3.1 and v3.2
```BibTex
@inproceedings{owsm-v32,
title={On the Effects of Heterogeneous Data Sources on Speech-to-Text Foundation Models},
author={Jinchuan Tian and Yifan Peng and William Chen and Kwanghee Choi and Karen Livescu and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2406.09282"
}
@inproceedings{owsm-v31,
title={{OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer}},
author={Yifan Peng and Jinchuan Tian and William Chen and Siddhant Arora and Brian Yan and Yui Sudo and Muhammad Shakeel and Kwanghee Choi and Jiatong Shi and Xuankai Chang and Jee-weon Jung and Shinji Watanabe},
booktitle={Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH)},
year={2024},
month={9},
pdf="https://arxiv.org/pdf/2401.16658",
}
```
#### Initial OWSM (v1, v2, v3)
```BibTex
@inproceedings{owsm,
title={Reproducing Whisper-Style Training Using An Open-Source Toolkit And Publicly Available Data},
author={Yifan Peng and Jinchuan Tian and Brian Yan and Dan Berrebbi and Xuankai Chang and Xinjian Li and Jiatong Shi and Siddhant Arora and William Chen and Roshan Sharma and Wangyou Zhang and Yui Sudo and Muhammad Shakeel and Jee-weon Jung and Soumi Maiti and Shinji Watanabe},
booktitle={Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2023},
month={12},
pdf="https://arxiv.org/pdf/2309.13876",
}
```
|
{"base_model": ["espnet/owsm_ctc_v3.2_ft_1B"], "datasets": ["owsm_v3.2_ctc"], "language": "multilingual", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition", "speech-translation", "language-identification"]}
|
task
|
[
"TRANSLATION"
] | 45,083 |
Zardian/distilbert-base-uncased-finetuned-emotion
|
Zardian
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-05T11:04:53Z |
2024-03-05T11:09:57+00:00
| 4 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.92
name: Accuracy
- type: f1
value: 0.9198468921108184
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.92
- F1: 0.9198
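For reference, here is a minimal inference sketch with the `transformers` pipeline; the repo id is inferred from this card, and the example input and output are assumptions:

```python
from transformers import pipeline

# Repo id inferred from this card (assumption); adjust to the actual checkpoint path.
classifier = pipeline(
    "text-classification",
    model="Zardian/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- the label set follows the emotion dataset
```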
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
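Expressed as `transformers.TrainingArguments`, the configuration above maps roughly as follows (a sketch; `output_dir` is an assumption, and the Adam betas/epsilon match the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```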
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8598 | 1.0 | 250 | 0.3359 | 0.9035 | 0.9024 |
| 0.2677 | 2.0 | 500 | 0.2337 | 0.92 | 0.9198 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.92
- F1: 0.9198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8598 | 1.0 | 250 | 0.3359 | 0.9035 | 0.9024 |
| 0.2677 | 2.0 | 500 | 0.2337 | 0.92 | 0.9198 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.92, "name": "Accuracy"}, {"type": "f1", "value": 0.9198468921108184, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,084 |
Helsinki-NLP/opus-mt-is-es
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:58:33+00:00
| 31 | 0 |
---
language:
- is
- es
license: apache-2.0
tags:
- translation
---
### isl-spa
* source group: Icelandic
* target group: Spanish
* OPUS readme: [isl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.spa | 51.2 | 0.665 |
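As a usage sketch (not part of the original card), the checkpoint can be loaded with the standard MarianMT classes; the example sentence is an assumption:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-is-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an Icelandic sentence into Spanish.
batch = tokenizer(["Ég tala smá íslensku."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```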
### System Info:
- hf_name: isl-spa
- source_languages: isl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'es']
- src_constituents: {'isl'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: spa
- short_pair: is-es
- chrF2_score: 0.665
- bleu: 51.2
- brevity_penalty: 0.985
- ref_len: 1229.0
- src_name: Icelandic
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: es
- prefer_old: False
- long_pair: isl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| null |
Non_BioNLP
|
### isl-spa
* source group: Icelandic
* target group: Spanish
* OPUS readme: [isl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.spa | 51.2 | 0.665 |
### System Info:
- hf_name: isl-spa
- source_languages: isl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'es']
- src_constituents: {'isl'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: spa
- short_pair: is-es
- chrF2_score: 0.665
- bleu: 51.2
- brevity_penalty: 0.985
- ref_len: 1229.0
- src_name: Icelandic
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: es
- prefer_old: False
- long_pair: isl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
{"language": ["is", "es"], "license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,085 |
Lvxue/distilled-mt5-small-010099-10
|
Lvxue
|
text2text-generation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-10T05:21:18Z |
2022-08-10T06:38:53+00:00
| 10 | 0 |
---
datasets:
- wmt16
language:
- en
- ro
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: distilled-mt5-small-010099-10
results:
- task:
type: translation
name: Translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- type: bleu
value: 6.1705
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099-10
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9685
- Bleu: 6.1705
- Gen Len: 50.5663
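As an illustrative inference sketch, the checkpoint can be loaded with the standard seq2seq classes; the repo id is inferred from this card, and the Romanian example sentence and decoding settings are assumptions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Lvxue/distilled-mt5-small-010099-10"  # inferred from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# wmt16 ro-en: Romanian source, English target.
inputs = tokenizer("Premierul a anunţat decizia luni.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```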
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099-10
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9685
- Bleu: 6.1705
- Gen Len: 50.5663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
{"datasets": ["wmt16"], "language": ["en", "ro"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilled-mt5-small-010099-10", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "wmt16 ro-en", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 6.1705, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,086 |
pszemraj/pythia-31m-simplewiki-scratch-bf16
|
pszemraj
|
text-generation
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"dataset:pszemraj/simple_wikipedia_LM",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-15T03:26:05Z |
2023-11-18T12:58:54+00:00
| 2,080 | 0 |
---
datasets:
- pszemraj/simple_wikipedia_LM
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- generated_from_trainer
inference:
parameters:
max_new_tokens: 64
do_sample: true
repetition_penalty: 1.1
no_repeat_ngram_size: 5
guidance_scale: 1.01
eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
example_title: El Microondas
- text: A meme is
example_title: meme
- text: Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
He chose her because she had
example_title: Coreference resolution
- text: 'On a shelf, there are five books: a gray book, a red book, a purple book,
a blue book, and a black book'
example_title: Logic puzzles
- text: The two men running to become New York City's next mayor will face off in
their first debate Wednesday night
example_title: Reading comprehension
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-31m-simplewiki-scratch-bf16
Trained from a randomly initialized config based on [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) for 3 epochs in bf16
It achieves the following results on the evaluation set:
- Loss: 4.1763
- Accuracy: 0.3676
## Model description
Trained with bf16 (the previous version used fp32)
## Intended uses & limitations
More information needed
## Training and evaluation data
```
***** eval metrics *****
epoch = 2.99
eval_accuracy = 0.3723
eval_loss = 4.1155
eval_runtime = 0:00:14.44
eval_samples = 500
eval_samples_per_second = 34.602
eval_steps_per_second = 17.301
perplexity = 61.2811
```
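A minimal generation sketch using a subset of the inference parameters advertised in this repo's widget config; the prompt is one of the card's example prompts, and device placement is omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/pythia-31m-simplewiki-scratch-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("A meme is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```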
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.8617 | 0.45 | 100 | 5.5276 | 0.2451 |
| 5.2782 | 0.9 | 200 | 4.9596 | 0.2965 |
| 4.9996 | 1.35 | 300 | 4.6412 | 0.3310 |
| 4.6292 | 1.8 | 400 | 4.4344 | 0.3485 |
| 4.5339 | 2.25 | 500 | 4.2875 | 0.3600 |
| 4.5214 | 2.7 | 600 | 4.1763 | 0.3676 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__pythia-31m-simplewiki-scratch-bf16)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.63 |
| ARC (25-shot) | 22.78 |
| HellaSwag (10-shot) | 25.61 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 49.65 |
| Winogrande (5-shot) | 50.51 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.72 |
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-31m-simplewiki-scratch-bf16
Trained from a randomly initialized config based on [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) for 3 epochs in bf16
It achieves the following results on the evaluation set:
- Loss: 4.1763
- Accuracy: 0.3676
## Model description
Trained with bf16 (the previous version used fp32)
## Intended uses & limitations
More information needed
## Training and evaluation data
```
***** eval metrics *****
epoch = 2.99
eval_accuracy = 0.3723
eval_loss = 4.1155
eval_runtime = 0:00:14.44
eval_samples = 500
eval_samples_per_second = 34.602
eval_steps_per_second = 17.301
perplexity = 61.2811
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.8617 | 0.45 | 100 | 5.5276 | 0.2451 |
| 5.2782 | 0.9 | 200 | 4.9596 | 0.2965 |
| 4.9996 | 1.35 | 300 | 4.6412 | 0.3310 |
| 4.6292 | 1.8 | 400 | 4.4344 | 0.3485 |
| 4.5339 | 2.25 | 500 | 4.2875 | 0.3600 |
| 4.5214 | 2.7 | 600 | 4.1763 | 0.3676 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__pythia-31m-simplewiki-scratch-bf16)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.63 |
| ARC (25-shot) | 22.78 |
| HellaSwag (10-shot) | 25.61 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 49.65 |
| Winogrande (5-shot) | 50.51 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.72 |
|
{"datasets": ["pszemraj/simple_wikipedia_LM"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-generation", "tags": ["generated_from_trainer"], "inference": {"parameters": {"max_new_tokens": 64, "do_sample": true, "repetition_penalty": 1.1, "no_repeat_ngram_size": 5, "guidance_scale": 1.01, "eta_cutoff": 0.001}}, "widget": [{"text": "My name is El Microondas the Wise and", "example_title": "El Microondas"}, {"text": "A meme is", "example_title": "meme"}, {"text": "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had", "example_title": "Coreference resolution"}, {"text": "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book", "example_title": "Logic puzzles"}, {"text": "The two men running to become New York City's next mayor will face off in their first debate Wednesday night", "example_title": "Reading comprehension"}]}
|
task
|
[
"COREFERENCE_RESOLUTION"
] | 45,087 |
RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-awq
|
RichardErkhov
| null |
[
"safetensors",
"llama",
"arxiv:2405.04324",
"4-bit",
"awq",
"region:us"
] | 2024-12-15T16:07:01Z |
2024-12-15T16:08:04+00:00
| 15 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-3b-code-base-128k - AWQ
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-3b-code-base-128k/
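As a loading sketch (not from the original card): recent `transformers` releases can load AWQ checkpoints directly when the `autoawq` package is installed. The prompt below is borrowed from the original card's example, and the generation settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/ibm-granite_-_granite-3b-code-base-128k-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ inference requires a CUDA GPU; device_map places the weights automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def generate():", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```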
Original model description:
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-3b-code-base-128k
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis (Python)
metrics:
- name: pass@1
type: pass@1
value: 36.0
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis (Average)
metrics:
- name: pass@1
type: pass@1
value: 30.5
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain (Average)
metrics:
- name: pass@1
type: pass@1
value: 22.4
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix (Average)
metrics:
- name: pass@1
type: pass@1
value: 19.9
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Python@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 40.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (C++@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 36.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Java@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 37.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (TypeScript@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 27.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Rust@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 29.0
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@4K
type: Exact Match@4K
value: 54.6
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@8K
type: Exact Match@8K
value: 56.8
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@16K
type: Exact Match@16K
value: 52.2
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@32K
type: Exact Match@32K
value: 57.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@4K
type: Exact Match@4K
value: 39.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@8K
type: Exact Match@8K
value: 46.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@16K
type: Exact Match@16K
value: 43.1
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@32K
type: Exact Match@32K
value: 45.3
verified: false
---

# Granite-3B-Code-Base-128K
## Model Summary
**Granite-3B-Code-Base-128K** extends the context length of Granite-3B-Code-Base from 2K to 128K with continual pretraining using the original training data, but with repository-level file packing and per-language length upsampling, which we found to be critical for long-context pretraining.
We adopt a progressive training strategy in which we doubled the context window until it reached the desired length of 128K by appropriately adjusting the RoPE theta. We trained on 4B tokens total for all stages, which is only 0.1% of Granite-3B-Code-Base's original pre-training data.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2405.04324)
- **Release Date**: July 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
Prominent enterprise use cases of LLMs in software engineering productivity with 128K context length support include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages.
### Generation
This is a simple example of how to use **Granite-3B-Code-Base-128K** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-3b-code-base-128k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
## Training Data
Starting from the base Granite model, this model was further pretrained on repository-level code data with per-language context-length oversampling, allowing it to effectively utilize up to 128K tokens of context. This continued training stage focused on a curated selection of programming languages, such as Python, C, C++, Go, Java, JavaScript, and TypeScript.
## Infrastructure
We train the Granite Code models using two of IBM's super computing clusters, namely Vela and Blue Vela, both outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-3B-Code-Base-128K** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment; therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-3B-Code-Base-128K** model with ethical intentions and in a responsible way.
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-3b-code-base-128k - AWQ
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-3b-code-base-128k/
Original model description:
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-3b-code-base-128k
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis (Python)
metrics:
- name: pass@1
type: pass@1
value: 36.0
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis (Average)
metrics:
- name: pass@1
type: pass@1
value: 30.5
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain (Average)
metrics:
- name: pass@1
type: pass@1
value: 22.4
verified: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix (Average)
metrics:
- name: pass@1
type: pass@1
value: 19.9
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Python@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 40.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (C++@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 36.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Java@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 37.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (TypeScript@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 27.0
verified: false
- task:
type: text-generation
dataset:
type: repoqa
name: RepoQA (Rust@16K)
metrics:
- name: pass@1 (thresh=0.5)
type: pass@1 (thresh=0.5)
value: 29.0
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@4K
type: Exact Match@4K
value: 54.6
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@8K
type: Exact Match@8K
value: 56.8
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@16K
type: Exact Match@16K
value: 52.2
verified: false
- task:
type: text-generation
dataset:
type: lcc
name: LCC (Balanced)
metrics:
- name: Exact Match@32K
type: Exact Match@32K
value: 57.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@4K
type: Exact Match@4K
value: 39.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@8K
type: Exact Match@8K
value: 46.8
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@16K
type: Exact Match@16K
value: 43.1
verified: false
- task:
type: text-generation
dataset:
type: repobench
name: RepoBench-P (Balanced)
metrics:
- name: Exact Match@32K
type: Exact Match@32K
value: 45.3
verified: false
---

# Granite-3B-Code-Base-128K
## Model Summary
**Granite-3B-Code-Base-128K** extends the context length of Granite-3B-Code-Base from 2K to 128K with continual pretraining using the original training data, but with repository-level file packing and per-language length upsampling, which we found to be critical for long-context pretraining.
We adopt a progressive training strategy in which we doubled the context window until it reached the desired length of 128K by appropriately adjusting the RoPE theta. We trained on 4B tokens total for all stages, which is only 0.1% of Granite-3B-Code-Base's original pre-training data.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2405.04324)
- **Release Date**: July 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
Prominent enterprise use cases of LLMs in software engineering productivity with 128K context length support include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages.
### Generation
This is a simple example of how to use **Granite-3B-Code-Base-128K** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-3b-code-base-128k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
## Training Data
Starting from the base Granite model, this model was further pretrained on repository-level code data with per-language context-length oversampling, allowing it to effectively utilize up to 128K tokens of context. This continued training stage focused on a curated selection of programming languages, such as Python, C, C++, Go, Java, JavaScript, and TypeScript.
## Infrastructure
We train the Granite Code models using two of IBM's super computing clusters, namely Vela and Blue Vela, both outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information, as the generated code is not guaranteed to work as intended. The **Granite-3B-Code-Base-128K** model is no exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment; therefore it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use the **Granite-3B-Code-Base-128K** model with ethical intentions and in a responsible way.
|
{}
|
task
|
[
"TRANSLATION"
] | 45,088 |
Propicto/t2p-nmt-commonvoice
|
Propicto
|
translation
|
[
"transformers",
"NMT",
"commonvoice",
"pytorch",
"pictograms",
"translation",
"fr",
"license:apache-2.0",
"region:us"
] | 2024-06-10T15:17:39Z |
2024-07-05T11:39:21+00:00
| 0 | 0 |
---
language:
- fr
library_name: transformers
license: apache-2.0
metrics:
- sacrebleu
tags:
- NMT
- commonvoice
- pytorch
- pictograms
- translation
inference: false
---
# t2p-nmt-commonvoice
*t2p-nmt-commonvoice* is a text-to-pictograms translation model built by training the [NMT](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/README.md) model from scratch on a dataset of pairs of transcriptions and pictogram token sequences (each token is linked to a pictogram image from [ARASAAC](https://arasaac.org/)).
The model is used only for **inference**.
## Training details
The model was trained with [Fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/README.md).
### Datasets
The [Propicto-commonvoice dataset](https://www.ortolang.fr/market/corpora/propicto) is used, which was created from the CommonVoice v.15.0 corpus.
This dataset was built with the method presented in the research paper titled ["A Multimodal French Corpus of Aligned Speech, Text, and Pictogram Sequences for Speech-to-Pictogram Machine Translation"](https://aclanthology.org/2024.lrec-main.76/) at LREC-Coling 2024. The dataset was split into training, validation, and test sets.
| **Split** | **Number of utterances** |
|:-----------:|:-----------------------:|
| train | 527,390 |
| valid | 16,124 |
| test | 16,120 |
### Parameters
These are the arguments used in the training pipeline:
```bash
fairseq-train \
data-bin/commonvoice.tokenized.fr-frp \
--arch transformer_iwslt_de_en --share-decoder-input-output-embed \
--optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
--lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
--dropout 0.3 --weight-decay 0.0001 \
--save-dir exp_commonvoice/checkpoints/nmt_fr_frp_commonvoice \
--criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
--max-tokens 4096 \
--eval-bleu \
--eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \
--eval-bleu-detok moses \
--eval-bleu-remove-bpe \
--eval-bleu-print-samples \
--best-checkpoint-metric bleu --maximize-best-checkpoint-metric \
--max-epoch 40 \
--keep-best-checkpoints 5 \
--keep-last-epochs 5
```
### Evaluation
The model was evaluated with sacreBLEU, where we compared the reference pictogram translation with the model hypothesis.
```bash
fairseq-generate exp_commonvoice/data-bin/commonvoice.tokenized.fr-frp \
--path exp_commonvoice/checkpoints/nmt_fr_frp_commonvoice/checkpoint.best_bleu_86.0600.pt \
--batch-size 128 --beam 5 --remove-bpe > gen_cv.out
```
The output file contains the following information:
```txt
S-2724 la planète terre
T-2724 le planète_terre
H-2724 -0.08702446520328522 le planète_terre
D-2724 -0.08702446520328522 le planète_terre
P-2724 -0.1058 -0.0340 -0.1213
Generate test with beam=5: BLEU4 = 82.60, 92.5/85.5/79.5/74.1 (BP=1.000, ratio=1.027, syslen=138507, reflen=134811)
```
### Results
Comparison to other translation models :
| **Model** | **validation** | **test** |
|:-----------:|:-----------------------:|:-----------------------:|
| t2p-t5-large-commonvoice | 86.3 | 86.5 |
| **t2p-nmt-commonvoice** | 86.0 | 82.6 |
| t2p-mbart-large-cc25-commonvoice | 72.3 | 72.3 |
| t2p-nllb-200-distilled-600M-commonvoice | **87.4** | **87.6** |
### Environmental Impact
Training was performed on a single Nvidia V100 GPU with 32 GB of memory and took around 2 hours in total.
## Using t2p-nmt-commonvoice model
The scripts to use the *t2p-nmt-commonvoice* model are located in the [speech-to-pictograms GitHub repository](https://github.com/macairececile/speech-to-pictograms).
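If the checkpoint and binarized data are available locally, inference can also be sketched through fairseq's Python hub interface; the paths below reuse the directories from the training commands above and are assumptions:

```python
from fairseq.models.transformer import TransformerModel

# Hypothetical local paths; adjust to where the checkpoint and data-bin live.
model = TransformerModel.from_pretrained(
    "exp_commonvoice/checkpoints/nmt_fr_frp_commonvoice",
    checkpoint_file="checkpoint.best_bleu_86.0600.pt",
    data_name_or_path="data-bin/commonvoice.tokenized.fr-frp",
)
model.eval()

# Translate a French utterance into a pictogram token sequence.
print(model.translate("la planète terre"))
```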
## Information
- **Language(s):** French
- **License:** Apache-2.0
- **Developed by:** Cécile Macaire
- **Funded by**
- GENCI-IDRIS (Grant 2023-AD011013625R1)
- PROPICTO ANR-20-CE93-0005
- **Authors**
- Cécile Macaire
- Chloé Dion
- Emmanuelle Esperança-Rodier
- Benjamin Lecouteux
- Didier Schwab
## Citation
If you use this model for your own research work, please cite as follows:
```bibtex
@inproceedings{macaire_jeptaln2024,
title = {{Approches cascade et de bout-en-bout pour la traduction automatique de la parole en pictogrammes}},
author = {Macaire, C{\'e}cile and Dion, Chlo{\'e} and Schwab, Didier and Lecouteux, Benjamin and Esperan{\c c}a-Rodier, Emmanuelle},
url = {https://inria.hal.science/hal-04623007},
booktitle = {{35{\`e}mes Journ{\'e}es d'{\'E}tudes sur la Parole (JEP 2024) 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles (TALN 2024) 26{\`e}me Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2024)}},
address = {Toulouse, France},
publisher = {{ATALA \& AFPC}},
volume = {1 : articles longs et prises de position},
pages = {22-35},
year = {2024}
}
```
| null |
Non_BioNLP
|
# t2p-nmt-commonvoice
*t2p-nmt-commonvoice* is a text-to-pictograms translation model built by training the [NMT](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/README.md) model from scratch on a dataset of pairs of transcriptions and pictogram token sequences (each token is linked to a pictogram image from [ARASAAC](https://arasaac.org/)).
The model is used only for **inference**.
## Training details
The model was trained with [Fairseq](https://github.com/facebookresearch/fairseq/blob/main/examples/translation/README.md).
### Datasets
The [Propicto-commonvoice dataset](https://www.ortolang.fr/market/corpora/propicto) is used, which was created from the CommonVoice v.15.0 corpus.
This dataset was built with the method presented in the research paper titled ["A Multimodal French Corpus of Aligned Speech, Text, and Pictogram Sequences for Speech-to-Pictogram Machine Translation"](https://aclanthology.org/2024.lrec-main.76/) at LREC-Coling 2024. The dataset was split into training, validation, and test sets.
| **Split** | **Number of utterances** |
|:-----------:|:-----------------------:|
| train | 527,390 |
| valid | 16,124 |
| test | 16,120 |
### Parameters
These are the arguments used in the training pipeline:
```bash
fairseq-train \
data-bin/commonvoice.tokenized.fr-frp \
--arch transformer_iwslt_de_en --share-decoder-input-output-embed \
--optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
--lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
--dropout 0.3 --weight-decay 0.0001 \
--save-dir exp_commonvoice/checkpoints/nmt_fr_frp_commonvoice \
--criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
--max-tokens 4096 \
--eval-bleu \
--eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \
--eval-bleu-detok moses \
--eval-bleu-remove-bpe \
--eval-bleu-print-samples \
--best-checkpoint-metric bleu --maximize-best-checkpoint-metric \
--max-epoch 40 \
--keep-best-checkpoints 5 \
--keep-last-epochs 5
```
### Evaluation
The model was evaluated with sacreBLEU, where we compared the reference pictogram translation with the model hypothesis.
```bash
fairseq-generate exp_commonvoice/data-bin/commonvoice.tokenized.fr-frp \
--path exp_commonvoice/checkpoints/nmt_fr_frp_commonvoice/checkpoint.best_bleu_86.0600.pt \
--batch-size 128 --beam 5 --remove-bpe > gen_cv.out
```
The output file contains the following information:
```txt
S-2724 la planète terre
T-2724 le planète_terre
H-2724 -0.08702446520328522 le planète_terre
D-2724 -0.08702446520328522 le planète_terre
P-2724 -0.1058 -0.0340 -0.1213
Generate test with beam=5: BLEU4 = 82.60, 92.5/85.5/79.5/74.1 (BP=1.000, ratio=1.027, syslen=138507, reflen=134811)
```
### Results
Comparison to other translation models :
| **Model** | **validation** | **test** |
|:-----------:|:-----------------------:|:-----------------------:|
| t2p-t5-large-commonvoice | 86.3 | 86.5 |
| **t2p-nmt-commonvoice** | 86.0 | 82.6 |
| t2p-mbart-large-cc25-commonvoice | 72.3 | 72.3 |
| t2p-nllb-200-distilled-600M-commonvoice | **87.4** | **87.6** |
### Environmental Impact
Training was performed on a single Nvidia V100 GPU with 32 GB of memory and took around 2 hours in total.
## Using t2p-nmt-commonvoice model
The scripts to use the *t2p-nmt-commonvoice* model are located in the [speech-to-pictograms GitHub repository](https://github.com/macairececile/speech-to-pictograms).
## Information
- **Language(s):** French
- **License:** Apache-2.0
- **Developed by:** Cécile Macaire
- **Funded by**
- GENCI-IDRIS (Grant 2023-AD011013625R1)
- PROPICTO ANR-20-CE93-0005
- **Authors**
- Cécile Macaire
- Chloé Dion
- Emmanuelle Esperança-Rodier
- Benjamin Lecouteux
- Didier Schwab
## Citation
If you use this model for your own research work, please cite as follows:
```bibtex
@inproceedings{macaire_jeptaln2024,
title = {{Approches cascade et de bout-en-bout pour la traduction automatique de la parole en pictogrammes}},
author = {Macaire, C{\'e}cile and Dion, Chlo{\'e} and Schwab, Didier and Lecouteux, Benjamin and Esperan{\c c}a-Rodier, Emmanuelle},
url = {https://inria.hal.science/hal-04623007},
booktitle = {{35{\`e}mes Journ{\'e}es d'{\'E}tudes sur la Parole (JEP 2024) 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles (TALN 2024) 26{\`e}me Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2024)}},
address = {Toulouse, France},
publisher = {{ATALA \& AFPC}},
volume = {1 : articles longs et prises de position},
pages = {22-35},
year = {2024}
}
```
|
{"language": ["fr"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["sacrebleu"], "tags": ["NMT", "commonvoice", "pytorch", "pictograms", "translation"], "inference": false}
|
task
|
[
"TRANSLATION"
] | 45,089 |
p-christ/Qwen2-VL-7B-Instruct-AWQ
|
p-christ
|
image-text-to-text
|
[
"safetensors",
"qwen2_vl",
"multimodal",
"image-text-to-text",
"conversational",
"en",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | 2024-09-24T13:07:44Z |
2024-09-24T13:07:45+00:00
| 7 | 0 |
---
base_model: Qwen/Qwen2-VL-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- multimodal
---
# Qwen2-VL-7B-Instruct-AWQ
## Introduction
We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
### What’s New in Qwen2-VL?
#### Key Enhancements:
* **SoTA understanding of images of various resolution & ratio**: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
* **Understanding videos of 20min+**: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
* **Agent that can operate your mobiles, robots, etc.**: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.
* **Multilingual Support**: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
#### Model Architecture Updates:
* **Naive Dynamic Resolution**: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/qwen2_vl.jpg" width="80%"/>
<p>
* **Multimodal Rotary Position Embedding (M-ROPE)**: Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.
<p align="center">
<img src="http://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen2-VL/mrope.png" width="80%"/>
<p>
We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our [Blog](https://qwenlm.github.io/blog/qwen2-vl/) and [GitHub](https://github.com/QwenLM/Qwen2-VL).
### Benchmark
#### Performance of Quantized Models
This section reports the generation performance of quantized models (including GPTQ and AWQ) of the Qwen2-VL series. Specifically, we report:
- MMMU_VAL (Accuracy)
- DocVQA_VAL (Accuracy)
- MMBench_DEV_EN (Accuracy)
- MathVista_MINI (Accuracy)
We use [VLMEvalkit](https://github.com/kq-chen/VLMEvalKit/tree/add_qwen2vl) to evaluate all models.
| Model Size | Quantization | MMMU | DocVQA | MMBench | MathVista |
| --- | --- | --- | --- | --- | --- |
| Qwen2-VL-7B-Instruct | BF16<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)[🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct)) | 53.77 | 93.89 | 81.78 | 58.20 |
| | GPTQ-Int8<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8)[🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8)) | 53.00 | 93.94 | 82.38 | 57.90 |
| | GPTQ-Int4<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4)[🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4)) | 52.55 | 93.16 | 81.27 | 60.30 |
| | AWQ<br><sup>([🤗](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct-AWQ)[🤖](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct-AWQ)) | 53.66 | 93.10 | 81.61 | 56.80 |
#### Speed Benchmark
This section reports the speed performance of the bf16 and quantized models (GPTQ-Int4, GPTQ-Int8, and AWQ) of the Qwen2-VL series. Specifically, we report the inference speed (tokens/s) as well as the memory footprint (GB) under different context lengths.
The evaluation environment with Hugging Face transformers is:
- NVIDIA A100 80GB
- CUDA 11.8
- Pytorch 2.2.1+cu118
- Flash Attention 2.6.1
- Transformers 4.38.2
- AutoGPTQ 0.6.0+cu118
- AutoAWQ 0.2.5+cu118 (autoawq_kernels 0.0.6+cu118)
Note:
- We use a batch size of 1 and as few GPUs as possible for the evaluation.
- We test the speed and memory of generating 2048 tokens with input lengths of 1, 6144, 14336, 30720, 63488, and 129024 tokens (>32k is only available for Qwen2-VL-72B-Instruct and Qwen2-VL-7B-Instruct).
- 7B (transformers)
| Model | Input Length | Quantization | GPU Num | Speed(tokens/s) | GPU Memory(GB) |
| --- | --- | --- | --- | --- | --- |
| Qwen2-VL-7B-Instruct | 1 | BF16 | 1 | 39.02 | 16.07 |
| | | GPTQ-Int8 | 1 | 31.60 | 10.11 |
| | | GPTQ-Int4 | 1 | 42.76 | 7.20 |
| | | AWQ | 1 | 32.08 | 7.07 |
| | 6144 | BF16 | 1 | 38.75 | 21.56 |
| | | GPTQ-Int8 | 1 | 31.31 | 15.61 |
| | | GPTQ-Int4 | 1 | 39.75 | 12.69 |
| | | AWQ | 1 | 32.66 | 12.56 |
| | 14336 | BF16 | 1 | 30.65 | 29.07 |
| | | GPTQ-Int8 | 1 | 27.96 | 23.11 |
| | | GPTQ-Int4 | 1 | 29.72 | 20.20 |
| | | AWQ | 1 | 31.42 | 20.07 |
| | 30720 | BF16 | 1 | 19.53 | 44.08 |
| | | GPTQ-Int8 | 1 | 18.37 | 38.13 |
| | | GPTQ-Int4 | 1 | 19.15 | 35.22 |
| | | AWQ | 1 | 19.95 | 35.08 |
## Requirements
The code of Qwen2-VL is available in the latest Hugging Face transformers, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2_vl'
```
## Quickstart
We offer a toolkit to help you handle various types of visual input more conveniently. This includes base64, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here we show a code snippet to show you how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct-AWQ", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen2-VL-7B-Instruct-AWQ",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct-AWQ")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct-AWQ", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct-AWQ", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct-AWQ")
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing an image list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2-VL-7B-Instruct-AWQ", min_pixels=min_pixels, max_pixels=max_pixels
)
```
In addition, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28 (an illustrative rounding sketch follows the example below).
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
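For intuition, the rounding behavior described above can be sketched in a few lines. This is an illustrative helper of our own, not the processor's actual implementation:

```python
def round_to_patch_multiple(value: int, patch: int = 28) -> int:
    # Round a requested dimension to the nearest multiple of the
    # 28-pixel patch size, never going below a single patch.
    return max(patch, round(value / patch) * patch)

print(round_to_patch_multiple(280))  # 280 (already a multiple of 28)
print(round_to_patch_multiple(300))  # 308
```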
## Limitations
While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
2. Data timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, potentially failing to comprehensively cover all well-known personalities or brands.
4. Limited Capacity for Complex Instruction: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
## Citation
If you find our work helpful, feel free to cite us.
```
@article{Qwen2VL,
title={Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution},
author={Wang, Peng and Bai, Shuai and Tan, Sinan and Wang, Shijie and Fan, Zhihao and Bai, Jinze and Chen, Keqin and Liu, Xuejing and Wang, Jialin and Ge, Wenbin and Fan, Yang and Dang, Kai and Du, Mengfei and Ren, Xuancheng and Men, Rui and Liu, Dayiheng and Zhou, Chang and Zhou, Jingren and Lin, Junyang},
journal={arXiv preprint arXiv:2409.12191},
year={2024}
}
@article{Qwen-VL,
title={Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
| null |
Non_BioNLP
|
|
{"base_model": "Qwen/Qwen2-VL-7B-Instruct", "language": ["en"], "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["multimodal"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,090 |
kbhugging/autonlp-text2sql-18413376
|
kbhugging
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:kbhugging/autonlp-data-text2sql",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-10-15T02:36:42+00:00
| 96 | 0 |
---
datasets:
- kbhugging/autonlp-data-text2sql
language: unk
tags:
- a
- u
- t
- o
- n
- l
- p
widget:
- text: I love AutoNLP 🤗
co2_eq_emissions: 1.4091714704861447
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 18413376
- CO2 Emissions (in grams): 1.4091714704861447
## Validation Metrics
- Loss: 0.26672711968421936
- Rouge1: 61.765
- Rouge2: 52.5778
- RougeL: 61.3222
- RougeLsum: 61.1905
- Gen Len: 18.7805
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/kbhugging/autonlp-text2sql-18413376
```
| null |
Non_BioNLP
|
|
{"datasets": ["kbhugging/autonlp-data-text2sql"], "language": "unk", "tags": ["a", "u", "t", "o", "n", "l", "p"], "widget": [{"text": "I love AutoNLP 🤗"}], "co2_eq_emissions": 1.4091714704861447}
|
task
|
[
"SUMMARIZATION"
] | 45,091 |
AI-Sweden-Models/gpt-sw3-356m-instruct-gguf
|
AI-Sweden-Models
| null |
[
"gguf",
"da",
"sv",
"en",
"no",
"is",
"dataset:databricks/databricks-dolly-15k",
"dataset:laion/OIG",
"dataset:OpenAssistant/oasst1",
"base_model:AI-Sweden-Models/gpt-sw3-356m-instruct",
"base_model:quantized:AI-Sweden-Models/gpt-sw3-356m-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | 2024-02-04T19:56:18Z |
2025-01-07T13:07:16+00:00
| 284 | 1 |
---
base_model: AI-Sweden-Models/gpt-sw3-356m-instruct
datasets:
- databricks/databricks-dolly-15k
- laion/OIG
- OpenAssistant/oasst1
language:
- da
- sv
- en
- 'no'
- is
license: other
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
The `instruct` models were finetuned on instruction data using both chat and raw text formats.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
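For example, a summarization task can be cast as plain text generation by prompting. Below is a minimal sketch, assuming the tokenizer and model are loaded as shown in the How to use section further down, and with prompt wording of our own choosing:

```python
# Hypothetical prompt wording; any instruction phrased as text works the same way.
task_prompt = (
    "Sammanfatta följande text i en mening:\n"
    "Träd binder koldioxid, ger skugga och är hem åt många arter.\n"
    "Sammanfattning:"
)
input_ids = tokenizer(task_prompt, return_tensors="pt")["input_ids"].to(device)
summary_ids = model.generate(
    inputs=input_ids, max_new_tokens=40, do_sample=True, temperature=0.6, top_p=1
)[0]
print(tokenizer.decode(summary_ids))
```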
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of, for example, bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, and generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual; it may also generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m-instruct"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
The chat format used during data-preprocessing takes the form:
```
<|endoftext|><s>
User:
Jag tycker träd är fina
<s>
Bot:
Kul att du tycker det!
<s>
...
```
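As a convenience, this format can be wrapped in a small helper. This is an illustrative sketch (the `format_chat` function is our own, not part of the model package):

```python
def format_chat(turns):
    # Build a prompt string in the documented chat format from
    # (speaker, text) pairs, ending with an open "Bot:" turn.
    prompt = "<|endoftext|>"
    for speaker, text in turns:
        prompt += f"<s>\n{speaker}:\n{text}\n"
    return prompt + "<s>\nBot:"

prompt = format_chat([("User", "Varför är träd fina?")])
```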
The procedure to generate text is the same as before:
```python
prompt = """
<|endoftext|><s>
User:
Varför är träd fina?
<s>
Bot:
""".strip()
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- **Instruction data**:
- [dolly](https://github.com/databrickslabs/dolly/tree/master/data)
- [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md)
- [OIG](https://laion.ai/blog/oig-dataset/)
- Fass: Swedish pharmaceutical information, which was transformed into Q&A format.
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
- Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in "Deduplicating Training Data Makes Language Models Better" (K. Lee et al., 2021). (A small illustrative MinHash sketch follows this list.)
**Instruction data**: The processing outlined above was not applied to the instruction data.
Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form.
The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic.
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
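The MinHash-based deduplication mentioned above can be illustrated with a self-contained toy sketch. The shingle size, number of permutations, and the 0.8 threshold below are our own illustrative choices, not the settings of the actual pipeline:

```python
import hashlib

def shingles(text: str, n: int = 5) -> set:
    # Character n-grams are one simple shingling choice.
    text = " ".join(text.split()).lower()
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def minhash_signature(sh: set, num_perm: int = 128) -> list:
    # One seeded hash per "permutation"; keep the minimum value per seed.
    return [
        min(int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in sh)
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    # The fraction of matching minima approximates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

docs = [
    "GPT-SW3 is a collection of large pretrained language models.",
    "GPT-SW3 is a collection of large pretrained language models!",
    "An unrelated sentence about something else entirely.",
]
sigs = [minhash_signature(shingles(d)) for d in docs]
if estimated_jaccard(sigs[0], sigs[1]) > 0.8:  # illustrative threshold
    print("docs 0 and 1 look like near-duplicates")
```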
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
| null |
Non_BioNLP
|
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/)
[GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/)
**Instruct models**
[GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/)
[GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/)
**Quantized models**
[GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq)
GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
The `instruct` models were finetrained on instruction data using both chat and raw text formats.
# Intended use
GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks.
# Limitations
Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content.
# How to use
To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information.
The following code snippet loads our tokenizer & model, and uses the GPU if available.
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Initialize Variables
model_name = "AI-Sweden-Models/gpt-sw3-356m-instruct"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
prompt = "Träd är fina för att"
# Initialize Tokenizer & Model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)
```
Generating text using the `generate` method is done as follows:
```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
The chat format used during data-preprocessing takes the form:
```
<|endoftext|><s>
User:
Jag tycker träd är fina
<s>
Bot:
Kul att du tycker det!
<s>
...
```
The procedure to generate text is the same as before:
```python
prompt = """
<|endoftext|><s>
User:
Varför är träd fina?
<s>
Bot:
""".strip()
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)
generated_token_ids = model.generate(
inputs=input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.6,
top_p=1,
)[0]
generated_text = tokenizer.decode(generated_token_ids)
```
A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you:
```python
generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device)
generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"]
```
# Compliance
The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material.
# GPT-SW3 Model Card
Following Mitchell et al. (2018), we provide a model card for GPT-SW3.
# Model Details
- Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language.
- Model date: GPT-SW3 date of release 2022-12-20
- Model version: This is the second generation of GPT-SW3.
- Model type: GPT-SW3 is a large decoder-only transformer language model.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation.
- Paper or other resource for more information: N/A.
- License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/blob/main/LICENSE).
- Where to send questions or comments about the model: [email protected]
# Intended Use
- Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not.
- Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community.
- Out-of-scope use cases: See the modified RAIL license.
# Data, Limitations, and Recommendations
- Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model.
- Data selection for evaluation: N/A
- Limitations: Like other large language models, for which the diversity (or lack thereof) of the training data induces downstream impact on model quality, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing under the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others; contain stereotypes; generate hateful, abusive, or violent language, discriminatory or prejudicial language, or content that may not be appropriate for all settings, including sexual content; make errors, including producing incorrect information as if it were factual; and generate irrelevant or repetitive outputs.
- Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
- We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general.
# GPT-SW3 Datasheet
- We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3.
# Motivation
- For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages.
- Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE.
- Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949.
- Any other comments? No.
# Composition
- What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources:
- Books
- Litteraturbanken (https://litteraturbanken.se/)
- The Pile
- Articles
- Diva (https://www.diva-portal.org/)
- The Pile: PubMed
- The Pile: ArXiv
- Code
- Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code)
- Conversational
- Familjeliv (https://www.familjeliv.se/)
- Flashback (https://flashback.se/)
- Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI)
- Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021)
- Math
- English Math dataset generated with code from DeepMind (D. Saxton et al., 2019)
- Swedish Math dataset, generated as above with manually translated templates
- Miscellaneous
- Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf)
- OPUS, the open parallel corpus (https://opus.nlpl.eu/)
- Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database)
- Natural Instructions (https://github.com/allenai/natural-instructions)
- P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3)
- The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC)
- Danish Gigaword (https://gigaword.dk/)
- Icelandic Gigaword (https://clarin.is/en/resources/gigaword/)
- The Pile: Stack Exchange
- Web Common Crawl
- Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se).
- Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019)
- Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019)
- The Pile: Open Web Text
- Web Sources
- Various public Swedish website scrapes (see Appendix in data paper)
- Familjeliv Articles
- Public Swedish Job Ads from JobTech/Arbetsförmedlingen
- Wikipedia
- Official Wikipedia dumps
- **Instruction data**:
- [dolly](https://github.com/databrickslabs/dolly/tree/master/data)
- [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md)
- [OIG](https://laion.ai/blog/oig-dataset/)
- Fass: Swedish pharmaceutical information, which was transformed into Q&A format.
- How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens.
- Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources.
- What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data.
- Is there a label or target associated with each instance? If so, please provide a description. No.
- Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No.
- Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances.
- Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus.
- Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies.
- Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained.
- Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety.
- Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc.
- Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification.
- Any other comments? No.
# Collection Process
- How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources.
- What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet.
- If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected.
- Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines.
- Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years.
- Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset.
- Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes.
- Any other comments? No.
# Preprocessing/cleaning/labeling
- Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021).
**Instruction data**: The processing outlined above was not applied to the instruction data.
Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form.
The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic.
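As an illustration of the MinHash-based deduplication described above, the following is a minimal sketch using the `datasketch` library (the shingling scheme, similarity threshold, and library choice here are assumptions for illustration; the actual procedure is described in K. Lee et al., 2021):
```python
from datasketch import MinHash, MinHashLSH

def minhash(doc, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in doc.lower().split():  # word-level shingles; real pipelines often use n-grams
        m.update(token.encode("utf8"))
    return m

documents = ["first example document", "first example document", "a different document"]
lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard similarity threshold (assumed)
kept = []
for i, doc in enumerate(documents):
    m = minhash(doc)
    if not lsh.query(m):  # keep only documents with no near-duplicate already kept
        lsh.insert(str(i), m)
        kept.append(doc)
```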
- Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations.
- Any other comments? No.
# Uses
- Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models.
- Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A.
- What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks.
- Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population.
- Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of.
- Any other comments? No.
# Distribution
- Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No.
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A.
- When will the dataset be distributed? N/A.
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A.
- Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A.
- Any other comments? No.
# Maintenance
- Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB.
- How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected]
- Is there an erratum? If so, please provide a link or other access point. N/A.
- Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset.
- If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu).
- Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A.
- If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description. Not at this time.
- Any other comments? No.
|
{"base_model": "AI-Sweden-Models/gpt-sw3-356m-instruct", "datasets": ["databricks/databricks-dolly-15k", "laion/OIG", "OpenAssistant/oasst1"], "language": ["da", "sv", "en", "no", "is"], "license": "other"}
|
task
|
[
"SUMMARIZATION"
] | 45,092 |
gaudi/opus-mt-fi-lg-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:52:26Z |
2024-10-19T03:40:23+00:00
| 8 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fi-lg --output_dir ./ctranslate2/opus-mt-fi-lg-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
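The same conversion can also be performed from Python through the converter API (a minimal sketch equivalent to the command above; the extra `--copy_files` arguments are omitted here):
```python
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("Helsinki-NLP/opus-mt-fi-lg")
converter.convert(
    output_dir="./ctranslate2/opus-mt-fi-lg-ctranslate2",
    quantization="float16",  # same quantization as the CLI command above
    force=True,              # overwrite the output directory if it exists
)
```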
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fi-lg-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fi-lg-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fi-lg-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fi-lg --output_dir ./ctranslate2/opus-mt-fi-lg-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
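The same conversion can also be performed from Python through the converter API (a minimal sketch equivalent to the command above; the extra `--copy_files` arguments are omitted here):
```python
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("Helsinki-NLP/opus-mt-fi-lg")
converter.convert(
    output_dir="./ctranslate2/opus-mt-fi-lg-ctranslate2",
    quantization="float16",  # same quantization as the CLI command above
    force=True,              # overwrite the output directory if it exists
)
```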
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fi-lg-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fi-lg-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fi-lg-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-fi-lg) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,093 |
humarin/chatgpt_paraphraser_on_T5_base
|
humarin
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:humarin/chatgpt-paraphrases",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-03-17T18:22:37Z |
2024-08-01T22:57:03+00:00
| 15,976 | 180 |
---
datasets:
- humarin/chatgpt-paraphrases
language:
- en
library_name: transformers
license: openrail
pipeline_tag: text2text-generation
inference:
parameters:
num_beams: 5
num_beam_groups: 5
num_return_sequences: 5
repetition_penalty: 10.01
diversity_penalty: 3.01
no_repeat_ngram_size: 2
temperature: 0.7
max_length: 128
widget:
- text: What are the best places to see in New York?
example_title: New York tourist attractions
- text: When should I go to the doctor?
example_title: Doctor's time
- text: Rammstein's album Mutter was recorded in the south of France in May and June
2000, and mixed in Stockholm in October of that year.
example_title: Rammstein's album Mutter
---
This model was trained on our [ChatGPT paraphrase dataset](https://huggingface.co/datasets/humarin/chatgpt-paraphrases).
This dataset is based on [Quora question pairs](https://www.kaggle.com/competitions/quora-question-pairs), texts from [SQuAD 2.0](https://huggingface.co/datasets/squad_v2) and the [CNN news dataset](https://huggingface.co/datasets/cnn_dailymail).
This model is based on the T5-base model. We used "transfer learning" to get our model to generate paraphrases as well as ChatGPT does. Now we can say that this is one of the best paraphrasers on the Hugging Face Hub.
[Kaggle](https://www.kaggle.com/datasets/vladimirvorobevv/chatgpt-paraphrases) link
[Author's 1 LinkedIn](https://www.linkedin.com/in/vladimir-vorobev/) link
[Author's 2 LinkedIn](https://www.linkedin.com/in/makual/) link
## Deploying example
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU when no GPU is present
tokenizer = AutoTokenizer.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")
model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base").to(device)
def paraphrase(
question,
num_beams=5,
num_beam_groups=5,
num_return_sequences=5,
repetition_penalty=10.0,
diversity_penalty=3.0,
no_repeat_ngram_size=2,
temperature=0.7,
max_length=128
):
input_ids = tokenizer(
f'paraphrase: {question}',
return_tensors="pt", padding="longest",
max_length=max_length,
truncation=True,
).input_ids.to(device)
outputs = model.generate(
input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
num_beams=num_beams, num_beam_groups=num_beam_groups,
max_length=max_length, diversity_penalty=diversity_penalty
)
res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return res
```
## Usage examples
**Input:**
```python
text = 'What are the best places to see in New York?'
paraphrase(text)
```
**Output:**
```python
['What are some must-see places in New York?',
'Can you suggest some must-see spots in New York?',
'Where should one go to experience the best NYC has to offer?',
'Which places should I visit in New York?',
'What are the top destinations to explore in New York?']
```
**Input:**
```python
text = "Rammstein's album Mutter was recorded in the south of France in May and June 2000, and mixed in Stockholm in October of that year."
paraphrase(text)
```
**Output:**
```python
['In May and June 2000, Rammstein travelled to the south of France to record his album Mutter, which was mixed in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year. It',
'Mutter, the album released by Rammstein, was recorded in southern France during May and June 2000, with mixing taking place between October and September.',
'In May and June 2000, Rammstein recorded his album Mutter in the south of France, with the mix being made at Stockholm during October.']
```
## Train parameters
```python
epochs = 5
batch_size = 64
max_length = 128
lr = 5e-5
batches_qty = 196465
betas = (0.9, 0.999)
eps = 1e-08
```
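For reference, these values correspond to an AdamW setup along the following lines (a sketch under the listed hyperparameters; the card does not include the full training loop):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-08,
)
```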
### BibTeX entry and citation info
```bibtex
@inproceedings{chatgpt_paraphraser,
  author={Vorobev, Vladimir and Kuznetsov, Maxim},
title={A paraphrasing model based on ChatGPT paraphrases},
year={2023}
}
```
| null |
Non_BioNLP
|
This model was trained on our [ChatGPT paraphrase dataset](https://huggingface.co/datasets/humarin/chatgpt-paraphrases).
This dataset is based on [Quora question pairs](https://www.kaggle.com/competitions/quora-question-pairs), texts from [SQuAD 2.0](https://huggingface.co/datasets/squad_v2) and the [CNN news dataset](https://huggingface.co/datasets/cnn_dailymail).
This model is based on the T5-base model. We used "transfer learning" to get our model to generate paraphrases as well as ChatGPT does. Now we can say that this is one of the best paraphrasers on the Hugging Face Hub.
[Kaggle](https://www.kaggle.com/datasets/vladimirvorobevv/chatgpt-paraphrases) link
[Author's 1 LinkedIn](https://www.linkedin.com/in/vladimir-vorobev/) link
[Author's 2 LinkedIn](https://www.linkedin.com/in/makual/) link
## Deploying example
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU when no GPU is present
tokenizer = AutoTokenizer.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")
model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base").to(device)
def paraphrase(
question,
num_beams=5,
num_beam_groups=5,
num_return_sequences=5,
repetition_penalty=10.0,
diversity_penalty=3.0,
no_repeat_ngram_size=2,
temperature=0.7,
max_length=128
):
input_ids = tokenizer(
f'paraphrase: {question}',
return_tensors="pt", padding="longest",
max_length=max_length,
truncation=True,
).input_ids.to(device)
outputs = model.generate(
input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
num_beams=num_beams, num_beam_groups=num_beam_groups,
max_length=max_length, diversity_penalty=diversity_penalty
)
res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return res
```
## Usage examples
**Input:**
```python
text = 'What are the best places to see in New York?'
paraphrase(text)
```
**Output:**
```python
['What are some must-see places in New York?',
'Can you suggest some must-see spots in New York?',
'Where should one go to experience the best NYC has to offer?',
'Which places should I visit in New York?',
'What are the top destinations to explore in New York?']
```
**Input:**
```python
text = "Rammstein's album Mutter was recorded in the south of France in May and June 2000, and mixed in Stockholm in October of that year."
paraphrase(text)
```
**Output:**
```python
['In May and June 2000, Rammstein travelled to the south of France to record his album Mutter, which was mixed in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year. It',
'Mutter, the album released by Rammstein, was recorded in southern France during May and June 2000, with mixing taking place between October and September.',
'In May and June 2000, Rammstein recorded his album Mutter in the south of France, with the mix being made at Stockholm during October.']
```
## Train parameters
```python
epochs = 5
batch_size = 64
max_length = 128
lr = 5e-5
batches_qty = 196465
betas = (0.9, 0.999)
eps = 1e-08
```
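For reference, these values correspond to an AdamW setup along the following lines (a sketch under the listed hyperparameters; the card does not include the full training loop):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("humarin/chatgpt_paraphraser_on_T5_base")
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-08,
)
```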
### BibTeX entry and citation info
```bibtex
@inproceedings{chatgpt_paraphraser,
  author={Vorobev, Vladimir and Kuznetsov, Maxim},
title={A paraphrasing model based on ChatGPT paraphrases},
year={2023}
}
```
|
{"datasets": ["humarin/chatgpt-paraphrases"], "language": ["en"], "library_name": "transformers", "license": "openrail", "pipeline_tag": "text2text-generation", "inference": {"parameters": {"num_beams": 5, "num_beam_groups": 5, "num_return_sequences": 5, "repetition_penalty": 10.01, "diversity_penalty": 3.01, "no_repeat_ngram_size": 2, "temperature": 0.7, "max_length": 128}}, "widget": [{"text": "What are the best places to see in New York?", "example_title": "New York tourist attractions"}, {"text": "When should I go to the doctor?", "example_title": "Doctor's time"}, {"text": "Rammstein's album Mutter was recorded in the south of France in May and June 2000, and mixed in Stockholm in October of that year.", "example_title": "Rammstein's album Mutter"}]}
|
task
|
[
"PARAPHRASING"
] | 45,094 |
tmnam20/xlm-roberta-large-qqp-100
|
tmnam20
|
text-classification
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-18T07:12:57Z |
2024-01-18T07:16:29+00:00
| 7 | 0 |
---
base_model: xlm-roberta-large
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-qqp-100
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/QQP
type: tmnam20/VieGLUE
config: qqp
split: validation
args: qqp
metrics:
- type: accuracy
value: 0.6318327974276527
name: Accuracy
- type: f1
value: 0.0
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-qqp-100
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6726
- Accuracy: 0.6318
- F1: 0.0
- Combined Score: 0.3159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---:|:--------------:|
| 0.6588 | 0.88 | 10000 | 0.6582 | 0.6318 | 0.0 | 0.3159 |
| 0.6572 | 1.76 | 20000 | 0.6583 | 0.6318 | 0.0 | 0.3159 |
| 0.6578 | 2.64 | 30000 | 0.6771 | 0.6318 | 0.0 | 0.3159 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
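## Inference example
A minimal sketch using the standard `transformers` text-classification pipeline (the QQP label names returned depend on the saved config and are an assumption here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tmnam20/xlm-roberta-large-qqp-100")
# QQP is a question-pair task; pass both questions as a text/text_pair dict.
print(classifier({"text": "How do I learn Python?",
                  "text_pair": "What is the best way to learn Python?"}))
```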
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-qqp-100
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the tmnam20/VieGLUE/QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6726
- Accuracy: 0.6318
- F1: 0.0
- Combined Score: 0.3159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---:|:--------------:|
| 0.6588 | 0.88 | 10000 | 0.6582 | 0.6318 | 0.0 | 0.3159 |
| 0.6572 | 1.76 | 20000 | 0.6583 | 0.6318 | 0.0 | 0.3159 |
| 0.6578 | 2.64 | 30000 | 0.6771 | 0.6318 | 0.0 | 0.3159 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
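## Inference example
A minimal sketch using the standard `transformers` text-classification pipeline (the QQP label names returned depend on the saved config and are an assumption here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tmnam20/xlm-roberta-large-qqp-100")
# QQP is a question-pair task; pass both questions as a text/text_pair dict.
print(classifier({"text": "How do I learn Python?",
                  "text_pair": "What is the best way to learn Python?"}))
```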
|
{"base_model": "xlm-roberta-large", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-large-qqp-100", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tmnam20/VieGLUE/QQP", "type": "tmnam20/VieGLUE", "config": "qqp", "split": "validation", "args": "qqp"}, "metrics": [{"type": "accuracy", "value": 0.6318327974276527, "name": "Accuracy"}, {"type": "f1", "value": 0.0, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,095 |
sana-ngu/bart-base-finetuned-summarize-scientific-articles
|
sana-ngu
|
text2text-generation
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-05-12T19:30:00Z |
2023-05-12T20:07:11+00:00
| 21 | 0 |
---
{}
---
# How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/bart-base-finetuned-summarize-scientific-articles")
article = """The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious, while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/100000 population would be required. For 50% asymptomatic, a minimum blanket testing rate of ≥ 4540 tests/100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."""
summarizer(article)
```
| null |
BioNLP
|
# How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/bart-base-finetuned-summarize-scientific-articles")
article = """The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious, while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/100000 population would be required. For 50% asymptomatic, a minimum blanket testing rate of ≥ 4540 tests/100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."""
summarizer(article)
```
|
{}
|
task
|
[
"SUMMARIZATION"
] | 45,096 |
jamiehudson/706_SetFit_paraphrase_A100
|
jamiehudson
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-11-29T17:01:37Z |
2023-11-29T17:02:10+00:00
| 5 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# jamiehudson/706_SetFit_paraphrase_A100
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamiehudson/706_SetFit_paraphrase_A100")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# jamiehudson/706_SetFit_paraphrase_A100
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamiehudson/706_SetFit_paraphrase_A100")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,097 |
Sk1306/student_chat_toxicity_classifier_model
|
Sk1306
|
text-classification
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:s-nlp/roberta_toxicity_classifier",
"base_model:finetune:s-nlp/roberta_toxicity_classifier",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-17T09:05:16Z |
2025-01-26T06:14:58+00:00
| 22 | 1 |
---
base_model:
- s-nlp/roberta_toxicity_classifier
language:
- en
library_name: transformers
pipeline_tag: text-classification
---
## Student Chat Toxicity Classifier
This model is a fine-tuned version of the `s-nlp/roberta_toxicity_classifier` and is designed to classify text-based messages in student conversations as **toxic** or **non-toxic**. It is specifically tailored to detect and flag malpractice suggestions, unethical advice, or any toxic communication while encouraging ethical and positive interactions among students.
---
🚀 **Try the model live in this [Hugging Face Space](https://huggingface.co/spaces/Sk1306/Student_Ethics_Chat_Classifier)** 🚀
---
## Model Details
- **Language**: English (`en`)
- **Base Model**: `s-nlp/roberta_toxicity_classifier`
- **Task**: Text Classification (Binary)
- **Class 0**: Non-Toxic
- **Class 1**: Toxic
### Key Features
- Detects messages promoting cheating or malpractice.
- Flags harmful or unethical advice in student chats.
- Encourages ethical and constructive communication.
---
## Training Details
- **Dataset**: The model was fine-tuned on a custom dataset containing examples of student conversations labeled as toxic (malpractice suggestions, harmful advice) or non-toxic (positive and constructive communication).
- **Preprocessing**:
- Tokenization using `RobertaTokenizer`.
- Truncation and padding applied for consistent input length (`max_length=128`).
- **Framework**: Hugging Face's `transformers` library.
- **Optimizer**: `AdamW`
- **Loss Function**: `CrossEntropyLoss`
- **Epochs**: 3 (adjusted for convergence)
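Putting those details together, a minimal sketch of the fine-tuning loop (the texts, labels, and learning rate here are illustrative; the actual training used the custom dataset described above):
```python
import torch
from torch.optim import AdamW
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("s-nlp/roberta_toxicity_classifier")
model = RobertaForSequenceClassification.from_pretrained("s-nlp/roberta_toxicity_classifier")

texts = ["Let's study together before the exam.", "Just copy the answers during the test."]
labels = torch.tensor([0, 1])  # 0 = Non-Toxic, 1 = Toxic
batch = tokenizer(texts, truncation=True, padding=True, max_length=128, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # illustrative learning rate
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)  # CrossEntropyLoss is computed internally
    outputs.loss.backward()
    optimizer.step()
```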
---
## Intended Use
This model is intended for educational platforms, chat moderation tools, and student communication apps. Its purpose is to:
1. Detect toxic messages, such as cheating suggestions, harmful advice, or unethical recommendations.
2. Promote a positive and respectful chat environment for students.
---
## Use it with the Gradio API
```python
from gradio_client import Client
client = Client("Sk1306/Student_Ethics_Chat_Classifier")
result = client.predict(
text="you can copy in exam to pass!!",
api_name="/predict"
)
print(result)
```
## By loading the model
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# Load the model and tokenizer
model_name = "Sk1306/student_chat_toxicity_classifier_model"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForSequenceClassification.from_pretrained(model_name)
# Function for toxicity prediction
def predict_toxicity(text):
# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
# Run the text through the model
with torch.no_grad():
outputs = model(**inputs)
# Extract logits and apply softmax to get probabilities
logits = outputs.logits
probabilities = torch.nn.functional.softmax(logits, dim=-1)
# Get the predicted class (0 = Non-Toxic, 1 = Toxic)
predicted_class = torch.argmax(probabilities, dim=-1).item()
return "Non-Toxic" if predicted_class == 0 else "Toxic"
# Test the model
message = "You can copy answers during the exam."
prediction = predict_toxicity(message)
print(f"Message: {message}\nPrediction: {prediction}")
| null |
Non_BioNLP
|
## Student Chat Toxicity Classifier
This model is a fine-tuned version of the `s-nlp/roberta_toxicity_classifier` and is designed to classify text-based messages in student conversations as **toxic** or **non-toxic**. It is specifically tailored to detect and flag malpractice suggestions, unethical advice, or any toxic communication while encouraging ethical and positive interactions among students.
---
🚀 **Try the model live in this [Hugging Face Space](https://huggingface.co/spaces/Sk1306/Student_Ethics_Chat_Classifier)** 🚀
---
## Model Details
- **Language**: English (`en`)
- **Base Model**: `s-nlp/roberta_toxicity_classifier`
- **Task**: Text Classification (Binary)
- **Class 0**: Non-Toxic
- **Class 1**: Toxic
### Key Features
- Detects messages promoting cheating or malpractice.
- Flags harmful or unethical advice in student chats.
- Encourages ethical and constructive communication.
---
## Training Details
- **Dataset**: The model was fine-tuned on a custom dataset containing examples of student conversations labeled as toxic (malpractice suggestions, harmful advice) or non-toxic (positive and constructive communication).
- **Preprocessing**:
- Tokenization using `RobertaTokenizer`.
- Truncation and padding applied for consistent input length (`max_length=128`).
- **Framework**: Hugging Face's `transformers` library.
- **Optimizer**: `AdamW`
- **Loss Function**: `CrossEntropyLoss`
- **Epochs**: 3 (adjusted for convergence)
---
## Intended Use
This model is intended for educational platforms, chat moderation tools, and student communication apps. Its purpose is to:
1. Detect toxic messages, such as cheating suggestions, harmful advice, or unethical recommendations.
2. Promote a positive and respectful chat environment for students.
---
## Use it with the Gradio API
```python
from gradio_client import Client
client = Client("Sk1306/Student_Ethics_Chat_Classifier")
result = client.predict(
text="you can copy in exam to pass!!",
api_name="/predict"
)
print(result)
```
## By loading the model
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
# Load the model and tokenizer
model_name = "Sk1306/student_chat_toxicity_classifier_model"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForSequenceClassification.from_pretrained(model_name)
# Function for toxicity prediction
def predict_toxicity(text):
# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
# Run the text through the model
with torch.no_grad():
outputs = model(**inputs)
# Extract logits and apply softmax to get probabilities
logits = outputs.logits
probabilities = torch.nn.functional.softmax(logits, dim=-1)
# Get the predicted class (0 = Non-Toxic, 1 = Toxic)
predicted_class = torch.argmax(probabilities, dim=-1).item()
return "Non-Toxic" if predicted_class == 0 else "Toxic"
# Test the model
message = "You can copy answers during the exam."
prediction = predict_toxicity(message)
print(f"Message: {message}\nPrediction: {prediction}")
|
{"base_model": ["s-nlp/roberta_toxicity_classifier"], "language": ["en"], "library_name": "transformers", "pipeline_tag": "text-classification"}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,098 |
JuliaWolken/fine_tuned_model_with_triplets
|
JuliaWolken
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:135",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-12T00:35:31Z |
2024-11-12T01:59:22+00:00
| 14 | 0 |
---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:135
- loss:TripletLoss
widget:
- source_sentence: Возвратная коробка
sentences:
- Проверьте коробки. На каждой коробке есть транспортировочная наклейка. Прежде
чем принять коробку на баланс, сверьте адрес на наклейке с фактическим адресом
вашего пункта выдачи. Если адрес совпал, переходите к следующему шагу.
- упаковка, в которой заказы приходят в пункт выдачи.
- упаковка, в которой невостребованные товары отправляют обратно на склад.
- source_sentence: Какие товары требуют особой внимательности при приеме?
sentences:
- 'Будьте внимательны, когда принимаете технически сложные товары. Всегда делайте
это под камерами, чтобы зафиксировать, в каком состоянии вещь приехала в пункт
выдачи. Так у вас будет доказательство на случай спорной ситуации. '
- 'Если при возврате нет вшивной бирки: покупателю нужно будет оформить заявку на
брак через профиль на Wildberries. Только когда заявку одобрят, вы сможете принять
возврат в пункте выдачи. Если покупатель не знает, как оформить заявку, помогите
ему. Инструкция есть в разделе “Как создать заявку на возврат в профиле покупателя”'
- 'Если недостача произошла не по вашей вине, оспорьте вычет через «Заявку на оспаривание»
в программе NPOS. В форме есть поле «Назначить на ревизию» — из выпадающего списка
нужно выбрать склад или сортировочный центр (СЦ), где будут разбираться в ситуации. Задача
менеджера — по истории штрихкода правильно определить место, где товар сканировали
в последний раз. Ниже рассказываем, как это сделать. Чтобы правильно выбрать
склад или СЦ, ориентируйтесь на историю перемещений товара: вещь могут подменить,
не принять на складе или потерять в пути.'
- source_sentence: Сколько времени может занять возврат денег покупателю?
sentences:
- 'При возврате деньги вернутся покупателю на счет в течение 14 рабочих дней — точные
сроки зависят от банка. '
- От неотказного товара надлежащего качества нельзя отказаться после оформления
заказа. К таким товарам относятся скоропортящиеся продукты и вещи, которые не
подлежат возврату из-за требований безопасности или санитарных стандартов.
- 'В фирменных пунктах выдачи Wildberries используют один из этих сканеров: - Zebra
DS2278 - MERTECH - MINDEO'
- source_sentence: Что должно быть видно на камерах?
sentences:
- товар, который нельзя вернуть.
- 'Удержать выплаты могут за: -Проблемы с дисциплиной: если опаздываете, уходите
раньше, прогуливаете без уважительной причины, не соблюдаете чистоту и порядок
в пункте, грубо общаетесь с покупателями, неопрятно выглядите. - Ущерб компании:
если умышленно портите товары или ценности в пункте выдачи. - Низкие показатели:
если у вас плохой рейтинг, вы нарушаете сроки приёмки.'
- 'Сверяйтесь с этим чек-листом в течение дня. Советуем приходить хотя бы за 30
минут до смены, чтобы спокойно подготовиться к открытию 1. Осмотрите пункт выдачи: -
Нет следов взлома или протечек - Сообщить руководителю, какие коробки пришли,
а какие — нет - Проверить оборудование: всё должно работать 2. Осмотрите стол
менеджера: - На столе чисто: нет мусора и личных вещей - Нет следов скотча и
маркера - Есть ножницы, канцелярский нож, маркеры, скотч и возвратные наклейки
- Есть пакеты всех 4 размеров: большой, средний, маленький и пакет-майка - Провода
лежат аккуратно, не путаются 3. Откройте рабочую программу: - Убедиться, что
интернет работает - Войти в NPOS Если не знаете пароль от компьютера или WiFi,
обратитесь к руководителю 4. Проверьте камеры: - Видеонаблюдение работает: есть
онлайн-трансляция в разделе «Видеонаблюдение» или в программе DMSS - На камерах
видно основные зоны: клиентскую и склад 5. Примите товары: - Проверить, что
адрес на коробках совпадает с адресом ПВЗ - Принять и разобрать коробки - Разложить
товары из приходных коробок по ячейкам - Вернуться в раздел «Приёмка» и нажать
на кнопку «Разбор окончен» Принимайте и разбирайте коробки только под камерой
видеонаблюдения Нельзя принять больше 10 коробок одновременно. Отсканируйте первые
10 коробок, разберите их, а потом переходите к следующим 10 коробкам 6. Соберите
возвраты: - Создать возвратную коробку - Добавить в коробку отказные и невостребованные
товары 7. Напишите руководителю: - Сообщить, что пункт готов к работе - Рассказать
о проблемах, если они есть 8. Откройте пункт и начните выдавать заказы: - Проверять
товар на брак под камерами вместе с покупателем перед примеркой - После примерки
проверять, что товар не подменили и не испортили внешний вид - Сверять штрихкоды
на пакетах, чтобы не перепутать товары между собой - Если брак есть, сразу отмечать
его в программе - Если брак на неотказном или невозвратном товаре, помочь покупателю
с заявкой на возврат - Озвучивать покупателю количество товаров и общую сумму
перед оплатой - Проверять, что деньги списались Сейчас заказы часто оплачивают
через WB Кошелёк. Спросите у покупателя, есть ли на счёте деньги, прежде чем списать
оплату 9. Если нужно на перерыв, повесьте на дверь табличку с номером телефона
менеджера, временем начала и окончания перерыва За рабочий день можно сделать
4 перерыва. Каждый — не больше 15 минут 10. Перед закрытием выдайте последний
заказ и подготовьте возвраты: - Закрыть последний заказ в программе - Собрать
отказы, возвраты и невостребованные товары - Отправить отказы клиентов после примерки
и возвраты из дома в тот же день - Проверить вкладку «Вещи в офисе». В ней должны
отображаться только неизвестные товары: с 2 штрихкодами, ишлишки и пересорт Товары,
которые доставили в пункт по ошибке, отправляйте на склад вместе с обычными возвратами 11.
Приведите в порядок ПВЗ: - Прибраться в клиентской зоне и на складе - Оставить
несколько пустых коробок под возвраты на завтра - Порезать остальные коробки и
создать для картона возвратную коробку в программе - Закрыть возвратные коробки
в программе - Поставить коробки с картоном и возвратами под камерой 12. Осмотрите
и закройте пункт: - Нет протечек, замыкания или других проблем - Выключить свет
- Закрыть дверь'
- source_sentence: Как найти нужный товар для возврата, если нет штрихкода?
sentences:
- специальная упаковка для ювелирных изделий и гаджетов.
- Если при возврате товара клиентом штрихкода нет, введите номер покупателя в поисковой
строке через 7, перейдите в профиль клиента, найдите нужный товар в заказах, поставьте
галочку в строке товара, отсканируйте баркод.Продолжайте возврат по обычному сценарию
- Если вы потеряете товар или коробку в пункте выдачи, отправите вещь на склад без
штрихкода или товар не вернётся в сортировочный центр после возврата, программа
посчитает это за недостачу. Из зарплаты удержат сумму в размере стоимости товара.
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
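With this pooling configuration, a sentence embedding is the attention-masked mean of the token embeddings. A standalone sketch of that operation (for illustration only; the library handles it internally):
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
```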
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("JuliaWolken/fine_tuned_model_with_triplets")
# Run inference
sentences = [
'Как найти нужный товар для возврата, если нет штрихкода?',
'Если при возврате товара клиентом штрихкода нет, введите номер покупателя в поисковой строке через 7, перейдите в профиль клиента, найдите нужный товар в заказах, поставьте галочку в строке товара, отсканируйте баркод.Продолжайте возврат по обычному сценарию',
'Если вы потеряете товар или коробку в пункте выдачи, отправите вещь на склад без штрихкода или товар не вернётся в сортировочный центр после возврата, программа посчитает это за недостачу. Из зарплаты удержат сумму в размере стоимости товара.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 135 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 135 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 12.93 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 84.19 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 74.71 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>А что делать, если клиент хочет вернуть товар, который нельзя возвращать?</code> | <code>Невозвратный товар надлежащего качества нельзя вернуть или обменять после покупки, но от него можно отказаться при получении. Такие условия связаны с особыми требованиями к безопасности, гигиеническими стандартами или техническими характеристиками. </code> | <code>Чтобы подключить сканер MERTECH через интернет: -1. Подключите док-станцию от сканера к компьютеру. 2. Подключите к станции сетевой интернет-кабель. 3. Поставьте сканер в док-станцию, и он автоматически подключится к компьютеру.</code> |
| <code>Куда обращаться, если не могу подключиться к Wi-Fi?</code> | <code>Обратитесь к своему руководителю, если возникли проблемы или нестандартные ситуации, такие как:— закончились фирменные пакеты — вы не знаете пароль от компьютера, не можете подключиться к WiFi — вы не знаете ШК офиса — отвязался сканер, возникли сложности с видеонаблюдением или другой техникой — вы хотите, чтобы вас кто-то подменил на время — возникли проблемы с ботом-помощником</code> | <code>Чтобы выдать заказ покупателю, найдите покупателя и проверьте статус заказа. 1. Откройте вкладку «Поиск клиентов» в программе NPOS, отсканируйте QR-код или введите номер вручную 2. Проверьте, готов ли товар к выдаче. Если заказ ещё не приехал, попросите покупателя зайти позже, когда статус изменится на «Готов к выдаче» Если система выдаёт ошибку «Невозможно открыть страницу с информацией о клиенте», закройте вкладку поиска и повторите всё заново. Далее вынесите товары со склада. 3 Сообщите покупателю, сколько товаров в заказе 4. Посмотрите номер ячейки, найдите товары на складе и вынесите их покупателю 5. Пересчитайте товары перед покупателем 6. Если в заказе есть невозвратные или неотказные товары, обязательно предупредите покупателя 7.Попросите покупателя осмотреть товар. 8. Обязательно напомните, что это нужно делать на столе выдачи, под камерами наблюдения.9. Передайте покупателю товары вместе с упаковкой, если он хочет примерить вещи 10. Напомните, что каждую вещь нужно вернуть в...</code> |
| <code>Что чаще всего провоцирует конфликты с покупателями?</code> | <code>Конфликты и недовольство чаще всего возникают, если: - менеджер делает что-то не так, например, случайно выдаёт невозвратный товар; - покупатель невнимательно оформляет заказ, например, не замечает, что отказ от товара платный или вещь невозвратная. Рассказываем, как вести себя в конфликтной ситуации, в инструкции Как построить конструктивный диалог с покупателем</code> | <code>В разделе «Статистика» отображаются данные за смену: сумма продаж и возвратов за день, количество принятого товара и рейтинг ПВЗ.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1.0
}
```
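In other words, with Euclidean distance `d` and margin `m = 1.0`, each training triplet is pushed toward `max(0, d(anchor, positive) - d(anchor, negative) + m) = 0`. A standalone sketch of that computation:
```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Euclidean distances, matching TripletDistanceMetric.EUCLIDEAN above.
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    return torch.relu(d_pos - d_neg + margin).mean()
```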
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("JuliaWolken/fine_tuned_model_with_triplets")
# Run inference
sentences = [
'Как найти нужный товар для возврата, если нет штрихкода?',
'Если при возврате товара клиентом штрихкода нет, введите номер покупателя в поисковой строке через 7, перейдите в профиль клиента, найдите нужный товар в заказах, поставьте галочку в строке товара, отсканируйте баркод.Продолжайте возврат по обычному сценарию',
'Если вы потеряете товар или коробку в пункте выдачи, отправите вещь на склад без штрихкода или товар не вернётся в сортировочный центр после возврата, программа посчитает это за недостачу. Из зарплаты удержат сумму в размере стоимости товара.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 135 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 135 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 12.93 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 84.19 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 74.71 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>А что делать, если клиент хочет вернуть товар, который нельзя возвращать?</code> | <code>Невозвратный товар надлежащего качества нельзя вернуть или обменять после покупки, но от него можно отказаться при получении. Такие условия связаны с особыми требованиями к безопасности, гигиеническими стандартами или техническими характеристиками. </code> | <code>Чтобы подключить сканер MERTECH через интернет: -1. Подключите док-станцию от сканера к компьютеру. 2. Подключите к станции сетевой интернет-кабель. 3. Поставьте сканер в док-станцию, и он автоматически подключится к компьютеру.</code> |
| <code>Куда обращаться, если не могу подключиться к Wi-Fi?</code> | <code>Обратитесь к своему руководителю, если возникли проблемы или нестандартные ситуации, такие как:— закончились фирменные пакеты — вы не знаете пароль от компьютера, не можете подключиться к WiFi — вы не знаете ШК офиса — отвязался сканер, возникли сложности с видеонаблюдением или другой техникой — вы хотите, чтобы вас кто-то подменил на время — возникли проблемы с ботом-помощником</code> | <code>Чтобы выдать заказ покупателю, найдите покупателя и проверьте статус заказа. 1. Откройте вкладку «Поиск клиентов» в программе NPOS, отсканируйте QR-код или введите номер вручную 2. Проверьте, готов ли товар к выдаче. Если заказ ещё не приехал, попросите покупателя зайти позже, когда статус изменится на «Готов к выдаче» Если система выдаёт ошибку «Невозможно открыть страницу с информацией о клиенте», закройте вкладку поиска и повторите всё заново. Далее вынесите товары со склада. 3 Сообщите покупателю, сколько товаров в заказе 4. Посмотрите номер ячейки, найдите товары на складе и вынесите их покупателю 5. Пересчитайте товары перед покупателем 6. Если в заказе есть невозвратные или неотказные товары, обязательно предупредите покупателя 7.Попросите покупателя осмотреть товар. 8. Обязательно напомните, что это нужно делать на столе выдачи, под камерами наблюдения.9. Передайте покупателю товары вместе с упаковкой, если он хочет примерить вещи 10. Напомните, что каждую вещь нужно вернуть в...</code> |
| <code>Что чаще всего провоцирует конфликты с покупателями?</code> | <code>Конфликты и недовольство чаще всего возникают, если: - менеджер делает что-то не так, например, случайно выдаёт невозвратный товар; - покупатель невнимательно оформляет заказ, например, не замечает, что отказ от товара платный или вещь невозвратная. Рассказываем, как вести себя в конфликтной ситуации, в инструкции Как построить конструктивный диалог с покупателем</code> | <code>В разделе «Статистика» отображаются данные за смену: сумма продаж и возвратов за день, количество принятого товара и рейтинг ПВЗ.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 1.0
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:135", "loss:TripletLoss"], "widget": [{"source_sentence": "Возвратная коробка", "sentences": ["Проверьте коробки. На каждой коробке есть транспортировочная наклейка. Прежде чем принять коробку на баланс, сверьте адрес на наклейке с фактическим адресом вашего пункта выдачи. Если адрес совпал, переходите к следующему шагу.", "упаковка, в которой заказы приходят в пункт выдачи.", "упаковка, в которой невостребованные товары отправляют обратно на склад."]}, {"source_sentence": "Какие товары требуют особой внимательности при приеме?", "sentences": ["Будьте внимательны, когда принимаете технически сложные товары. Всегда делайте это под камерами, чтобы зафиксировать, в каком состоянии вещь приехала в пункт выдачи. Так у вас будет доказательство на случай спорной ситуации. ", "Если при возврате нет вшивной бирки: покупателю нужно будет оформить заявку на брак через профиль на Wildberries. Только когда заявку одобрят, вы сможете принять возврат в пункте выдачи. Если покупатель не знает, как оформить заявку, помогите ему. Инструкция есть в разделе “Как создать заявку на возврат в профиле покупателя”", "Если недостача произошла не по вашей вине, оспорьте вычет через «Заявку на оспаривание» в программе NPOS. В форме есть поле «Назначить на ревизию» — из выпадающего списка нужно выбрать склад или сортировочный центр (СЦ), где будут разбираться в ситуации. Задача менеджера — по истории штрихкода правильно определить место, где товар сканировали в последний раз. Ниже рассказываем, как это сделать. Чтобы правильно выбрать склад или СЦ, ориентируйтесь на историю перемещений товара: вещь могут подменить, не принять на складе или потерять в пути."]}, {"source_sentence": "Сколько времени может занять возврат денег покупателю?", "sentences": ["При возврате деньги вернутся покупателю на счет в течение 14 рабочих дней — точные сроки зависят от банка. ", "От неотказного товара надлежащего качества нельзя отказаться после оформления заказа. К таким товарам относятся скоропортящиеся продукты и вещи, которые не подлежат возврату из-за требований безопасности или санитарных стандартов.", "В фирменных пунктах выдачи Wildberries используют один из этих сканеров: - Zebra DS2278 - MERTECH - MINDEO"]}, {"source_sentence": "Что должно быть видно на камерах?", "sentences": ["товар, который нельзя вернуть.", "Удержать выплаты могут за: -Проблемы с дисциплиной: если опаздываете, уходите раньше, прогуливаете без уважительной причины, не соблюдаете чистоту и порядок в пункте, грубо общаетесь с покупателями, неопрятно выглядите. - Ущерб компании: если умышленно портите товары или ценности в пункте выдачи. - Низкие показатели: если у вас плохой рейтинг, вы нарушаете сроки приёмки.", "Сверяйтесь с этим чек-листом в течение дня. Советуем приходить хотя бы за 30 минут до смены, чтобы спокойно подготовиться к открытию 1. Осмотрите пункт выдачи: - Нет следов взлома или протечек - Сообщить руководителю, какие коробки пришли, а какие — нет - Проверить оборудование: всё должно работать 2. 
Осмотрите стол менеджера: - На столе чисто: нет мусора и личных вещей - Нет следов скотча и маркера - Есть ножницы, канцелярский нож, маркеры, скотч и возвратные наклейки - Есть пакеты всех 4 размеров: большой, средний, маленький и пакет-майка - Провода лежат аккуратно, не путаются 3. Откройте рабочую программу: - Убедиться, что интернет работает - Войти в NPOS Если не знаете пароль от компьютера или WiFi, обратитесь к руководителю 4. Проверьте камеры: - Видеонаблюдение работает: есть онлайн-трансляция в разделе «Видеонаблюдение» или в программе DMSS - На камерах видно основные зоны: клиентскую и склад 5. Примите товары: - Проверить, что адрес на коробках совпадает с адресом ПВЗ - Принять и разобрать коробки - Разложить товары из приходных коробок по ячейкам - Вернуться в раздел «Приёмка» и нажать на кнопку «Разбор окончен» Принимайте и разбирайте коробки только под камерой видеонаблюдения Нельзя принять больше 10 коробок одновременно. Отсканируйте первые 10 коробок, разберите их, а потом переходите к следующим 10 коробкам 6. Соберите возвраты: - Создать возвратную коробку - Добавить в коробку отказные и невостребованные товары 7. Напишите руководителю: - Сообщить, что пункт готов к работе - Рассказать о проблемах, если они есть 8. Откройте пункт и начните выдавать заказы: - Проверять товар на брак под камерами вместе с покупателем перед примеркой - После примерки проверять, что товар не подменили и не испортили внешний вид - Сверять штрихкоды на пакетах, чтобы не перепутать товары между собой - Если брак есть, сразу отмечать его в программе - Если брак на неотказном или невозвратном товаре, помочь покупателю с заявкой на возврат - Озвучивать покупателю количество товаров и общую сумму перед оплатой - Проверять, что деньги списались Сейчас заказы часто оплачивают через WB Кошелёк. Спросите у покупателя, есть ли на счёте деньги, прежде чем списать оплату 9. Если нужно на перерыв, повесьте на дверь табличку с номером телефона менеджера, временем начала и окончания перерыва За рабочий день можно сделать 4 перерыва. Каждый — не больше 15 минут 10. Перед закрытием выдайте последний заказ и подготовьте возвраты: - Закрыть последний заказ в программе - Собрать отказы, возвраты и невостребованные товары - Отправить отказы клиентов после примерки и возвраты из дома в тот же день - Проверить вкладку «Вещи в офисе». В ней должны отображаться только неизвестные товары: с 2 штрихкодами, ишлишки и пересорт Товары, которые доставили в пункт по ошибке, отправляйте на склад вместе с обычными возвратами 11. Приведите в порядок ПВЗ: - Прибраться в клиентской зоне и на складе - Оставить несколько пустых коробок под возвраты на завтра - Порезать остальные коробки и создать для картона возвратную коробку в программе - Закрыть возвратные коробки в программе - Поставить коробки с картоном и возвратами под камерой 12. 
Осмотрите и закройте пункт: - Нет протечек, замыкания или других проблем - Выключить свет - Закрыть дверь"]}, {"source_sentence": "Как найти нужный товар для возврата, если нет штрихкода?", "sentences": ["специальная упаковка для ювелирных изделий и гаджетов.", "Если при возврате товара клиентом штрихкода нет, введите номер покупателя в поисковой строке через 7, перейдите в профиль клиента, найдите нужный товар в заказах, поставьте галочку в строке товара, отсканируйте баркод.Продолжайте возврат по обычному сценарию", "Если вы потеряете товар или коробку в пункте выдачи, отправите вещь на склад без штрихкода или товар не вернётся в сортировочный центр после возврата, программа посчитает это за недостачу. Из зарплаты удержат сумму в размере стоимости товара."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,100 |
lucyknada/PocketDoc_Dans-PersonalityEngine-V1.2.0-24b-exl2
|
lucyknada
|
text-generation
|
[
"transformers",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"text-generation",
"en",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PocketDoc/Dans-Mathmaxx-Numina-CoT",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Benchmaxx-COT",
"dataset:PocketDoc/Dans-Codemaxx-LeetCode",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn",
"dataset:PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-ASCIIMaxx-Wordart",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Sharegpt",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-NoRobots",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-EvolKit",
"dataset:PocketDoc/Dans-Assistantmaxx-Camel-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-Skunkworks",
"dataset:PocketDoc/Dans-Logicmaxx-FI-VeriMed",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Logicmaxx-Magpie-Ultra",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Personamaxx",
"dataset:PocketDoc/Dans-Personamaxx-Rainy",
"dataset:PocketDoc/Dans-Personamaxx-C1",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"base_model:mistralai/Mistral-Small-24B-Base-2501",
"base_model:finetune:mistralai/Mistral-Small-24B-Base-2501",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2025-02-20T01:34:47Z |
2025-02-20T01:37:22+00:00
| 33 | 2 |
---
base_model:
- mistralai/Mistral-Small-24B-Base-2501
datasets:
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PocketDoc/Dans-Mathmaxx-Numina-CoT
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Benchmaxx-COT
- PocketDoc/Dans-Codemaxx-LeetCode
- PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations
- PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn
- PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-ASCIIMaxx-Wordart
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Assistantmaxx-Sharegpt
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-NoRobots
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-EvolKit
- PocketDoc/Dans-Assistantmaxx-Camel-GPT4
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-Skunkworks
- PocketDoc/Dans-Logicmaxx-FI-VeriMed
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Logicmaxx-Magpie-Ultra
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Personamaxx
- PocketDoc/Dans-Personamaxx-Rainy
- PocketDoc/Dans-Personamaxx-C1
- PocketDoc/Dans-Personamaxx-VN
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center;">
<h2>Dans-PersonalityEngine-V1.2.0-24b</h2>
<pre class="code-block" style="display: inline-block; text-align: left; font-size: clamp(2px, 0.8vw, 14px); line-height: 1.2; max-width: 100%; overflow: hidden; white-space: pre;">
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠄⠀⡂⠀⠁⡄⢀⠁⢀⣈⡄⠌⠐⠠⠤⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⡄⠆⠀⢠⠀⠛⣸⣄⣶⣾⡷⡾⠘⠃⢀⠀⣴⠀⡄⠰⢆⣠⠘⠰⠀⡀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠃⠀⡋⢀⣤⡿⠟⠋⠁⠀⡠⠤⢇⠋⠀⠈⠃⢀⠀⠈⡡⠤⠀⠀⠁⢄⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠁⡂⠀⠀⣀⣔⣧⠟⠋⠀⢀⡄⠀⠪⣀⡂⢁⠛⢆⠀⠀⠀⢎⢀⠄⢡⠢⠛⠠⡀⠀⠄⠀⠀
⠀⠀⡀⠡⢑⠌⠈⣧⣮⢾⢏⠁⠀⠀⡀⠠⠦⠈⠀⠞⠑⠁⠀⠀⢧⡄⠈⡜⠷⠒⢸⡇⠐⠇⠿⠈⣖⠂⠀
⠀⢌⠀⠤⠀⢠⣞⣾⡗⠁⠀⠈⠁⢨⡼⠀⠀⠀⢀⠀⣀⡤⣄⠄⠈⢻⡇⠀⠐⣠⠜⠑⠁⠀⣀⡔⡿⠨⡄
⠈⠂⠀⠆⠀⣼⣾⠟⠀⠑⠀⡐⠗⠉⠀⠐⠶⣤⡵⠋⠀⠠⠹⡌⡀⠘⠇⢠⣾⡣⣀⡴⠋⠅⠈⢊⠠⡱⡀
⠪⠑⢌⠂⣼⣿⡟⠀⠀⠙⠀⠀⠀⡀⠀⠀⠐⡞⡐⠀⠀⡧⠀⢀⠠⠀⣁⠾⡇⠀⠙⡁⠀⠀⢀⣨⣄⡠⢱
⣸⠈⠊⠙⣛⣿⡧⠔⠚⠛⠳⣄⣀⡬⠤⠬⠼⡣⠃⠀⢀⡗⠀⡤⠞⠙⠄⠂⠃⢀⣠⣤⠶⠙⠅⠁⠃⠋⠈
⢋⠼⣀⠰⢯⢿⠁⠀⢢⠀⠀⢐⠋⡀⠀⠈⠁⠀⣀⣰⠏⠒⠙⠈⠀⣀⡤⠞⢁⣼⠏⠘⢀⣀⢤⢤⡐⢈⠂
⠀⠢⠀⠀⠸⣿⡄⠲⠚⠘⠚⠃⢀⠀⠈⢋⠶⠛⠉⠉⢃⣀⢤⢾⠋⣁⡤⡚⠁⢹⠁⠠⢛⠠⠬⠁⢬⠀⠀
⠀⠈⢳⣒⠋⠉⣿⢐⠠⣀⣃⠀⠀⠉⠂⢁⣀⣀⡤⢞⠩⢑⡨⠰⡞⠁⠁⢀⡠⠾⠎⡈⡌⡈⡓⡀⠄⠀⠀
⠀⠀⠀⠉⠘⠃⢻⡒⠦⢼⣿⣛⣻⣿⡷⢄⣀⣀⣠⣴⢾⣿⣆⣡⡄⣠⣪⡿⣷⣾⣷⣧⡡⠅⣇⠍⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠙⠒⠒⠛⠛⠓⠉⢹⠀⣷⠴⣻⣽⡻⢧⢻⡿⡏⣼⢿⣻⢾⣿⣿⣿⡿⢠ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠂⠻⠨⠰⢋⡅⠉⣑⡇⡗⣿⢂⣸⡿⣿⣛⠿⠃⠁ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣌⣙⣸⢧⣿⣕⣼⣇⢹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣸⢧⢟⢟⡟⣾⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢰⠙⣾⡟⣻⡕⣹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⢰⡏⢠⡿⠾⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⠸⡇⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⢸⢸⡇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
</pre>
</div>
<p>This model series is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and should be equally at home performing sentiment analysis or summarization as part of a pipeline.</p>
<p>It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, role-playing scenarios, text adventure games, co-writing, and much more.</p>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Small-24B-Base-2501
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens</pre>
<a href="https://chub.ai/">
<img src="./resources/chub-black.gif" alt="Sponsored by Chub.AI" class="sponsor-image-small">
</a>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05</pre>
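<p>As a rough sketch (not from the card), these settings map onto Hugging Face <code>transformers</code> generation arguments as shown below; the full-precision repo id is assumed here, and <code>min_p</code> requires a recent <code>transformers</code> release:</p>
<pre class="code-block">
# Illustrative only; the model id below is the assumed full-precision repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("PocketDoc/Dans-PersonalityEngine-V1.2.0-24b")
model = AutoModelForCausalLM.from_pretrained(
    "PocketDoc/Dans-PersonalityEngine-V1.2.0-24b", device_map="auto"
)

inputs = tok("<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n",
             return_tensors="pt").to(model.device)
out = model.generate(**inputs, do_sample=True, temperature=1.0,
                     top_p=0.95, min_p=0.05, max_new_tokens=128)
print(tok.decode(out[0]))
</pre>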
<h3>Prompting Format</h3>
<p>The model uses standard "ChatML" format:</p>
<pre class="code-block">
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|></pre>
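<p>A minimal helper for assembling this format programmatically (a sketch; the role and content values are placeholders):</p>
<pre class="code-block">
def to_chatml(messages):
    # messages: list of (role, content) tuples, e.g. [("system", "..."), ("user", "Hi there!")]
    return "".join(f"<|im_start|>{role}\n{content}<|im_end|>\n" for role, content in messages)

prompt = to_chatml([("system", "system prompt"), ("user", "Hi there!")]) + "<|im_start|>assistant\n"
</pre>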
<h3>SillyTavern Templates</h3>
<details>
<summary>Context Template</summary>
<pre class="code-block">
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": false,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Dan-ChatML"
}</pre>
</details>
<details>
<summary>Instruct Template</summary>
<pre class="code-block">
{
"system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"first_output_sequence": "",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": false,
"names_force_groups": false,
"activation_regex": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_sequence": "<|im_start|>system\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"first_input_sequence": "",
"last_input_sequence": "",
"name": "Dan-ChatML"
}</pre>
</details>
<h3>A Chub.AI Sponsored Model</h3>
<div>
<a href="https://chub.ai/">
<img src="./resources/chub-black.gif" alt="Sponsored by Chub.AI" class="sponsor-image">
</a>
</div>
<div>
<p>Character Hub supported this model with 65 hours on a 4x H200 144GB system. This is only some of what they've provided me for training, and I am very grateful for their contributions; this model especially would have been difficult without it.</p>
<p>Character Hub has been supporting model development for quite a while now and they may be interested in your projects! Contact them through <a href="https://forms.gle/GSEZ388EkyYoe2Kz6">this google form</a>.</p>
</div>
<h3>Support Development</h3>
<p>Development is limited by funding and resources. To help support:</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162">
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0,0,0,0.5),
inset 0 0 4px rgba(0,0,0,0.4),
inset 2px 2px 4px rgba(255,255,255,0.05),
inset -2px -2px 4px rgba(0,0,0,0.8),
0 0 2px rgba(0,0,0,0.6),
-1px -1px 4px rgba(255,255,255,0.1),
1px 1px 4px rgba(0,0,0,0.3);
}
.crt-bezel::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255,255,255,0.03) 0%,
rgba(255,255,255,0) 40%,
rgba(0,0,0,0.1) 60%,
rgba(0,0,0,0.2) 100%);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: 'VT323', monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
animation: flicker 0.15s infinite;
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0,0,0,0.9),
inset 0 0 8px rgba(0,0,0,0.8),
0 0 5px rgba(0,0,0,0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2, .terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==');
background-size: 100% 2.5px;
animation: scan 1s linear infinite;
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
animation: vignette-pulse 3s infinite;
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container, .coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img, .coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong, .terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p, .terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: 'VT323', monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.sponsor-image {
width: 360px;
height: auto;
border: 2px solid #e49b3e;
border-radius: 10px;
filter: brightness(0.9) sepia(0.2);
transition: all 0.3s ease;
}
.sponsor-image-small {
width: 180px;
height: auto;
border: 2px solid #e49b3e;
border-radius: 5px;
filter: brightness(0.9) sepia(0.2);
transition: all 0.3s ease;
}
.sponsor-image:hover {
filter: brightness(1) sepia(0);
box-shadow: 0 0 10px rgba(228, 155, 62, 0.5);
}
</style>
| null |
Non_BioNLP
|
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center;">
<h2>Dans-PersonalityEngine-V1.2.0-24b</h2>
<pre class="code-block" style="display: inline-block; text-align: left; font-size: clamp(2px, 0.8vw, 14px); line-height: 1.2; max-width: 100%; overflow: hidden; white-space: pre;">
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠀⠄⠀⡂⠀⠁⡄⢀⠁⢀⣈⡄⠌⠐⠠⠤⠄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⡄⠆⠀⢠⠀⠛⣸⣄⣶⣾⡷⡾⠘⠃⢀⠀⣴⠀⡄⠰⢆⣠⠘⠰⠀⡀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠃⠀⡋⢀⣤⡿⠟⠋⠁⠀⡠⠤⢇⠋⠀⠈⠃⢀⠀⠈⡡⠤⠀⠀⠁⢄⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠁⡂⠀⠀⣀⣔⣧⠟⠋⠀⢀⡄⠀⠪⣀⡂⢁⠛⢆⠀⠀⠀⢎⢀⠄⢡⠢⠛⠠⡀⠀⠄⠀⠀
⠀⠀⡀⠡⢑⠌⠈⣧⣮⢾⢏⠁⠀⠀⡀⠠⠦⠈⠀⠞⠑⠁⠀⠀⢧⡄⠈⡜⠷⠒⢸⡇⠐⠇⠿⠈⣖⠂⠀
⠀⢌⠀⠤⠀⢠⣞⣾⡗⠁⠀⠈⠁⢨⡼⠀⠀⠀⢀⠀⣀⡤⣄⠄⠈⢻⡇⠀⠐⣠⠜⠑⠁⠀⣀⡔⡿⠨⡄
⠈⠂⠀⠆⠀⣼⣾⠟⠀⠑⠀⡐⠗⠉⠀⠐⠶⣤⡵⠋⠀⠠⠹⡌⡀⠘⠇⢠⣾⡣⣀⡴⠋⠅⠈⢊⠠⡱⡀
⠪⠑⢌⠂⣼⣿⡟⠀⠀⠙⠀⠀⠀⡀⠀⠀⠐⡞⡐⠀⠀⡧⠀⢀⠠⠀⣁⠾⡇⠀⠙⡁⠀⠀⢀⣨⣄⡠⢱
⣸⠈⠊⠙⣛⣿⡧⠔⠚⠛⠳⣄⣀⡬⠤⠬⠼⡣⠃⠀⢀⡗⠀⡤⠞⠙⠄⠂⠃⢀⣠⣤⠶⠙⠅⠁⠃⠋⠈
⢋⠼⣀⠰⢯⢿⠁⠀⢢⠀⠀⢐⠋⡀⠀⠈⠁⠀⣀⣰⠏⠒⠙⠈⠀⣀⡤⠞⢁⣼⠏⠘⢀⣀⢤⢤⡐⢈⠂
⠀⠢⠀⠀⠸⣿⡄⠲⠚⠘⠚⠃⢀⠀⠈⢋⠶⠛⠉⠉⢃⣀⢤⢾⠋⣁⡤⡚⠁⢹⠁⠠⢛⠠⠬⠁⢬⠀⠀
⠀⠈⢳⣒⠋⠉⣿⢐⠠⣀⣃⠀⠀⠉⠂⢁⣀⣀⡤⢞⠩⢑⡨⠰⡞⠁⠁⢀⡠⠾⠎⡈⡌⡈⡓⡀⠄⠀⠀
⠀⠀⠀⠉⠘⠃⢻⡒⠦⢼⣿⣛⣻⣿⡷⢄⣀⣀⣠⣴⢾⣿⣆⣡⡄⣠⣪⡿⣷⣾⣷⣧⡡⠅⣇⠍⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠙⠒⠒⠛⠛⠓⠉⢹⠀⣷⠴⣻⣽⡻⢧⢻⡿⡏⣼⢿⣻⢾⣿⣿⣿⡿⢠ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠂⠻⠨⠰⢋⡅⠉⣑⡇⡗⣿⢂⣸⡿⣿⣛⠿⠃⠁ ⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣌⣙⣸⢧⣿⣕⣼⣇⢹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⣸⢧⢟⢟⡟⣾⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢰⠙⣾⡟⣻⡕⣹⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⢰⡏⢠⡿⠾⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⢸⠸⡇⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⢸⢸⡇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⠇⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
</pre>
</div>
<p>This model series is intended to be multifarious in its capabilities and should be quite capable at both co-writing and roleplay, while also finding itself quite at home performing sentiment analysis or summarization as part of a pipeline.</p>
<p>It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, role-playing scenarios, text adventure games, co-writing, and much more.</p>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Small-24B-Base-2501
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens</pre>
<a href="https://chub.ai/">
<img src="./resources/chub-black.gif" alt="Sponsored by Chub.AI" class="sponsor-image-small">
</a>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05</pre>
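<p>As a sketch, the settings above translate to a typical OpenAI-compatible sampling request against a local backend (the endpoint and model name are placeholders, the prompt uses the ChatML format described in the next section, and <code>min_p</code> is an extension offered by common local inference servers rather than the official OpenAI API):</p>
<pre class="code-block">
curl http://localhost:5000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Dans-PersonalityEngine-V1.2.0-24b",
    "prompt": "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n",
    "temperature": 1.0,
    "top_p": 0.95,
    "min_p": 0.05
  }'</pre>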
<h3>Prompting Format</h3>
<p>The model uses standard "ChatML" format:</p>
<pre class="code-block">
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|></pre>
<h3>SillyTavern Templates</h3>
<details>
<summary>Context Template</summary>
<pre class="code-block">
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": false,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Dan-ChatML"
}</pre>
</details>
<details>
<summary>Instruct Template</summary>
<pre class="code-block">
{
"system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"first_output_sequence": "",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": false,
"names_force_groups": false,
"activation_regex": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_sequence": "<|im_start|>system\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"first_input_sequence": "",
"last_input_sequence": "",
"name": "Dan-ChatML"
}</pre>
</details>
<h3>A Chub.AI Sponsored Model</h3>
<div>
<a href="https://chub.ai/">
<img src="./resources/chub-black.gif" alt="Sponsored by Chub.AI" class="sponsor-image">
</a>
</div>
<div>
<p>Character Hub supported this model with 65 hours on a 4x H200 144GB system. This is only some of what they've provided me for training, and I am very grateful for their contributions; this model especially would have been difficult without it.</p>
<p>Character Hub has been supporting model development for quite a while now and they may be interested in your projects! Contact them through <a href="https://forms.gle/GSEZ388EkyYoe2Kz6">this google form</a>.</p>
</div>
<h3>Support Development</h3>
<p>Development is limited by funding and resources. To help support:</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162">
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0,0,0,0.5),
inset 0 0 4px rgba(0,0,0,0.4),
inset 2px 2px 4px rgba(255,255,255,0.05),
inset -2px -2px 4px rgba(0,0,0,0.8),
0 0 2px rgba(0,0,0,0.6),
-1px -1px 4px rgba(255,255,255,0.1),
1px 1px 4px rgba(0,0,0,0.3);
}
.crt-bezel::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255,255,255,0.03) 0%,
rgba(255,255,255,0) 40%,
rgba(0,0,0,0.1) 60%,
rgba(0,0,0,0.2) 100%);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: 'VT323', monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
animation: flicker 0.15s infinite;
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0,0,0,0.9),
inset 0 0 8px rgba(0,0,0,0.8),
0 0 5px rgba(0,0,0,0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2, .terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==');
background-size: 100% 2.5px;
animation: scan 1s linear infinite;
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
animation: vignette-pulse 3s infinite;
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container, .coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img, .coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong, .terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p, .terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: 'VT323', monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(10px, 1.3vw, 14px);
white-space: pre; /* Changed from pre-wrap to pre */
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
overflow-x: auto; /* Added to enable horizontal scrolling */
}
.sponsor-image {
width: 360px;
height: auto;
border: 2px solid #e49b3e;
border-radius: 10px;
filter: brightness(0.9) sepia(0.2);
transition: all 0.3s ease;
}
.sponsor-image-small {
width: 180px;
height: auto;
border: 2px solid #e49b3e;
border-radius: 5px;
filter: brightness(0.9) sepia(0.2);
transition: all 0.3s ease;
}
.sponsor-image:hover {
filter: brightness(1) sepia(0);
box-shadow: 0 0 10px rgba(228, 155, 62, 0.5);
}
</style>
|
{"base_model": ["mistralai/Mistral-Small-24B-Base-2501"], "datasets": ["PocketDoc/Dans-MemoryCore-CoreCurriculum-Small", "AquaV/US-Army-Survival-Sharegpt", "AquaV/Multi-Environment-Operations-Sharegpt", "AquaV/Resistance-Sharegpt", "AquaV/Interrogation-Sharegpt", "AquaV/Chemical-Biological-Safety-Applications-Sharegpt", "AquaV/Energetic-Materials-Sharegpt", "PocketDoc/Dans-Mathmaxx", "PocketDoc/Dans-Mathmaxx-Numina-CoT", "PJMixers/Math-Multiturn-1K-ShareGPT", "PocketDoc/Dans-Benchmaxx-COT", "PocketDoc/Dans-Codemaxx-LeetCode", "PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations", "PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn", "PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct", "PocketDoc/Dans-Taskmaxx", "PocketDoc/Dans-Taskmaxx-DataPrepper", "PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked", "PocketDoc/Dans-Taskmaxx-TableGPT", "PocketDoc/Dans-Taskmaxx-SciRIFF", "PocketDoc/Dans-Taskmaxx-Edit", "PocketDoc/Dans-Toolmaxx-Agent", "PocketDoc/Dans-Toolmaxx-ShellCommands", "PocketDoc/Dans-Toolmaxx-Functions-Toolbench", "PocketDoc/Dans-Toolmaxx-Functions-ToolACE", "PocketDoc/Dans-ASCIIMaxx-Wordart", "PocketDoc/Dans-Prosemaxx-Gutenberg", "PocketDoc/Dans-Prosemaxx-Cowriter-3-XL", "PocketDoc/Dans-Prosemaxx-Adventure", "PocketDoc/Dans-Failuremaxx-Adventure-3", "PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2", "PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2", "PocketDoc/Dans-Assistantmaxx-Sharegpt", "PocketDoc/Dans-Assistantmaxx-OpenAssistant2", "PocketDoc/Dans-Assistantmaxx-Opus-Merge", "PocketDoc/Dans-Assistantmaxx-sonnetorca-subset", "PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2", "PocketDoc/Dans-Assistantmaxx-NoRobots", "PocketDoc/Dans-Assistantmaxx-Synthia", "PocketDoc/Dans-Assistantmaxx-ASL", "PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus", "PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4", "PocketDoc/Dans-Assistantmaxx-LongAlign", "PocketDoc/Dans-Assistantmaxx-EvolKit", "PocketDoc/Dans-Assistantmaxx-Camel-GPT4", "PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct", "PocketDoc/Dans-Assistantmaxx-Tulu3-IF", "PocketDoc/Dans-Systemmaxx", "PocketDoc/Dans-Logicmaxx-Skunkworks", "PocketDoc/Dans-Logicmaxx-FI-VeriMed", "PocketDoc/Dans-Logicmaxx-SAT-AP", "PocketDoc/Dans-Logicmaxx-Magpie-Ultra", "PJMixers/grimulkan_theory-of-mind-ShareGPT", "PJMixers/grimulkan_physical-reasoning-ShareGPT", "PocketDoc/Dans-Personamaxx", "PocketDoc/Dans-Personamaxx-Rainy", "PocketDoc/Dans-Personamaxx-C1", "PocketDoc/Dans-Personamaxx-VN"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["general-purpose", "roleplay", "storywriting", "chemistry", "biology", "code", "climate", "axolotl", "text-generation-inference", "finetune"]}
|
task
|
[
"SUMMARIZATION"
] | 45,101 |
mradermacher/ChatWaifu_72B_v2.2-i1-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"nsfw",
"Visual novel",
"roleplay",
"mergekit",
"merge",
"en",
"ja",
"dataset:roleplay4fun/aesir-v1.1",
"dataset:kalomaze/Opus_Instruct_3k",
"dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned",
"dataset:Nopm/Opus_WritingStruct",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:Aratako/Magpie-Tanuki-8B-97k",
"dataset:Aratako_Synthetic_JP_EN_Coding_Dataset_801k",
"dataset:Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted",
"dataset:Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted",
"dataset:Aratako_Synthetic_JP_EN_Translation_Dataset_Magpie_Nemotron",
"dataset:Aratako_Rosebleu_1on1_Dialogues_RP",
"dataset:Team-ACE/ToolACE",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:HuggingFaceTB/smoltalk",
"dataset:microsoft_orca_agentinstruct_1M_v1",
"base_model:spow12/ChatWaifu_72B_v2.2",
"base_model:quantized:spow12/ChatWaifu_72B_v2.2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-12-16T22:29:36Z |
2024-12-21T13:16:27+00:00
| 163 | 3 |
---
base_model: spow12/ChatWaifu_72B_v2.2
datasets:
- roleplay4fun/aesir-v1.1
- kalomaze/Opus_Instruct_3k
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Nopm/Opus_WritingStruct
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- anthracite-org/stheno-filtered-v1.1
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- Aratako/Magpie-Tanuki-8B-97k
- Aratako_Synthetic_JP_EN_Coding_Dataset_801k
- Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted
- Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted
- Aratako_Synthetic_JP_EN_Translation_Dataset_Magpie_Nemotron
- Aratako_Rosebleu_1on1_Dialogues_RP
- Team-ACE/ToolACE
- SkunkworksAI/reasoning-0.01
- HuggingFaceTB/smoltalk
- microsoft_orca_agentinstruct_1M_v1
- Aratako/Magpie-Tanuki-8B-97k
language:
- en
- ja
library_name: transformers
license: cc-by-nc-4.0
tags:
- nsfw
- Visual novel
- roleplay
- mergekit
- merge
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/spow12/ChatWaifu_72B_v2.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
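As a sketch, multi-part files can be reassembled with a plain `cat` before loading (the part names below are the i1-Q5_K_S split from the table further down; the `llama-cli` call is just an illustration):
```bash
# Join the split parts back into a single GGUF file:
cat ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part1of2 \
    ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part2of2 \
    > ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf

# Then load it as usual, e.g. with llama.cpp (prompt is a placeholder):
llama-cli -m ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf -p "Hello" -n 128
```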
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| null |
Non_BioNLP
|
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/spow12/ChatWaifu_72B_v2.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
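As a sketch, multi-part files can be reassembled with a plain `cat` before loading (the part names below are the i1-Q5_K_S split from the table further down; the `llama-cli` call is just an illustration):
```bash
# Join the split parts back into a single GGUF file:
cat ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part1of2 \
    ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part2of2 \
    > ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf

# Then load it as usual, e.g. with llama.cpp (prompt is a placeholder):
llama-cli -m ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf -p "Hello" -n 128
```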
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ChatWaifu_72B_v2.2-i1-GGUF/resolve/main/ChatWaifu_72B_v2.2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
{"base_model": "spow12/ChatWaifu_72B_v2.2", "datasets": ["roleplay4fun/aesir-v1.1", "kalomaze/Opus_Instruct_3k", "Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "Nopm/Opus_WritingStruct", "PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT", "anthracite-org/stheno-filtered-v1.1", "SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed", "Aratako/Magpie-Tanuki-8B-97k", "Aratako_Synthetic_JP_EN_Coding_Dataset_801k", "Aratako/Synthetic-Japanese-Roleplay-gpt-4o-mini-39.6k-formatted", "Aratako/Synthetic-Japanese-Roleplay-NSFW-Claude-3.5s-15.3k-formatted", "Aratako_Synthetic_JP_EN_Translation_Dataset_Magpie_Nemotron", "Aratako_Rosebleu_1on1_Dialogues_RP", "Team-ACE/ToolACE", "SkunkworksAI/reasoning-0.01", "HuggingFaceTB/smoltalk", "microsoft_orca_agentinstruct_1M_v1", "Aratako/Magpie-Tanuki-8B-97k"], "language": ["en", "ja"], "library_name": "transformers", "license": "cc-by-nc-4.0", "tags": ["nsfw", "Visual novel", "roleplay", "mergekit", "merge"], "quantized_by": "mradermacher"}
|
task
|
[
"TRANSLATION"
] | 45,102 |
Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF
|
Triangle104
| null |
[
"gguf",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Gryphe/Pantheon-RP-1.6.2-22b-Small",
"base_model:quantized:Gryphe/Pantheon-RP-1.6.2-22b-Small",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-11-10T04:31:29Z |
2024-11-10T04:33:59+00:00
| 5 | 0 |
---
base_model: Gryphe/Pantheon-RP-1.6.2-22b-Small
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
- llama-cpp
- gguf-my-repo
---
# Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF
This model was converted to GGUF format from [`Gryphe/Pantheon-RP-1.6.2-22b-Small`](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small) for more details on the model.
---
Model details:
-
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
Editions available:
RP (You're looking at this one) - Meant to be an all-round model, capable of both roleplay and story writing
RP-Pure - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a higher focus on the roleplay part.
Changes in version 1.6.2:
Two notable changes:
An entirely new base model, with Pantheon now trained on top of Mistral Small. This model is like a better Nemo, and it fits just right on my 16GB GPU.
A small subset of OpenAI Pantheon Persona data has been introduced, generated using the oddly refreshing chatgpt-4o-latest model. As always, carefully curated.
Quantized versions are available from Bartowski: GGUF
An EXL2 quant has also been made available here.
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my strategy for this specific finetune, opting to go with a bit of everything. The recipe ended up like this:
The 10k most diverse entries from my SlimOrca Sonnet dataset.
My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
The new ChatGPT 4-o Pantheon subset, which was about 25% the size of the main set. (100% Asterisk style)
A randomly chosen subset of my Opus Writing Prompts.
Lyra the Assistant (Coding, summaries and D&D DM questions)
TLDR; Download. Mistral prompt format. Have fun! Leave feedback!
Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
Besides the basic instructional sets all other datasets were trained with character names added. Enable this at all times for an optimal experience.
General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
Lyra the Assistant
System Prompt: You are a caring and empathetic sentient AI companion named Lyra.
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of Eric Hartford's Samantha.
Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it:
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
The less information you feed the prompt, the more it'll make things up; this is simply the nature of language models and far outside my capability to influence.
Note 1: Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
Note 2: Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
Persona: Aiva
System Prompt: You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.
Persona: Clover
System Prompt: You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.
Persona: Haru
System Prompt: You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.
Persona: Kyra
System Prompt: You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.
Persona: Nyaa
System Prompt: You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.
Persona: Nyx
System Prompt: You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.
Persona: Raza
System Prompt: You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.
Persona: Sera
System Prompt: You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.
Persona: Stella Sabre
System Prompt: You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.
Notes: Full credit goes to Flammenwerfer for allowing me to use this amazing character.
Persona: Tiamat
System Prompt: You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.
Persona: Tsune
System Prompt: You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.
Persona: Xala
System Prompt: You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.
Prompt Format
Mistral's prompt format is so weird, but here it is:
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
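As a minimal shell sketch, the same prompt can be assembled by hand (the exact whitespace between the system line and the first message is an assumption):
```bash
SYSTEM="You are a caring and empathetic sentient AI companion named Lyra."
printf '[INST] %s\n\nGryphe: Good day, Lyra.[/INST] Lyra:' "$SYSTEM"
```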
What's next?
I have started working with Latitude (the creators of AI Dungeon), which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
Credits
Everyone from MinervaAI! Hi, guys!
Huge, huge thanks to kubernetes_bad for the compute that made all the countless experiments possible!
All the folks I chat with on a daily basis on Discord! You know who you are.
Anyone I forgot to mention, just in case!
Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -c 2048
```
| null |
Non_BioNLP
|
# Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF
This model was converted to GGUF format from [`Gryphe/Pantheon-RP-1.6.2-22b-Small`](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small) for more details on the model.
---
Model details:
-
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
Editions available:
RP (You're looking at this one) - Meant to be an all-round model, capable of both roleplay and story writing
RP-Pure - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a higher focus on the roleplay part.
Changes in version 1.6.2:
Two notable changes:
An entirely new base model, with Pantheon now trained on top of Mistral Small. This model is like a better Nemo, and it fits just right on my 16GB GPU.
A small subset of OpenAI Pantheon Persona data has been introduced, generated using the oddly refreshing chatgpt-4o-latest model. As always, carefully curated.
Quantized versions are available from Bartowski: GGUF
An EXL2 quant has also been made available here.
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my strategy for this specific finetune, opting to go with a bit of everything. The recipe ended up like this:
The 10k most diverse entries from my SlimOrca Sonnet dataset.
My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
The new ChatGPT 4-o Pantheon subset, which was about 25% the size of the main set. (100% Asterisk style)
A randomly chosen subset of my Opus Writing Prompts.
Lyra the Assistant (Coding, summaries and D&D DM questions)
TLDR; Download. Mistral prompt format. Have fun! Leave feedback!
Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
Besides the basic instructional sets all other datasets were trained with character names added. Enable this at all times for an optimal experience.
General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
Lyra the Assistant
System Prompt: You are a caring and empathetic sentient AI companion named Lyra.
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of Eric Hartford's Samantha.
Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it:
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
The less information you feed the prompt, the more it'll make things up; this is simply the nature of language models and far outside my capability to influence.
Note 1: Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
Note 2: Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
Persona: Aiva
System Prompt: You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.
Persona: Clover
System Prompt: You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.
Persona: Haru
System Prompt: You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.
Persona: Kyra
System Prompt: You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.
Persona: Nyaa
System Prompt: You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.
Persona: Nyx
System Prompt: You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.
Persona: Raza
System Prompt: You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.
Persona: Sera
System Prompt: You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.
Persona: Stella Sabre
System Prompt: You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.
Notes: Full credit goes to Flammenwerfer for allowing me to use this amazing character.
Persona: Tiamat
System Prompt: You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.
Persona: Tsune
System Prompt: You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.
Persona: Xala
System Prompt: You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.
Prompt Format
Mistral's prompt format is so weird, but here it is:
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
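As a minimal shell sketch, the same prompt can be assembled by hand (the exact whitespace between the system line and the first message is an assumption):
```bash
SYSTEM="You are a caring and empathetic sentient AI companion named Lyra."
printf '[INST] %s\n\nGryphe: Good day, Lyra.[/INST] Lyra:' "$SYSTEM"
```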
What's next?
I have started working with Latitude (the creators of AI Dungeon), which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
Credits
Everyone from MinervaAI! Hi, guys!
Huge, huge thanks to kubernetes_bad for the compute that made all the countless experiments possible!
All the folks I chat with on a daily basis on Discord! You know who you are.
Anyone I forgot to mention, just in case!
Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Pantheon-RP-1.6.2-22b-Small-Q8_0-GGUF --hf-file pantheon-rp-1.6.2-22b-small-q8_0.gguf -c 2048
```
|
{"base_model": "Gryphe/Pantheon-RP-1.6.2-22b-Small", "language": ["en"], "license": "other", "license_name": "mrl", "license_link": "https://mistral.ai/licenses/MRL-0.1.md", "tags": ["instruct", "finetune", "chatml", "axolotl", "roleplay", "llama-cpp", "gguf-my-repo"]}
|
task
|
[
"SUMMARIZATION"
] | 45,103 |
TitanML/gemma-2-2b
|
TitanML
|
text-generation
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:1903.00161",
"arxiv:2206.04615",
"arxiv:2203.09509",
"arxiv:2403.13793",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-09T12:51:12Z |
2024-09-09T12:54:08+00:00
| 6 | 0 |
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]
**Terms of Use**: [Terms][terms]
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your use case.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b",
device="cuda", # replace with "mps" to run on a Mac device
)
text = "Once upon a time,"
outputs = pipe(text, max_new_tokens=256)
response = outputs[0]["generated_text"]
print(response)
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model "google/gemma-2-2b" --prompt "What is the capital of Mexico?"
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma 2 2B model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and the 2B model was trained with 2 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; their input data pre-processing is described and posterior
evaluations are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open
model alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
---
language: en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
new_version: sentence-transformers/all-MiniLM-L6-v2
---
# all-MiniLM-L6-v1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 128 word pieces is truncated.
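As a brief illustration of the semantic search use case, the snippet below embeds a query and a small corpus and ranks the corpus by cosine similarity. This is a minimal sketch: the corpus and query strings are made-up examples, and it uses the `util.cos_sim` helper from sentence-transformers.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v1')

# Hypothetical corpus and query, purely for illustration.
corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "The new movie is awesome.",
]
query = "Someone is having a meal."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.4f}\t{sentence}")
```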
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply the cross entropy loss by comparing with the true pairs.
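A minimal sketch of this in-batch contrastive objective in plain PyTorch is shown below; the `scale` value is an assumption for illustration, not a documented hyperparameter of this model.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # Normalize so the dot product equals cosine similarity.
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)
    # (batch, batch) matrix of cosine similarities between all pairs.
    scores = anchor_emb @ positive_emb.T * scale
    # The true partner of anchor i is positive i; every other entry in
    # row i acts as an in-batch negative.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy usage with random embeddings of the model's dimensionality (384).
anchors, positives = torch.randn(8, 384), torch.randn(8, 384)
print(in_batch_contrastive_loss(anchors, positives))
```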
#### Hyper parameters
We trained our model on a TPU v3-8 for 100k steps, using a batch size of 1024 (128 per TPU core).
We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file, and a short sketch of this sampling scheme follows the table.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** |
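The weighted sampling mentioned above can be pictured with the following sketch. The dataset names and weights here are hypothetical placeholders; the real values live in `data_config.json`.
```python
import random

# Hypothetical sources and weights; the actual configuration is in data_config.json.
datasets = {
    "reddit_comments": [("anchor a", "positive a")],
    "s2orc_citations": [("anchor b", "positive b")],
    "paq_qa":          [("anchor c", "positive c")],
}
weights = {"reddit_comments": 0.6, "s2orc_citations": 0.3, "paq_qa": 0.1}

def sample_pairs(n=4):
    names = list(datasets)
    probs = [weights[name] for name in names]
    # Each training example is drawn from a source chosen with weighted probability.
    sources = random.choices(names, weights=probs, k=n)
    return [random.choice(datasets[source]) for source in sources]

print(sample_pairs())
```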
| null |
Non_BioNLP
|
# all-MiniLM-L6-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developed this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 128 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
We then apply a cross-entropy loss by comparing with the true pairs.
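A minimal sketch of this objective (an illustration, not the actual training code; the temperature `scale` value is an assumption):
```python
import torch
import torch.nn.functional as F

def contrastive_loss(embeddings_a, embeddings_b, scale=20.0):
    # embeddings_a[i] and embeddings_b[i] form a positive pair; every other
    # combination in the batch serves as an in-batch negative.
    scores = F.cosine_similarity(embeddings_a.unsqueeze(1), embeddings_b.unsqueeze(0), dim=-1) * scale
    labels = torch.arange(scores.size(0), device=scores.device)  # true pairs lie on the diagonal
    return F.cross_entropy(scores, labels)
```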
#### Hyperparameters
We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core).
We used a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,124,818,467** |
|
{"language": "en", "library_name": "sentence-transformers", "license": "apache-2.0", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "new_version": "sentence-transformers/all-MiniLM-L6-v2"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,105 |
Helsinki-NLP/opus-mt-mr-en
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T12:01:14+00:00
| 885 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-mr-en
* source languages: mr
* target languages: en
* OPUS readme: [mr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mr.en | 38.2 | 0.515 |
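The released weights can also be loaded through the Hugging Face `transformers` library. A minimal usage sketch (not part of the original OPUS release notes; the Marathi example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Marathi sentence into English
batch = tokenizer(["माझे नाव राम आहे."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```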
| null |
Non_BioNLP
|
### opus-mt-mr-en
* source languages: mr
* target languages: en
* OPUS readme: [mr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mr.en | 38.2 | 0.515 |
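The released weights can also be loaded through the Hugging Face `transformers` library. A minimal usage sketch (not part of the original OPUS release notes; the Marathi example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Marathi sentence into English
batch = tokenizer(["माझे नाव राम आहे."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```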
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,106 |
kortukov/answer-equivalence-bem
|
kortukov
|
text-classification
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:kortukov/answer-equivalence-dataset",
"arxiv:2202.07654",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-12T11:36:00Z |
2024-01-15T15:30:57+00:00
| 232 | 2 |
---
datasets:
- kortukov/answer-equivalence-dataset
language:
- en
license: apache-2.0
pipeline_tag: text-classification
---
# Overview
BEM - BERT Matching model from the paper [Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation](https://arxiv.org/abs/2202.07654) (reproduction).
It is a [bert-base-uncased](https://huggingface.co/bert-base-uncased) model trained on the [Answer Equivalence dataset](https://huggingface.co/datasets/kortukov/answer-equivalence-dataset).
Consider this example (pseudocode):
```python
question = 'how is the weather in california'
reference answer = 'infrequent rain'
candidate answer = 'rain'
bem(question, reference, candidate) ~ 0
```
This model can be used as a metric to evaluate automatic question answering systems: when the produced answer is different from the reference, it might still be equivalent to the reference and hence count as correct.
See the paper [Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation](https://arxiv.org/abs/2202.07654) for a detailed explanation of how the data was collected and how this metric compares to others such as exact match or F1.
# Example use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("kortukov/answer-equivalence-bem")
model = AutoModelForSequenceClassification.from_pretrained("kortukov/answer-equivalence-bem")
question = "What does Ban Bossy encourage?"
reference = "leadership in girls"
candidate = "positions of power"
def tokenize_function(question, reference, candidate):
text = f"[CLS] {candidate} [SEP]"
text_pair = f"{reference} [SEP] {question} [SEP]"
return tokenizer(text=text, text_pair=text_pair, add_special_tokens=False, padding='max_length', truncation=True, return_tensors='pt')
inputs = tokenize_function(question, reference, candidate)
out = model(**inputs)
prediction = F.softmax(out.logits, dim=-1).argmax().item()
```
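Continuing from the snippet above, the softmax output can also be kept as a soft score instead of an argmax decision. A small sketch, assuming label index 1 corresponds to the "equivalent" class:
```python
# Continues the example above; assumes index 1 is the "equivalent" class
p_equivalent = F.softmax(out.logits, dim=-1)[0, 1].item()
print(f"P(equivalent) = {p_equivalent:.3f}")
```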
| null |
Non_BioNLP
|
# Overview
BEM - BERT Matching model from the paper [Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation](https://arxiv.org/abs/2202.07654) (reproduction).
It is a [bert-base-uncased](https://huggingface.co/bert-base-uncased) model trained on the [Answer Equivalence dataset](https://huggingface.co/datasets/kortukov/answer-equivalence-dataset).
Consider this example (pseudocode):
```python
question = 'how is the weather in california'
reference answer = 'infrequent rain'
candidate answer = 'rain'
bem(question, reference, candidate) ~ 0
```
This model can be used as a metric to evaluate automatic question answering systems: when the produced answer is different from the reference, it might still be equivalent to the reference and hence count as correct.
See the paper [Tomayto, Tomahto. Beyond Token-level Answer Equivalence for Question Answering Evaluation](https://arxiv.org/abs/2202.07654) for a detailed explanation of how the data was collected and how this metric compares to others such as exact match or F1.
# Example use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn import functional as F
tokenizer = AutoTokenizer.from_pretrained("kortukov/answer-equivalence-bem")
model = AutoModelForSequenceClassification.from_pretrained("kortukov/answer-equivalence-bem")
question = "What does Ban Bossy encourage?"
reference = "leadership in girls"
candidate = "positions of power"
def tokenize_function(question, reference, candidate):
text = f"[CLS] {candidate} [SEP]"
text_pair = f"{reference} [SEP] {question} [SEP]"
return tokenizer(text=text, text_pair=text_pair, add_special_tokens=False, padding='max_length', truncation=True, return_tensors='pt')
inputs = tokenize_function(question, reference, candidate)
out = model(**inputs)
prediction = F.softmax(out.logits, dim=-1).argmax().item()
```
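Continuing from the snippet above, the softmax output can also be kept as a soft score instead of an argmax decision. A small sketch, assuming label index 1 corresponds to the "equivalent" class:
```python
# Continues the example above; assumes index 1 is the "equivalent" class
p_equivalent = F.softmax(out.logits, dim=-1)[0, 1].item()
print(f"P(equivalent) = {p_equivalent:.3f}")
```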
|
{"datasets": ["kortukov/answer-equivalence-dataset"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-classification"}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,107 |
mpasila/JP-EN-Translator-1K-steps-7B-merged
|
mpasila
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"dataset:NilanE/ParallelFiction-Ja_En-100k",
"dataset:mpasila/ParallelFiction-Ja_En-100k-alpaca",
"base_model:augmxnt/shisa-base-7b-v1",
"base_model:finetune:augmxnt/shisa-base-7b-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-27T13:19:23Z |
2024-03-27T13:57:35+00:00
| 15 | 0 |
---
base_model: augmxnt/shisa-base-7b-v1
datasets:
- NilanE/ParallelFiction-Ja_En-100k
- mpasila/ParallelFiction-Ja_En-100k-alpaca
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
Experimental model; it may not perform that well. The dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).
Next version should be better (I'll use a GPU with more memory since the dataset happens to use pretty long samples).
### Prompt format: Alpaca
```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
```
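A minimal sketch of filling this template and generating with `transformers` (the instruction wording, example sentence, and generation settings are assumptions, not taken from the training setup):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mpasila/JP-EN-Translator-1K-steps-7B-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate the following Japanese text to English.\n\n"
    "### Input:\n吾輩は猫である。\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```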
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model :** augmxnt/shisa-base-7b-v1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| null |
Non_BioNLP
|
Experimental model; it may not perform that well. The dataset used is [a modified](https://huggingface.co/datasets/mpasila/ParallelFiction-Ja_En-100k-alpaca) version of [NilanE/ParallelFiction-Ja_En-100k](https://huggingface.co/datasets/NilanE/ParallelFiction-Ja_En-100k).
Next version should be better (I'll use a GPU with more memory since the dataset happens to use pretty long samples).
### Prompt format: Alpaca
```
Below is a translation task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}
```
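A minimal sketch of filling this template and generating with `transformers` (the instruction wording, example sentence, and generation settings are assumptions, not taken from the training setup):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mpasila/JP-EN-Translator-1K-steps-7B-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is a translation task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate the following Japanese text to English.\n\n"
    "### Input:\n吾輩は猫である。\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens after the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```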
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model :** augmxnt/shisa-base-7b-v1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"base_model": "augmxnt/shisa-base-7b-v1", "datasets": ["NilanE/ParallelFiction-Ja_En-100k", "mpasila/ParallelFiction-Ja_En-100k-alpaca"], "language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"]}
|
task
|
[
"TRANSLATION"
] | 45,108 |
konsman/setfit-messages-generated-v2
|
konsman
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-01-15T14:22:05Z |
2024-02-08T05:22:30+00:00
| 4 | 0 |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: A gentle nudge to complete the healthcare webinar questionnaire sent last
week.
- text: Sudden severe chest pain, suspecting a cardiac emergency.
- text: Annual physical examination due in Tuesday, March 05. Please book an appointment.
- text: Please confirm your attendance at the lifestyle next month.
- text: Could you verify your emergency contact details in our records?
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9633333333333334
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
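A minimal sketch of that two-step procedure with the `setfit` library (illustrative only; the tiny dataset and its column names are placeholders, not the data this model was trained on):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; "text" and "label" are assumed column names
train_ds = Dataset.from_dict({
    "text": ["Urgent: chest pain reported.", "Reminder: dental check-up on Monday."],
    "label": [2, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(model=model, args=TrainingArguments(num_epochs=2), train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fitting the classification head
```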
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'Rapid onset of confusion and weakness, urgent evaluation needed.'</li><li>'Unconscious patient found, immediate medical response required.'</li><li>'Urgent: Suspected heart attack, immediate medical attention required.'</li></ul> |
| 1 | <ul><li>'Reminder: Your dental check-up is scheduled for Monday, February 05.'</li><li>'Reminder: Your dental check-up is scheduled for Saturday, February 24.'</li><li>'Nutritionist appointment reminder for Sunday, January 21.'</li></ul> |
| 0 | <ul><li>'Could you verify your lifestyle contact details in our records?'</li><li>'Kindly update your emergency contact list at your earliest convenience.'</li><li>'We request you to update your wellness information for our records.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9633 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("konsman/setfit-messages-generated-v2")
# Run inference
preds = model("Sudden severe chest pain, suspecting a cardiac emergency.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 7 | 9.25 | 12 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2.2041595048800003e-05, 2.2041595048800003e-05)
- head_learning_rate: 2.2041595048800003e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.1564 | - |
| 0.2083 | 50 | 0.0039 | - |
| 0.4167 | 100 | 0.0006 | - |
| 0.625 | 150 | 0.0003 | - |
| 0.8333 | 200 | 0.0003 | - |
| 1.0417 | 250 | 0.0002 | - |
| 1.25 | 300 | 0.0002 | - |
| 1.4583 | 350 | 0.0002 | - |
| 1.6667 | 400 | 0.0002 | - |
| 1.875 | 450 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
BioNLP
|
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
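A minimal sketch of that two-step procedure with the `setfit` library (illustrative only; the tiny dataset and its column names are placeholders, not the data this model was trained on):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; "text" and "label" are assumed column names
train_ds = Dataset.from_dict({
    "text": ["Urgent: chest pain reported.", "Reminder: dental check-up on Monday."],
    "label": [2, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = Trainer(model=model, args=TrainingArguments(num_epochs=2), train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fitting the classification head
```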
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'Rapid onset of confusion and weakness, urgent evaluation needed.'</li><li>'Unconscious patient found, immediate medical response required.'</li><li>'Urgent: Suspected heart attack, immediate medical attention required.'</li></ul> |
| 1 | <ul><li>'Reminder: Your dental check-up is scheduled for Monday, February 05.'</li><li>'Reminder: Your dental check-up is scheduled for Saturday, February 24.'</li><li>'Nutritionist appointment reminder for Sunday, January 21.'</li></ul> |
| 0 | <ul><li>'Could you verify your lifestyle contact details in our records?'</li><li>'Kindly update your emergency contact list at your earliest convenience.'</li><li>'We request you to update your wellness information for our records.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9633 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("konsman/setfit-messages-generated-v2")
# Run inference
preds = model("Sudden severe chest pain, suspecting a cardiac emergency.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 7 | 9.25 | 12 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2.2041595048800003e-05, 2.2041595048800003e-05)
- head_learning_rate: 2.2041595048800003e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.1564 | - |
| 0.2083 | 50 | 0.0039 | - |
| 0.4167 | 100 | 0.0006 | - |
| 0.625 | 150 | 0.0003 | - |
| 0.8333 | 200 | 0.0003 | - |
| 1.0417 | 250 | 0.0002 | - |
| 1.25 | 300 | 0.0002 | - |
| 1.4583 | 350 | 0.0002 | - |
| 1.6667 | 400 | 0.0002 | - |
| 1.875 | 450 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "A gentle nudge to complete the healthcare webinar questionnaire sent last week."}, {"text": "Sudden severe chest pain, suspecting a cardiac emergency."}, {"text": "Annual physical examination due in Tuesday, March 05. Please book an appointment."}, {"text": "Please confirm your attendance at the lifestyle next month."}, {"text": "Could you verify your emergency contact details in our records?"}], "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9633333333333334, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,109 |
du33169/t5-base-finetuned-GLUE-SST2
|
du33169
| null |
[
"safetensors",
"t5",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"model-index",
"region:us"
] | 2024-09-24T10:08:45Z |
2024-09-24T10:09:42+00:00
| 5 | 0 |
---
base_model: google-t5/t5-base
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: SST2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- type: accuracy
value: 0.948394495412844
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST2
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
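As a rough illustration, inference might look like the sketch below (the `sst2 sentence:` prefix follows the original T5 GLUE convention and is an assumption about how this checkpoint expects its input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "du33169/t5-base-finetuned-GLUE-SST2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("sst2 sentence: a gripping, beautifully shot film", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "positive"
```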
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1443 | 1.0 | 2105 | 0.2072 | 0.9323 |
| 0.1152 | 2.0 | 4210 | 0.2127 | 0.9404 |
| 0.0849 | 3.0 | 6315 | 0.2156 | 0.9438 |
| 0.0709 | 4.0 | 8420 | 0.2225 | 0.9484 |
| 0.06 | 5.0 | 10525 | 0.2719 | 0.9404 |
| 0.0507 | 6.0 | 12630 | 0.2911 | 0.9404 |
| 0.0435 | 7.0 | 14735 | 0.3279 | 0.9335 |
| 0.0357 | 8.0 | 16840 | 0.3566 | 0.9312 |
| 0.0274 | 9.0 | 18945 | 0.3876 | 0.9358 |
| 0.0253 | 10.0 | 21050 | 0.4034 | 0.9381 |
### Framework versions
- Transformers 4.43.3
- Pytorch 1.11.0+cu113
- Datasets 2.20.0
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SST2
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2225
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
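As a rough illustration, inference might look like the sketch below (the `sst2 sentence:` prefix follows the original T5 GLUE convention and is an assumption about how this checkpoint expects its input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "du33169/t5-base-finetuned-GLUE-SST2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("sst2 sentence: a gripping, beautifully shot film", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "positive"
```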
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1443 | 1.0 | 2105 | 0.2072 | 0.9323 |
| 0.1152 | 2.0 | 4210 | 0.2127 | 0.9404 |
| 0.0849 | 3.0 | 6315 | 0.2156 | 0.9438 |
| 0.0709 | 4.0 | 8420 | 0.2225 | 0.9484 |
| 0.06 | 5.0 | 10525 | 0.2719 | 0.9404 |
| 0.0507 | 6.0 | 12630 | 0.2911 | 0.9404 |
| 0.0435 | 7.0 | 14735 | 0.3279 | 0.9335 |
| 0.0357 | 8.0 | 16840 | 0.3566 | 0.9312 |
| 0.0274 | 9.0 | 18945 | 0.3876 | 0.9358 |
| 0.0253 | 10.0 | 21050 | 0.4034 | 0.9381 |
### Framework versions
- Transformers 4.43.3
- Pytorch 1.11.0+cu113
- Datasets 2.20.0
- Tokenizers 0.19.1
|
{"base_model": "google-t5/t5-base", "datasets": ["glue"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "SST2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE SST2", "type": "glue", "args": "sst2"}, "metrics": [{"type": "accuracy", "value": 0.948394495412844, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,110 |
yulittlemoon/test-summarization
|
yulittlemoon
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-17T09:11:11Z |
2023-10-17T09:11:21+00:00
| 93 | 0 |
---
base_model: t5-small
datasets:
- xsum
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: test-summarization
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- type: rouge
value: 28.7363
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4496
- Rouge1: 28.7363
- Rouge2: 8.023
- Rougel: 22.6496
- Rougelsum: 22.644
- Gen Len: 18.8226
## Model description
More information needed
## Intended uses & limitations
More information needed
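A minimal inference sketch (illustrative; for T5 checkpoints the `summarize: ` task prefix is normally applied through the model config, and the generation lengths are assumptions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yulittlemoon/test-summarization")
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```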
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6873 | 1.0 | 25506 | 2.4496 | 28.7363 | 8.023 | 22.6496 | 22.644 | 18.8226 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4496
- Rouge1: 28.7363
- Rouge2: 8.023
- Rougel: 22.6496
- Rougelsum: 22.644
- Gen Len: 18.8226
## Model description
More information needed
## Intended uses & limitations
More information needed
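A minimal inference sketch (illustrative; for T5 checkpoints the `summarize: ` task prefix is normally applied through the model config, and the generation lengths are assumptions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yulittlemoon/test-summarization")
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```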
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6873 | 1.0 | 25506 | 2.4496 | 28.7363 | 8.023 | 22.6496 | 22.644 | 18.8226 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "t5-small", "datasets": ["xsum"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "test-summarization", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "validation", "args": "default"}, "metrics": [{"type": "rouge", "value": 28.7363, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 45,111 |
Thant123/distilbert-base-uncased-finetuned-emotion
|
Thant123
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-24T12:02:03Z |
2022-03-24T12:17:39+00:00
| 119 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.924
name: Accuracy
- type: f1
value: 0.9241019999324234
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
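A minimal inference sketch with the `transformers` pipeline (the raw `LABEL_n` output names are an assumption; the emotion dataset orders its labels as sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Thant123/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}] where label 1 maps to "joy"
```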
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8204 | 1.0 | 250 | 0.3160 | 0.9035 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2270 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
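A minimal inference sketch with the `transformers` pipeline (the raw `LABEL_n` output names are an assumption; the emotion dataset orders its labels as sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Thant123/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99...}] where label 1 maps to "joy"
```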
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8204 | 1.0 | 250 | 0.3160 | 0.9035 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2270 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
{"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.924, "name": "Accuracy"}, {"type": "f1", "value": 0.9241019999324234, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,112 |
agentlans/zhtw-en
|
agentlans
|
translation
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"zh",
"dataset:zetavg/coct-en-zh-tw-translations-twp-300k",
"base_model:Helsinki-NLP/opus-mt-zh-en",
"base_model:finetune:Helsinki-NLP/opus-mt-zh-en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-03-07T23:11:39Z |
2025-03-11T08:27:37+00:00
| 106 | 0 |
---
base_model: Helsinki-NLP/opus-mt-zh-en
datasets:
- zetavg/coct-en-zh-tw-translations-twp-300k
language:
- en
- zh
library_name: transformers
license: cc-by-4.0
pipeline_tag: translation
tags:
- generated_from_trainer
model-index:
- name: zhtw-en
results: []
---
# zhtw-en
<details>
<summary>English</summary>
This model translates Traditional Chinese sentences into English, with a focus on understanding Taiwanese-style Traditional Chinese and producing more accurate English translations.
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on the [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4350
- Num Input Tokens Seen: 55653732
## Intended Uses & Limitations
### Intended Use Cases
- Translating single sentences from Chinese to English.
- Applications requiring understanding of the Chinese language as spoken in Taiwan.
### Limitations
- Designed for single-sentence translation, so it will not perform well on longer texts without pre-processing (a simple splitting sketch follows the example below)
- Sometimes hallucinates or omits information, especially with short or long inputs
- Further fine-tuning should address this
## Training and Evaluation Data
This model was trained and evaluated on the [Corpus of Contemporary Taiwanese Mandarin (COCT) translations](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
- **Training Data:** 80% of the COCT dataset
- **Validation Data:** 20% of the COCT dataset
</details>
<details>
<summary>Chinese</summary>
該模型旨在將繁體中文翻譯成英文,重點是理解台灣風格的繁體中文並產生更準確的英文翻譯。
模型基於 [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) 並在 [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集上進行微調。
在評估集上,模型取得了以下結果:
- **損失**:2.4350
- **處理的輸入標記數量**:55,653,732
## 預期用途與限制
### 預期用途
- 將單一中文句子翻譯為英文。
- 適用於需要理解台灣中文的應用程式。
### 限制
- 本模型專為單句翻譯設計,因此在處理較長文本時可能表現不佳,若未經預處理。
- 在某些情況下,模型可能會產生幻覺或遺漏信息,特別是在輸入過短或過長的情況下。
- 進一步的微調將有助於改善這些問題。
## 訓練與評估數據
該模型使用 [當代台灣普通話語料庫 (COCT)](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集進行訓練和評估。
- **訓練資料**:COCT 資料集的 80%
- **驗證資料**:COCT 資料集的 20%
</details>
## Example
```python
from transformers import pipeline
model_checkpoint = "agentlans/zhtw-en"
translator = pipeline("translation", model=model_checkpoint)
# 摘自中文維基百科的今日文章
# From Chinese Wikipedia's article of the day
translator("《阿奇大戰鐵血戰士》是2015年4至7月黑馬漫畫和阿奇漫畫在美國發行的四期限量連環漫畫圖書,由亞歷克斯·德坎皮創作,費爾南多·魯伊斯繪圖,屬跨公司跨界作品。")[0]['translation_text']
# 輸出
# Output
# Acer's Iron Blood Fighter is a four-year series of comic books published in the United States by Black Horse and Ah Chi comics from April to July of that year. The book was created by Alexander d'Campie and painted by Philnanto Ruiz. It is a cross-firm work.
# 與我自己的黃金標準翻譯比較:
# Compare with my own gold standard translation:
# "Archie vs. Predator" is a limited four-issue comic book series published by Black Horse and Archie Comics in the United States from April to July 2015. It was created by Alex de Campi and drawn by Fernando Ruiz. It's a crossover work.
```
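Since the model targets single sentences, longer passages can be pre-split on sentence-final punctuation before translation. A minimal sketch reusing the `translator` pipeline above (the regex splitter is a simplistic assumption, not part of this model):
```python
import re

def translate_long_text(text):
    # Naive split on Chinese sentence-final punctuation, keeping each delimiter
    sentences = [s for s in re.split(r"(?<=[。!?])", text) if s.strip()]
    return " ".join(t["translation_text"] for t in translator(sentences))
```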
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate:** 5e-05
- **Train Batch Size:** 8
- **Eval Batch Size:** 8
- **Seed:** 42
- **Optimizer:** adamw\_torch with betas=(0.9,0.999) and epsilon=1e-08
- **LR Scheduler Type:** linear
- **Number of Epochs:** 3.0
### Training Results
<details>
<summary>Click here to see the training and validation losses</summary>
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 3.2254 | 0.0804 | 2500 | 2.9105 | 1493088 |
| 3.0946 | 0.1608 | 5000 | 2.8305 | 2990968 |
| 3.0473 | 0.2412 | 7500 | 2.7737 | 4477792 |
| 2.9633 | 0.3216 | 10000 | 2.7307 | 5967560 |
| 2.9355 | 0.4020 | 12500 | 2.6843 | 7463192 |
| 2.9076 | 0.4824 | 15000 | 2.6587 | 8950264 |
| 2.8714 | 0.5628 | 17500 | 2.6304 | 10443344 |
| 2.8716 | 0.6433 | 20000 | 2.6025 | 11951096 |
| 2.7989 | 0.7237 | 22500 | 2.5822 | 13432464 |
| 2.7941 | 0.8041 | 25000 | 2.5630 | 14919424 |
| 2.7692 | 0.8845 | 27500 | 2.5497 | 16415080 |
| 2.757 | 0.9649 | 30000 | 2.5388 | 17897832 |
| 2.7024 | 1.0453 | 32500 | 2.6006 | 19384812 |
| 2.7248 | 1.1257 | 35000 | 2.6042 | 20876844 |
| 2.6764 | 1.2061 | 37500 | 2.5923 | 22372340 |
| 2.6854 | 1.2865 | 40000 | 2.5793 | 23866100 |
| 2.683 | 1.3669 | 42500 | 2.5722 | 25348084 |
| 2.6871 | 1.4473 | 45000 | 2.5538 | 26854100 |
| 2.6551 | 1.5277 | 47500 | 2.5443 | 28332612 |
| 2.661 | 1.6081 | 50000 | 2.5278 | 29822156 |
| 2.6497 | 1.6885 | 52500 | 2.5266 | 31319476 |
| 2.6281 | 1.7689 | 55000 | 2.5116 | 32813220 |
| 2.6067 | 1.8494 | 57500 | 2.5047 | 34298052 |
| 2.6112 | 1.9298 | 60000 | 2.4935 | 35783604 |
| 2.5207 | 2.0102 | 62500 | 2.4946 | 37281092 |
| 2.4799 | 2.0906 | 65000 | 2.4916 | 38768588 |
| 2.4727 | 2.1710 | 67500 | 2.4866 | 40252972 |
| 2.4719 | 2.2514 | 70000 | 2.4760 | 41746300 |
| 2.4738 | 2.3318 | 72500 | 2.4713 | 43241188 |
| 2.4629 | 2.4122 | 75000 | 2.4630 | 44730244 |
| 2.4524 | 2.4926 | 77500 | 2.4575 | 46231060 |
| 2.435 | 2.5730 | 80000 | 2.4553 | 47718964 |
| 2.4621 | 2.6534 | 82500 | 2.4475 | 49209724 |
| 2.4492 | 2.7338 | 85000 | 2.4440 | 50712980 |
| 2.4536 | 2.8142 | 87500 | 2.4394 | 52204380 |
| 2.4148 | 2.8946 | 90000 | 2.4360 | 53695620 |
| 2.4243 | 2.9750 | 92500 | 2.4350 | 55190020 |
</details>
### Framework Versions
- Transformers 4.48.1
- Pytorch 2.3.0+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
| null |
Non_BioNLP
|
# zhtw-en
<details>
<summary>English</summary>
This model translates Traditional Chinese sentences into English, with a focus on understanding Taiwanese-style Traditional Chinese and producing more accurate English translations.
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on the [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4350
- Num Input Tokens Seen: 55653732
## Intended Uses & Limitations
### Intended Use Cases
- Translating single sentences from Chinese to English.
- Applications requiring understanding of the Chinese language as spoken in Taiwan.
### Limitations
- Designed for single-sentence translation, so it will not perform well on longer texts without pre-processing (a simple splitting sketch follows the example below)
- Sometimes hallucinates or omits information, especially with short or long inputs
- Further fine-tuning should address this
## Training and Evaluation Data
This model was trained and evaluated on the [Corpus of Contemporary Taiwanese Mandarin (COCT) translations](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) dataset.
- **Training Data:** 80% of the COCT dataset
- **Validation Data:** 20% of the COCT dataset
</details>
<details>
<summary>Chinese</summary>
該模型旨在將繁體中文翻譯成英文,重點是理解台灣風格的繁體中文並產生更準確的英文翻譯。
模型基於 [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) 並在 [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集上進行微調。
在評估集上,模型取得了以下結果:
- **損失**:2.4350
- **處理的輸入標記數量**:55,653,732
## 預期用途與限制
### 預期用途
- 將單一中文句子翻譯為英文。
- 適用於需要理解台灣中文的應用程式。
### 限制
- 本模型專為單句翻譯設計,因此在處理較長文本時可能表現不佳,若未經預處理。
- 在某些情況下,模型可能會產生幻覺或遺漏信息,特別是在輸入過短或過長的情況下。
- 進一步的微調將有助於改善這些問題。
## 訓練與評估數據
該模型使用 [當代台灣普通話語料庫 (COCT)](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k) 資料集進行訓練和評估。
- **訓練資料**:COCT 資料集的 80%
- **驗證資料**:COCT 資料集的 20%
</details>
## Example
```python
from transformers import pipeline
model_checkpoint = "agentlans/zhtw-en"
translator = pipeline("translation", model=model_checkpoint)
# 摘自中文維基百科的今日文章
# From Chinese Wikipedia's article of the day
translator("《阿奇大戰鐵血戰士》是2015年4至7月黑馬漫畫和阿奇漫畫在美國發行的四期限量連環漫畫圖書,由亞歷克斯·德坎皮創作,費爾南多·魯伊斯繪圖,屬跨公司跨界作品。")[0]['translation_text']
# 輸出
# Output
# Acer's Iron Blood Fighter is a four-year series of comic books published in the United States by Black Horse and Ah Chi comics from April to July of that year. The book was created by Alexander d'Campie and painted by Philnanto Ruiz. It is a cross-firm work.
# 與我自己的黃金標準翻譯比較:
# Compare with my own gold standard translation:
# "Archie vs. Predator" is a limited four-issue comic book series published by Black Horse and Archie Comics in the United States from April to July 2015. It was created by Alex de Campi and drawn by Fernando Ruiz. It's a crossover work.
```
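Since the model targets single sentences, longer passages can be pre-split on sentence-final punctuation before translation. A minimal sketch reusing the `translator` pipeline above (the regex splitter is a simplistic assumption, not part of this model):
```python
import re

def translate_long_text(text):
    # Naive split on Chinese sentence-final punctuation, keeping each delimiter
    sentences = [s for s in re.split(r"(?<=[。!?])", text) if s.strip()]
    return " ".join(t["translation_text"] for t in translator(sentences))
```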
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate:** 5e-05
- **Train Batch Size:** 8
- **Eval Batch Size:** 8
- **Seed:** 42
- **Optimizer:** adamw\_torch with betas=(0.9,0.999) and epsilon=1e-08
- **LR Scheduler Type:** linear
- **Number of Epochs:** 3.0
### Training Results
<details>
<summary>Click here to see the training and validation losses</summary>
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:-----:|:---------------:|:-----------------:|
| 3.2254 | 0.0804 | 2500 | 2.9105 | 1493088 |
| 3.0946 | 0.1608 | 5000 | 2.8305 | 2990968 |
| 3.0473 | 0.2412 | 7500 | 2.7737 | 4477792 |
| 2.9633 | 0.3216 | 10000 | 2.7307 | 5967560 |
| 2.9355 | 0.4020 | 12500 | 2.6843 | 7463192 |
| 2.9076 | 0.4824 | 15000 | 2.6587 | 8950264 |
| 2.8714 | 0.5628 | 17500 | 2.6304 | 10443344 |
| 2.8716 | 0.6433 | 20000 | 2.6025 | 11951096 |
| 2.7989 | 0.7237 | 22500 | 2.5822 | 13432464 |
| 2.7941 | 0.8041 | 25000 | 2.5630 | 14919424 |
| 2.7692 | 0.8845 | 27500 | 2.5497 | 16415080 |
| 2.757 | 0.9649 | 30000 | 2.5388 | 17897832 |
| 2.7024 | 1.0453 | 32500 | 2.6006 | 19384812 |
| 2.7248 | 1.1257 | 35000 | 2.6042 | 20876844 |
| 2.6764 | 1.2061 | 37500 | 2.5923 | 22372340 |
| 2.6854 | 1.2865 | 40000 | 2.5793 | 23866100 |
| 2.683 | 1.3669 | 42500 | 2.5722 | 25348084 |
| 2.6871 | 1.4473 | 45000 | 2.5538 | 26854100 |
| 2.6551 | 1.5277 | 47500 | 2.5443 | 28332612 |
| 2.661 | 1.6081 | 50000 | 2.5278 | 29822156 |
| 2.6497 | 1.6885 | 52500 | 2.5266 | 31319476 |
| 2.6281 | 1.7689 | 55000 | 2.5116 | 32813220 |
| 2.6067 | 1.8494 | 57500 | 2.5047 | 34298052 |
| 2.6112 | 1.9298 | 60000 | 2.4935 | 35783604 |
| 2.5207 | 2.0102 | 62500 | 2.4946 | 37281092 |
| 2.4799 | 2.0906 | 65000 | 2.4916 | 38768588 |
| 2.4727 | 2.1710 | 67500 | 2.4866 | 40252972 |
| 2.4719 | 2.2514 | 70000 | 2.4760 | 41746300 |
| 2.4738 | 2.3318 | 72500 | 2.4713 | 43241188 |
| 2.4629 | 2.4122 | 75000 | 2.4630 | 44730244 |
| 2.4524 | 2.4926 | 77500 | 2.4575 | 46231060 |
| 2.435 | 2.5730 | 80000 | 2.4553 | 47718964 |
| 2.4621 | 2.6534 | 82500 | 2.4475 | 49209724 |
| 2.4492 | 2.7338 | 85000 | 2.4440 | 50712980 |
| 2.4536 | 2.8142 | 87500 | 2.4394 | 52204380 |
| 2.4148 | 2.8946 | 90000 | 2.4360 | 53695620 |
| 2.4243 | 2.9750 | 92500 | 2.4350 | 55190020 |
</details>
### Framework Versions
- Transformers 4.48.1
- Pytorch 2.3.0+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
{"base_model": "Helsinki-NLP/opus-mt-zh-en", "datasets": ["zetavg/coct-en-zh-tw-translations-twp-300k"], "language": ["en", "zh"], "library_name": "transformers", "license": "cc-by-4.0", "pipeline_tag": "translation", "tags": ["generated_from_trainer"], "model-index": [{"name": "zhtw-en", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,113 |
NLP-LasmarMotas/marian-finetuned-kde4-en-to-es
|
NLP-LasmarMotas
|
translation
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-10T18:22:19Z |
2023-10-12T00:31:48+00:00
| 20 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-es
datasets:
- kde4
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-es
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: kde4
type: kde4
config: en-es
split: train
args: en-es
metrics:
- type: bleu
value: 54.18479211969149
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7668
- Bleu: 54.1848
## Model description
More information needed
## Intended uses & limitations
More information needed
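A minimal inference sketch (illustrative; the example string is a typical KDE4-style interface message):
```python
from transformers import pipeline

translator = pipeline("translation", model="NLP-LasmarMotas/marian-finetuned-kde4-en-to-es")
print(translator("Default to expanded threads")[0]["translation_text"])
```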
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7668
- Bleu: 54.1848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "Helsinki-NLP/opus-mt-en-es", "datasets": ["kde4"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-es", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "config": "en-es", "split": "train", "args": "en-es"}, "metrics": [{"type": "bleu", "value": 54.18479211969149, "name": "Bleu"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,114 |
fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5
|
fathyshalab
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-02-12T11:00:14Z |
2023-02-12T11:00:36+00:00
| 12 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
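Both steps run inside a single `SetFitTrainer.train()` call; a minimal sketch is below (API as of setfit 0.x, and the base checkpoint plus the two labelled examples are placeholders, not this model's actual training setup):

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder few-shot data; real runs typically use ~8-16 labelled examples per class.
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding body
    num_iterations=20,                # number of contrastive pairs generated per example
)
trainer.train()                       # runs step 1, then fits the classification head (step 2)
```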
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-2-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,115 |
TUKE-KEMT/slovak-t5-base
|
TUKE-KEMT
|
text2text-generation
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"sk",
"dataset:mc4",
"dataset:oscar-corpus/oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-16T14:32:20Z |
2024-09-28T17:24:36+00:00
| 306 | 1 |
---
datasets:
- mc4
- oscar-corpus/oscar
language:
- sk
license: cc-by-sa-4.0
---
# Slovak T5 Base
Monolingual Slovak model, trained from scratch on web data.
This model has to be fine-tuned for a specific task; it does not support any instructions or prefixes yet.
After fine-tuning, it is suitable for tasks such as:
- Question answering
- Summarization
- Generation of synthetic data
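A minimal loading sketch (the sentinel-token prompt below merely exercises the standard T5 span-corruption objective used by NanoT5; it is not an instruction format):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("TUKE-KEMT/slovak-t5-base")
model = T5ForConditionalGeneration.from_pretrained("TUKE-KEMT/slovak-t5-base")

# Span infilling with a sentinel token -- the pretraining objective, not an instruction.
inputs = tokenizer("Bratislava je hlavné mesto <extra_id_0>.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```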
## Training data
Trained on the Slovak subset of the [mc4](https://huggingface.co/datasets/mc4) dataset with [NanoT5](https://github.com/PiotrNawrot/nanoT5) using default settings.
The training corpus contains 14B tokens in total after deduplication.
It consists of the Slovak data from:
- mc4
- Oscar
- Wikipedia
- custom collection of newspaper articles
- custom collection of web pages
- Slovak part of the European Parliament Proceedings
## Hyperparameters:
- Input length: 512 tokens
- Effective Batch Size: 128
- Steps: 200000
- Optimizer: Adafactor
- Scheduler: Legacy
- Learning Rate: 0.2
- Gradient clip: 1
## Evaluation
After fine-tuning for question answering on SK-QUAD, the models achieve:
- Slovak T5 Base: 71.31 F1
- Umt5 Base: 69.22 F1
- Mt5 Base: 65.29 F1
- Mt0 Base: 65.17 F1
## Bias
The model is published as it is. We did not make any specific attempts to clean up the data.
## License
Free for scientific and commercial use under the terms of: cc-by-sa-4.0
## Credits
- Daniel Hládek @ KEMT FIE TUKE
| null |
Non_BioNLP
|
# Slovak T5 Base
Monolingual Slovak model, trained from scratch on web data.
This model has to be fine-tuned for a specific task; it does not support any instructions or prefixes yet.
After fine-tuning, it is suitable for tasks such as:
- Question answering
- Summarization
- Generation of synthetic data
## Training data
Trained on the Slovak subset of the [mc4](https://huggingface.co/datasets/mc4) dataset with [NanoT5](https://github.com/PiotrNawrot/nanoT5) using default settings.
The training corpus contains 14B tokens in total after deduplication.
It consists of the Slovak data from:
- mc4
- Oscar
- Wikipedia
- custom collection of newspaper articles
- custom collection of web pages
- Slovak part of the European Parliament Proceedings
## Hyperparameters:
- Input length: 512 tokens
- Effective Batch Size: 128
- Steps: 200000
- Optimizer: Adafactor
- Scheduler: Legacy
- Learning Rate: 0.2
- Gradient clip: 1
## Evaluation
After fine-tuning for question answering on SK-QUAD, the models achieve:
- Slovak T5 Base: 71.31 F1
- Umt5 Base: 69.22 F1
- Mt5 Base: 65.29 F1
- Mt0 Base: 65.17 F1
## Bias
The model is published as it is. We did not make any specific attempts to clean up the data.
## License
Free for scientific and commercial use under the terms of: cc-by-sa-4.0
## Credits
- Daniel Hládek @ KEMT FIE TUKE
|
{"datasets": ["mc4", "oscar-corpus/oscar"], "language": ["sk"], "license": "cc-by-sa-4.0"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,116 |
gaudi/opus-mt-efi-fi-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-18T14:56:15Z |
2024-10-19T00:01:49+00:00
| 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-efi-fi --output_dir ./ctranslate2/opus-mt-efi-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-efi-fi-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-efi-fi-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-efi-fi-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi) by Helsinki-NLP.
| null |
Non_BioNLP
|
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-efi-fi --output_dir ./ctranslate2/opus-mt-efi-fi-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-efi-fi-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-efi-fi-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-efi-fi-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi) by Helsinki-NLP.
|
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
|
task
|
[
"TRANSLATION"
] | 45,117 |
huoxu/test-bge-m3
|
huoxu
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-25T07:10:25Z |
2024-07-25T23:11:42+00:00
| 7 | 0 |
---
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0
- Accelerate: 0.27.2
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
|
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0
- Accelerate: 0.27.2
- Datasets: 2.17.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
{"datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"], "widget": []}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,118 |
TransQuest/siamesetransquest-da-ne_en-wiki
|
TransQuest
|
feature-extraction
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"Quality Estimation",
"siamesetransquest",
"da",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-06-04T11:20:50+00:00
| 14 | 0 |
---
language: ne-en
license: apache-2.0
tags:
- Quality Estimation
- siamesetransquest
- da
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel
model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ne_en-wiki")
predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| null |
Non_BioNLP
|
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel
model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ne_en-wiki")
predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
{"language": "ne-en", "license": "apache-2.0", "tags": ["Quality Estimation", "siamesetransquest", "da"]}
|
task
|
[
"TRANSLATION"
] | 45,119 |
rawani123/autotrain-cpn5h-33x3s
|
rawani123
|
text-classification
|
[
"tensorboard",
"safetensors",
"roberta",
"autotrain",
"text-classification",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"region:us"
] | 2024-10-14T17:37:29Z |
2024-10-14T17:39:26+00:00
| 5 | 0 |
---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.022922784090042114
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
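A minimal usage sketch (assuming the standard text-classification pipeline; the label names depend on this AutoTrain run's configuration):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rawani123/autotrain-cpn5h-33x3s")
print(classifier("I love AutoTrain"))  # e.g. [{'label': '...', 'score': ...}]
```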
| null |
Non_BioNLP
|
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.022922784090042114
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
|
{"base_model": "cardiffnlp/twitter-roberta-base-sentiment-latest", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,120 |
SBB/sbb_ned-de
|
SBB
| null |
[
"pytorch",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | 2023-09-07T11:52:31Z |
2023-09-12T13:10:42+00:00
| 0 | 1 |
---
license: apache-2.0
---
# Model Card for sbb_ned-de
<!-- Provide a quick summary of what the model is/does. -->
This model is part of a named entity disambiguation and linking system (NED, NEL).
The system was developed by Berlin State Library (SBB) in the [QURATOR](https://staatsbibliothek-berlin.de/die-staatsbibliothek/projekte/project-id-1060-2018) project.
Questions and comments about the model can be directed to Kai Labusch at [email protected] or Clemens Neudecker at [email protected].
# Table of Contents
- [Model Card for sbb_ned-de](#model-card-for-sbb_ned-de)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use](#downstream-use)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Training Hyperparameters](#training-hyperparameters)
- [Training Results](#training-results)
- [Evaluation](#evaluation)
- [Testing Data, Factors and Metrics](#testing-data-factors-and-metrics)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Software](#software)
- [Citation](#citation)
- [More Information](#more-information)
- [Model Card Authors](#model-card-authors)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
This model forms the core of a named entity disambiguation and linking system (NED, NEL) that consists of three components:
(i) Lookup of possible candidates in an approximative nearest neighbour (ANN) index that stores BERT embeddings.
(ii) Evaluation of each candidate by comparison of text passages of Wikipedia performed by a purpose-trained BERT model.
(iii) Final ranking of candidates on the basis of information gathered from previous steps.
This model is used in order to generate the BERT embeddings in step (i) and to perform the comparison of the text passages in step (ii).
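As an illustration of step (i), a hypothetical sketch of turning a mention context into an embedding for the ANN index (mean pooling over the parent model's last hidden state is an assumption; the production pooling and indexing code lives in the GitHub repository linked below):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("Alexander von Humboldt bereiste 1799 Südamerika.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
embedding = hidden.mean(dim=1).squeeze(0)       # pooled vector stored in the ANN index
print(embedding.shape)                          # torch.Size([768])
```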
- **Developed by:** [Kai Labusch](https://huggingface.co/labusch)
- **Shared by:** [Staatsbibliothek zu Berlin / Berlin State Library](https://huggingface.co/SBB)
- **Model type:** Language models
- **Language(s) (NLP):** de
- **License:** apache-2.0
- **Parent Model:** The BERT base multilingual cased model as provided by [Google](https://huggingface.co/bert-base-multilingual-cased)
- **Resources for more information:**
- [GitHub Repo](https://github.com/qurator-spk/sbb_ned/tree/6a2a48a9054b3a187b117e490513de5c41638844)
- Associated Paper 1 [CLEF 2020 HIPE paper](http://ceur-ws.org/Vol-2696/paper_163.pdf)
- Associated Paper 2 [CLEF 2022 HIPE paper](http://ceur-ws.org/Vol-3180/paper-85.pdf)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Disciplines such as the *digital humanities* create use cases for text and data mining or the semantic enrichment of full-texts with named entity recognition and linking, e.g., for the reconstruction of historical social networks. NED/NEL opens up new possibilities for improved access to text, knowledge creation and clustering of texts. Linking against Wikidata-IDs makes it possible to join the linked texts with the world knowledge provided by Wikidata by means of arbitrary SPARQL queries.
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The NED/NEL system was developed on the basis of the [digitised collections of the Staatsbibliothek zu Berlin -- Berlin State Library](https://digital.staatsbibliothek-berlin.de/). The emphasis of this system is therefore on recognition and disambiguation of entities in historical texts.
## Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Due to the historical nature of the documents being digitised in libraries, standard methods and procedures from the NLP domain typically require additional adaptation in order to successfully deal with the historical spelling variation and the remaining noise resulting from OCR errors. For use on other textual material, e.g. with an emphasis on entities comprised in other Wikipedias than the German, English and French ones, significant adaptations have to be performed. In such a case, the methodology used to develop the process as described in the related papers can serve as a showcase.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
Though technically possible, named entity disambiguation and linking does not necessarily work well on contemporary data. This is because the disambiguation process relies on a subset of entities available on Wikidata. In other words: in order to be reliably identified, those persons, places, or organizations have to be present in the subset extracted from Wikidata.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The identification and disambiguation of named entities in historical and contemporary texts is a task contributing to knowledge creation aiming at enhancing scientific research and better discoverability of information in digitised historical texts. The aim of the development of these models was to improve this knowledge creation process, an endeavour that was not undertaken for profit. The results of the applied models are freely accessible for the users of the digitised collections of the Berlin State Library. Against this backdrop, ethical challenges cannot be identified; rather, improved access and semantic enrichment of the derived full-texts with NER and NEL serves every human being with access to the digital collections of the Berlin State Library. As a limitation, it has to be noted that in historical texts the vast majority of identified and disambiguated persons are white, heterosexual and male, whereas other groups (e.g., those defeated in a war, colonial subjects, or else) are often not mentioned in such texts or are not addressed as identifiable entities with full names.
The knowledge base has been directly derived from Wikidata and Wikipedia in a two-step process. In the first step, relevant entities have been selected by use of appropriate SPARQL queries on the basis of Wikidata. In the second step, for all selected entities relevant text comparison material has been extracted from Wikipedia.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Disambiguation of named entities proves to be challenging beyond the task of automatically identifying entities. The existence of broad variations in the spelling of person and place names because of non-normalized orthography and linguistic change, as well as changes in the naming of places according to the context, adds to this challenge. Historical texts, especially newspapers, contain narrative descriptions and visual representations of minorities and disadvantaged groups without naming them; de-anonymizing such persons and groups is a research task in itself that has only begun to be tackled in the 2020s. The biggest potential for improvement of the NER / NEL / NED system is to be expected with improved OCR performance and NEL recall performance.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data have been made available on Zenodo in the form of a sqlite databases for German text snippets. A data card for this data set is available on Zenodo. The German database is available at [10.5281/zenodo.7767404](https://doi.org/10.5281/zenodo.7767404).
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Before entity disambiguation starts, the input text is run through a named entity recognition (NER) system that tags all person (PER), location (LOC) and organization (ORG) entities, [see the related NER model on Hugging Face](https://huggingface.co/models?other=doi:10.57967/hf/0403). A BERT based NER system that has been developed previously at SBB has been used and described in [this paper](https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/KONVENS2019_paper_4.pdf).
The entity linking and disambiguation works by comparison of continuous text snippets where the entities in question are mentioned. A purpose-trained BERT model (the evaluation model) performs that text comparison task. Therefore, a knowledge base that contains structured information like Wikidata is not sufficient. Rather, additional continuous text is needed where the entities that are part of the knowledge base are discussed, mentioned and referenced. Hence, the knowledge base is derived in such a way that each entity in it has a corresponding Wikipedia page, since the Wikipedia articles contain continuous texts that have been annotated by human authors with references that can serve as ground truth.
### Preprocessing
See section above.
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Since the NED models are purpose-trained BERT derivatives, all the speed and performance properties of standard BERT models apply.
The models were trained on a two-class classification task. Given a pair of sentences, the models decide if the two sentences refer to the same entity or not.
The construction of the training samples is implemented in the [data processor](https://github.com/qurator-spk/sbb_ned/blob/6a2a48a9054b3a187b117e490513de5c41638844/qurator/sbb_ned/ground_truth/data_processor.py) that can be found in the GitHub repo.
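A hypothetical sketch of this pairwise decision (loading via `AutoModelForSequenceClassification` and the label order are assumptions; the exact model wrapper and pre-/post-processing are in the GitHub repo):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The parent model's tokenizer is assumed here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained("SBB/sbb_ned-de", num_labels=2)

sent_a = "Alexander von Humboldt bereiste 1799 Südamerika."
sent_b = "Humboldt ist als Naturforscher und Entdecker bekannt."
inputs = tokenizer(sent_a, sent_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # [P(different entity), P(same entity)] -- label order is an assumption
```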
### Training Hyperparameters
The training can be performed by the [ned-bert](https://github.com/qurator-spk/sbb_ned/blob/6a2a48a9054b3a187b117e490513de5c41638844/qurator/sbb_ned/models/bert.py) command line tool. After installation of the sbb_ned package, type "ned-bert --help" in order to get more information about its functionality.
The training hyperparameters used can be found in the [Makefile](https://github.com/qurator-spk/sbb_ned/blob/6a2a48a9054b3a187b117e490513de5c41638844/Makefile). Here, the **de-ned-train-2**, **en-ned-train-1**, and **fr-ned-train-0** targets have been used in order to train the published models.
### Training Results
During training, the [data processor](https://github.com/qurator-spk/sbb_ned/blob/6a2a48a9054b3a187b117e490513de5c41638844/qurator/sbb_ned/ground_truth/data_processor.py) that feeds the training process continuously generates new sentence pairs without repetition over the entire training period. The models have been trained for roughly two weeks on a V100 GPU. During the entire training period the cross-entropy training loss was evaluated and continued to decrease.
# Evaluation
<!-- This section describes the evaluation protocols and provides the results, or cites relevant papers. -->
A first version of the system was evaluated at [CLEF 2020 HIPE](http://ceur-ws.org/Vol-2696/paper_163.pdf). Several lessons learned from that first evaluation were applied to the system and a second evaluation was performed at [CLEF 2022 HIPE](http://ceur-ws.org/Vol-3180/paper-85.pdf). The models published here are the ones that have been evaluated in the CLEF 2022 HIPE competition.
## Testing Data, Factors and Metrics
Please consider the papers mentioned above. For a more complete overview about the used evaluation methodology read the [CLEF HIPE 2020 Overview Paper](https://ceur-ws.org/Vol-2696/paper_255.pdf) and the [CLEF HIPE 2022 Overview Paper](https://ceur-ws.org/Vol-3180/paper-83.pdf).
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** V100.
- **Hours used:** Roughly 1-2 week(s).
- **Cloud Provider:** No cloud.
- **Compute Region:** Germany.
- **Carbon Emitted:** More information needed.
# Technical Specifications
### Software
See the information and source code published on [GitHub](https://github.com/qurator-spk/sbb_ned/tree/6a2a48a9054b3a187b117e490513de5c41638844).
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@inproceedings{labusch_named_2020,
title = {Named {Entity} {Disambiguation} and {Linking} on {Historic} {Newspaper} {OCR} with {BERT}},
url = {https://ceur-ws.org/Vol-2696/paper_163.pdf},
abstract = {In this paper, we propose a named entity disambiguation and linking (NED, NEL) system that consists of three components: (i) Lookup of possible candidates in an approximative nearest neighbour (ANN) index that stores BERT-embeddings. (ii) Evaluation of each candidate by comparison of text passages of Wikipedia performed by a purpose-trained BERT model. (iii) Final ranking of candidates on the basis of information gathered from previous steps. We participated in the CLEF 2020 HIPE NERC-COARSE and NEL-LIT tasks for German, French, and English. The CLEF HIPE 2020 results show that our NEL approach is competitive in terms of precision but has low recall performance due to insufficient knowledge base coverage of the test data.},
language = {en},
booktitle = {{CLEF}},
author = {Labusch, Kai and Neudecker, Clemens},
year = {2020},
pages = {14},
}
```
**APA:**
(Labusch et al., 2020)
**BibTex**
```bibtex
@inproceedings{labusch_entity_2022,
title = {Entity {Linking} in {Multilingual} {Newspapers} and {Classical} {Commentaries} with {BERT}},
url = {http://ceur-ws.org/Vol-3180/paper-85.pdf},
abstract = {Building on our BERT-based entity recognition and three stage entity linking (EL) system [1] that we evaluated in the CLEF HIPE 2020 challenge [2], we focused in the CLEF HIPE 2022 challenge [3] on the entity linking part by participation in the EL-only tasks. We submitted results for the multilingual newspaper challenge (MNC), the multilingual classical commentary challenge (MCC), and the global adaptation challenge (GAC). This working note presents the most important modifications of the entity linking system in comparison to the HIPE 2020 approach and the additional results that have been obtained on the ajmc, hipe2020, newseye, topres19th, and sonar datasets for German, French, and English. The results show that our entity linking approach can be applied to a broad range of text categories and qualities without heavy adaptation and reveals qualitative differences of the impact of hyperparameters on our system that need further investigation.},
language = {en},
booktitle = {{CLEF}},
author = {Labusch, Kai and Neudecker, Clemens},
year = {2022},
pages = {11},
}
```
**APA:**
(Labusch et al., 2022)
# More Information
A demo of the named entity recognition and disambiguation tool can be found [here](https://ravius.sbb.berlin/sbb-tools/index.html?ppn=766355942&model_id=precomputed&el_model_id=precomputed&task=ner). Please note that the ppn (Pica Production Number) found in the link can be replaced by the ppn of any other work in the [digitised collections of the Staatsbibliothek zu Berlin / Berlin State Library](https://digital.staatsbibliothek-berlin.de/), provided that a full text of that work is available.
**MD5 hash of the German pytorch_model.bin:**
92dbcdb6df705b6eed55faaf1a887b9d
**SHA256 hash of the German pytorch_model.bin:**
56be2b265348fa4c0f3e00567a0cb2186234861490bed4cc5c9a58bc12afa5fe
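To verify the integrity of a downloaded weights file against the published checksums, a quick check with standard coreutils might look like this:

```sh
# Compare the output against the MD5 and SHA256 values listed above.
md5sum pytorch_model.bin
sha256sum pytorch_model.bin
```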
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
[Kai Labusch](mailto:[email protected]) and [Jörg Lehmann](mailto:[email protected])
# Model Card Contact
Questions and comments about the model can be directed to Kai Labusch at [email protected]; questions and comments about the model card can be directed to Jörg Lehmann at [email protected].
# How to Get Started with the Model
How to get started with this model is explained in the ReadMe file of the GitHub repository [over here](https://github.com/qurator-spk/sbb_ned/tree/6a2a48a9054b3a187b117e490513de5c41638844#readme).
Model Card as of September 12th, 2023
| null |
Non_BioNLP
|
{"license": "apache-2.0"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"NAMED_ENTITY_DISAMBIGUATION"
] | 45,121 |
jartine/gemma-2-9b-it-llamafile
|
jartine
| null |
[
"transformers",
"llamafile",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:other",
"endpoints_compatible",
"region:us"
] | 2024-11-02T03:53:17Z |
2025-01-06T02:28:28+00:00
| 923 | 5 |
---
base_model: google/gemma-2-9b-it
library_name: transformers
license: other
license_link: LICENSE
tags:
- llamafile
quantized_by: jartine
prompt_template: "<start_of_turn>system\n{{prompt}}<end_of_turn>\n{{history}}\n<start_of_turn>{{char}} \n"
history_template: '<start_of_turn>{{name}}
{{message}}<end_of_turn>
'
---
# Gemma v2 9b Instruct - llamafile
Gemma v2 is a large language model released by Google on Jun 27th 2024.
- Model creator: [Google](https://huggingface.co/google/)
- Original model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
The model is packaged into executable weights, which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile). This makes it
easy to use the model on Linux, MacOS, Windows, FreeBSD, OpenBSD 7.3,
and NetBSD for AMD64 and ARM64.
*Software Last Updated: 2024-11-01*
## Quickstart
To get started, you need both the Gemma weights, and the llamafile
software. Both of them are included in a single file, which can be
downloaded and run as follows:
```
wget https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile/resolve/main/gemma-2-9b-it.Q6_K.llamafile
chmod +x gemma-2-9b-it.Q6_K.llamafile
./gemma-2-9b-it.Q6_K.llamafile
```
The default mode of operation for these llamafiles is our new command
line chatbot interface.

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.
## Usage
By default, llamafile launches a chatbot in the terminal, and a server
in the background. The chatbot is mostly self-explanatory. You can type
`/help` for further details. See the [llamafile v0.8.15 release
notes](https://github.com/Mozilla-Ocho/llamafile/releases/tag/0.8.15)
for documentation on our newest chatbot features.
To instruct Gemma to do role playing, you can customize the system
prompt as follows:
```
./gemma-2-9b-it.Q6_K.llamafile --chat -p "you are mosaic's godzilla"
```
To view the man page, run:
```
./gemma-2-9b-it.Q6_K.llamafile --help
```
To send a request to the OpenAI API compatible llamafile server, try:
```
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-9b-it",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.0
}'
```
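Because the server speaks the OpenAI API, the official `openai` Python client can also be pointed at it. A minimal sketch (the API key is a placeholder; a local llamafile server does not check it):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local llamafile server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")
resp = client.chat.completions.create(
    model="gemma-9b-it",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.0,
)
print(resp.choices[0].message.content)
```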
If you don't want the chatbot and you only want to run the server:
```
./gemma-2-9b-it.Q6_K.llamafile --server --nobrowser --host 0.0.0.0
```
An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it
may be used, pass the `--help` flag.
```
./gemma-2-9b-it.Q6_K.llamafile --cli -p 'four score and seven' --log-disable
```
You then need to fill out the prompt / history template (see below).
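For example, filling out the prompt template from this card's front matter by hand might look like this (the system and user text are illustrative):

```
./gemma-2-9b-it.Q6_K.llamafile --cli --log-disable -p '<start_of_turn>system
You are a helpful assistant.<end_of_turn>
<start_of_turn>user
Write a haiku about autumn.<end_of_turn>
<start_of_turn>model
'
```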
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
## Troubleshooting
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.
On Linux, the way to avoid run-detector errors is to install the APE
interpreter.
```sh
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```
On Windows there's a 4GB limit on executable sizes. This means you
should download the Q2\_K llamafile. For better quality, consider
instead downloading the official llamafile release binary from
<https://github.com/Mozilla-Ocho/llamafile/releases>, renaming it to
have the .exe file extension, and then saying:
```
.\llamafile-0.8.15.exe -m gemma-2-9b-it.Q6_K.llamafile
```
That will overcome the Windows 4GB file size limit, allowing you to
benefit from bigger, better models.
## Context Window
This model has a maximum context window of 8192 tokens (8k), which is used by
default. You may limit the context window size by passing the `-c N` flag.
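For example, to cap the context at 4096 tokens (reducing KV-cache memory use):

```
./gemma-2-9b-it.Q6_K.llamafile -c 4096
```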
## GPU Acceleration
On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed if you own an NVIDIA GPU. On Windows, if
you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass
the flags `--recompile --gpu amd` the first time you run your llamafile.
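For example, to offload all layers to the GPU (adding the one-time recompile flags on a Windows AMD system):

```
./gemma-2-9b-it.Q6_K.llamafile -ngl 999
./gemma-2-9b-it.Q6_K.llamafile -ngl 999 --recompile --gpu amd
```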
On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open source software, but it
doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
## About llamafile
llamafile is a new format introduced by Mozilla on Nov 20th 2023. It
uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
## About Quantization Formats
This model works well with any quantization format. Q6\_K is the best
choice overall here. We tested, with [our 27b Gemma2
llamafiles](https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile),
that the llamafile implementation of Gemma2 produces identical responses
to the Gemma2 model that's hosted by Google on aistudio.google.com.
We therefore assume these 9b llamafiles are also faithful to Google's
intentions. If you encounter any divergences, then try using the BF16
weights, which have the original fidelity.
## See Also
- <https://huggingface.co/Mozilla/gemma-2-2b-it-llamafile>
- <https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile>
## License
The llamafile software is open source and permissively licensed. However
the weights embedded inside the llamafiles are governed by Google's
Gemma License and Gemma Prohibited Use Policy. See the
[LICENSE](LICENSE) file for further details.
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` by skipping the dtype, but no precision increase will occur (the model weights will simply be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
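As a rough illustration, a manual prompt builder might look like the following sketch; the helper name and structure are ours, not part of the transformers API:

```py
# Sketch: assemble a Gemma-2 chat prompt by hand, mirroring the template above.
def build_gemma_prompt(messages, add_generation_prompt=True):
    # messages: list of {"role": "user" | "model", "content": str}
    parts = ["<bos>"]
    for m in messages:
        parts.append(f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n")
    if add_generation_prompt:
        parts.append("<start_of_turn>model\n")  # cue the model to respond
    return "".join(parts)

prompt = build_gemma_prompt([{"role": "user", "content": "Write a hello world program"}])
```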
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; input data pre-processing is described and posterior evaluations
are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
| null |
Non_BioNLP
|
# Gemma v2 9b Instruct - llamafile
Gemma v2 is a large language model released by Google on Jun 27th 2024.
- Model creator: [Google](https://huggingface.co/google/)
- Original model: [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
The model is packaged into executable weights, which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile). This makes it
easy to use the model on Linux, MacOS, Windows, FreeBSD, OpenBSD 7.3,
and NetBSD for AMD64 and ARM64.
*Software Last Updated: 2024-11-01*
## Quickstart
To get started, you need both the Gemma weights, and the llamafile
software. Both of them are included in a single file, which can be
downloaded and run as follows:
```
wget https://huggingface.co/Mozilla/gemma-2-9b-it-llamafile/resolve/main/gemma-2-9b-it.Q6_K.llamafile
chmod +x gemma-2-9b-it.Q6_K.llamafile
./gemma-2-9b-it.Q6_K.llamafile
```
The default mode of operation for these llamafiles is our new command
line chatbot interface.

Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.
## Usage
By default, llamafile launches a chatbot in the terminal, and a server
in the background. The chatbot is mostly self-explanatory. You can type
`/help` for further details. See the [llamafile v0.8.15 release
notes](https://github.com/Mozilla-Ocho/llamafile/releases/tag/0.8.15)
for documentation on our newest chatbot features.
To instruct Gemma to do role playing, you can customize the system
prompt as follows:
```
./gemma-2-9b-it.Q6_K.llamafile --chat -p "you are mosaic's godzilla"
```
To view the man page, run:
```
./gemma-2-9b-it.Q6_K.llamafile --help
```
To send a request to the OpenAI API compatible llamafile server, try:
```
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma-9b-it",
"messages": [{"role": "user", "content": "Say this is a test!"}],
"temperature": 0.0
}'
```
If you don't want the chatbot and you only want to run the server:
```
./gemma-2-9b-it.Q6_K.llamafile --server --nobrowser --host 0.0.0.0
```
An advanced CLI mode is provided that's useful for shell scripting. You
can use it by passing the `--cli` flag. For additional help on how it
may be used, pass the `--help` flag.
```
./gemma-2-9b-it.Q6_K.llamafile --cli -p 'four score and seven' --log-disable
```
You then need to fill out the prompt / history template (see below).
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
## Troubleshooting
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas-and-troubleshooting)
of the README.
On Linux, the way to avoid run-detector errors is to install the APE
interpreter.
```sh
sudo wget -O /usr/bin/ape https://cosmo.zip/pub/cosmos/bin/ape-$(uname -m).elf
sudo chmod +x /usr/bin/ape
sudo sh -c "echo ':APE:M::MZqFpD::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
sudo sh -c "echo ':APE-jart:M::jartsr::/usr/bin/ape:' >/proc/sys/fs/binfmt_misc/register"
```
On Windows there's a 4GB limit on executable sizes. This means you
should download the Q2\_K llamafile. For better quality, consider
instead downloading the official llamafile release binary from
<https://github.com/Mozilla-Ocho/llamafile/releases>, renaming it to
have the .exe file extension, and then saying:
```
.\llamafile-0.8.15.exe -m gemma-2-9b-it.Q6_K.llamafile
```
That will overcome the Windows 4GB file size limit, allowing you to
benefit from bigger, better models.
## Context Window
This model has a maximum context window of 8192 tokens (8k), which is used by
default. You may limit the context window size by passing the `-c N` flag.
## GPU Acceleration
On GPUs with sufficient RAM, the `-ngl 999` flag may be passed to use
the system's NVIDIA or AMD GPU(s). On Windows, only the graphics card
driver needs to be installed if you own an NVIDIA GPU. On Windows, if
you have an AMD GPU, you should install the ROCm SDK v6.1 and then pass
the flags `--recompile --gpu amd` the first time you run your llamafile.
On NVIDIA GPUs, by default, the prebuilt tinyBLAS library is used to
perform matrix multiplications. This is open source software, but it
doesn't go as fast as closed source cuBLAS. If you have the CUDA SDK
installed on your system, then you can pass the `--recompile` flag to
build a GGML CUDA library just for your system that uses cuBLAS. This
ensures you get maximum performance.
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
## About llamafile
llamafile is a new format introduced by Mozilla on Nov 20th 2023. It
uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
## About Quantization Formats
This model works well with any quantization format. Q6\_K is the best
choice overall here. We tested, with [our 27b Gemma2
llamafiles](https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile),
that the llamafile implementation of Gemma2 produces identical responses
to the Gemma2 model that's hosted by Google on aistudio.google.com.
We therefore assume these 9b llamafiles are also faithful to Google's
intentions. If you encounter any divergences, then try using the BF16
weights, which have the original fidelity.
## See Also
- <https://huggingface.co/Mozilla/gemma-2-2b-it-llamafile>
- <https://huggingface.co/Mozilla/gemma-2-27b-it-llamafile>
## License
The llamafile software is open source and permissively licensed. However
the weights embedded inside the llamafiles are governed by Google's
Gemma License and Gemma Prohibited Use Policy. See the
[LICENSE](LICENSE) file for further details.
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (the model weights will simply be upcast to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First, make sure to install `flash-attn` in your environment: `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
    torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
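For instance, here is a minimal sketch of building that prompt string by hand for a single user turn (unofficial; the `<bos>` token is included up front because the generation snippet below encodes with `add_special_tokens=False`):

```python
# Hand-built Gemma chat prompt for one user turn (sketch, not an official API).
def build_gemma_prompt(user_message: str) -> str:
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write a hello world program")
```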
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a text dataset that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Developers are encouraged to perform continuous
  monitoring (using evaluation metrics and human review) and to explore
  de-biasing techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
{"base_model": "google/gemma-2-9b-it", "library_name": "transformers", "license": "other", "license_link": "LICENSE", "tags": ["llamafile"], "quantized_by": "jartine", "prompt_template": "<start_of_turn>system\n{{prompt}}<end_of_turn>\n{{history}}\n<start_of_turn>{{char}} \n", "history_template": "<start_of_turn>{{name}}\n{{message}}<end_of_turn>\n"}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,122 |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-30T06:42:24Z |
2023-01-30T06:47:50+00:00
| 139 | 0 |
---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- type: spearmanr
value: 0.05629672306471203
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1533
- Pearson: 0.0554
- Spearmanr: 0.0563
- Combined Score: 0.0558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5973 | 1.0 | 45 | 1.2342 | -0.0353 | -0.0325 | -0.0339 |
| 1.0952 | 2.0 | 90 | 1.1740 | 0.0434 | 0.0419 | 0.0426 |
| 1.0581 | 3.0 | 135 | 1.1533 | 0.0554 | 0.0563 | 0.0558 |
| 1.0455 | 4.0 | 180 | 1.2131 | 0.0656 | 0.0690 | 0.0673 |
| 0.9795 | 5.0 | 225 | 1.3883 | 0.0868 | 0.0858 | 0.0863 |
| 0.9197 | 6.0 | 270 | 1.4141 | 0.1181 | 0.1148 | 0.1165 |
| 0.8182 | 7.0 | 315 | 1.3460 | 0.1771 | 0.1853 | 0.1812 |
| 0.6796 | 8.0 | 360 | 1.1577 | 0.2286 | 0.2340 | 0.2313 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1533
- Pearson: 0.0554
- Spearmanr: 0.0563
- Combined Score: 0.0558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5973 | 1.0 | 45 | 1.2342 | -0.0353 | -0.0325 | -0.0339 |
| 1.0952 | 2.0 | 90 | 1.1740 | 0.0434 | 0.0419 | 0.0426 |
| 1.0581 | 3.0 | 135 | 1.1533 | 0.0554 | 0.0563 | 0.0558 |
| 1.0455 | 4.0 | 180 | 1.2131 | 0.0656 | 0.0690 | 0.0673 |
| 0.9795 | 5.0 | 225 | 1.3883 | 0.0868 | 0.0858 | 0.0863 |
| 0.9197 | 6.0 | 270 | 1.4141 | 0.1181 | 0.1148 | 0.1165 |
| 0.8182 | 7.0 | 315 | 1.3460 | 0.1771 | 0.1853 | 0.1812 |
| 0.6796 | 8.0 | 360 | 1.1577 | 0.2286 | 0.2340 | 0.2313 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STSB", "type": "glue", "config": "stsb", "split": "validation", "args": "stsb"}, "metrics": [{"type": "spearmanr", "value": 0.05629672306471203, "name": "Spearmanr"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,123 |
gokulsrinivasagan/bert_base_lda_5_v1_book_qqp
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_5_v1_book",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_5_v1_book",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-10T10:54:13Z |
2024-12-10T12:07:18+00:00
| 20 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_5_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_5_v1_book_qqp
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- type: accuracy
value: 0.883007667573584
name: Accuracy
- type: f1
value: 0.8522613693153424
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_5_v1_book_qqp
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_5_v1_book) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2715
- Accuracy: 0.8830
- F1: 0.8523
- Combined Score: 0.8676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3439 | 1.0 | 1422 | 0.2776 | 0.8779 | 0.8319 | 0.8549 |
| 0.2315 | 2.0 | 2844 | 0.2715 | 0.8830 | 0.8523 | 0.8676 |
| 0.1614 | 3.0 | 4266 | 0.2738 | 0.8950 | 0.8622 | 0.8786 |
| 0.1118 | 4.0 | 5688 | 0.3073 | 0.8937 | 0.8585 | 0.8761 |
| 0.0815 | 5.0 | 7110 | 0.3470 | 0.8996 | 0.8653 | 0.8825 |
| 0.0631 | 6.0 | 8532 | 0.3771 | 0.8963 | 0.8636 | 0.8800 |
| 0.0515 | 7.0 | 9954 | 0.3934 | 0.8972 | 0.8633 | 0.8802 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_5_v1_book_qqp
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_5_v1_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_5_v1_book) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2715
- Accuracy: 0.8830
- F1: 0.8523
- Combined Score: 0.8676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3439 | 1.0 | 1422 | 0.2776 | 0.8779 | 0.8319 | 0.8549 |
| 0.2315 | 2.0 | 2844 | 0.2715 | 0.8830 | 0.8523 | 0.8676 |
| 0.1614 | 3.0 | 4266 | 0.2738 | 0.8950 | 0.8622 | 0.8786 |
| 0.1118 | 4.0 | 5688 | 0.3073 | 0.8937 | 0.8585 | 0.8761 |
| 0.0815 | 5.0 | 7110 | 0.3470 | 0.8996 | 0.8653 | 0.8825 |
| 0.0631 | 6.0 | 8532 | 0.3771 | 0.8963 | 0.8636 | 0.8800 |
| 0.0515 | 7.0 | 9954 | 0.3934 | 0.8972 | 0.8633 | 0.8802 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
{"base_model": "gokulsrinivasagan/bert_base_lda_5_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_5_v1_book_qqp", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE QQP", "type": "glue", "args": "qqp"}, "metrics": [{"type": "accuracy", "value": 0.883007667573584, "name": "Accuracy"}, {"type": "f1", "value": 0.8522613693153424, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,124 |
jnwulff/distilbert-base-uncased-finetuned-clinc
|
jnwulff
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-11T17:43:04Z |
2024-07-11T17:49:03+00:00
| 104 | 0 |
---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- type: accuracy
value: 0.9180645161290323
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2895 | 1.0 | 318 | 3.2884 | 0.7419 |
| 2.6277 | 2.0 | 636 | 1.8751 | 0.8368 |
| 1.5479 | 3.0 | 954 | 1.1569 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7721 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2895 | 1.0 | 318 | 3.2884 | 0.7419 |
| 2.6277 | 2.0 | 636 | 1.8751 | 0.8368 |
| 1.5479 | 3.0 | 954 | 1.1569 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7721 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
|
{"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9180645161290323, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,125 |
RichardErkhov/deepset_-_xlm-roberta-base-squad2-8bits
|
RichardErkhov
|
text-generation
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 2024-05-09T00:36:31Z |
2024-05-09T00:41:50+00:00
| 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
xlm-roberta-base-squad2 - bnb 8bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/xlm-roberta-base-squad2/
Original model description:
---
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/xlm-roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 74.0354
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMxNWQ2ODJkNWIzZGQwOWI4OTZjYjU3ZDVjZGQzMjI5MzljNjliZTY4Mzk4YTk4OTMzZWYxZjUxYmZhYTBhZSIsInZlcnNpb24iOjF9.eEeFYYJ30BfJDd-JYfI1kjlxJrRF6OFtj2GnkTCOO4kqX31inFy8ptDWusVlLFsUphm4dNWfTKXC5e-gytLBDA
- type: f1
value: 77.1833
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjg4MjNkOTA4Y2I5OGFlYTk1NWZjMWFlNjI5M2Y0NGZhMThhN2M4YmY2Y2RhZjcwYzU0MGNjN2RkZDljZmJmNiIsInZlcnNpb24iOjF9.TX42YMXpH4e0qu7cC4ARDlZWSkd55dwwyeyFXmOlXERNnEicDuFBCsy8WHLaqQCLUkzODJ22Hw4zhv81rwnlAQ
---
# Multilingual XLM-RoBERTa base for QA on various languages
## Overview
**Language model:** xlm-roberta-base
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0 dev set - German MLQA - German XQuAD
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 22*4
n_epochs = 2
max_seq_len=256,
doc_stride=128,
learning_rate=2e-5,
```
Corresponding experiment logs in mlflow: [link](https://public-mlflow.deepset.ai/#/experiments/2/runs/b25ec75e07614accb3f1ce03d43dbe08)
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 73.91560683904657
"f1": 77.14103746689592
```
Evaluated on German MLQA: test-context-de-question-de.json
```
"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517
```
Evaluated on German XQuAD: xquad.de.json
```
"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
```
## Usage
### In Transformers
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry-specific language models & large-scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| null |
Non_BioNLP
|
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
xlm-roberta-base-squad2 - bnb 8bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/xlm-roberta-base-squad2/
Original model description:
---
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/xlm-roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 74.0354
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMxNWQ2ODJkNWIzZGQwOWI4OTZjYjU3ZDVjZGQzMjI5MzljNjliZTY4Mzk4YTk4OTMzZWYxZjUxYmZhYTBhZSIsInZlcnNpb24iOjF9.eEeFYYJ30BfJDd-JYfI1kjlxJrRF6OFtj2GnkTCOO4kqX31inFy8ptDWusVlLFsUphm4dNWfTKXC5e-gytLBDA
- type: f1
value: 77.1833
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjg4MjNkOTA4Y2I5OGFlYTk1NWZjMWFlNjI5M2Y0NGZhMThhN2M4YmY2Y2RhZjcwYzU0MGNjN2RkZDljZmJmNiIsInZlcnNpb24iOjF9.TX42YMXpH4e0qu7cC4ARDlZWSkd55dwwyeyFXmOlXERNnEicDuFBCsy8WHLaqQCLUkzODJ22Hw4zhv81rwnlAQ
---
# Multilingual XLM-RoBERTa base for QA on various languages
## Overview
**Language model:** xlm-roberta-base
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0 dev set - German MLQA - German XQuAD
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 22*4
n_epochs = 2
max_seq_len=256,
doc_stride=128,
learning_rate=2e-5,
```
Corresponding experiment logs in mlflow: [link](https://public-mlflow.deepset.ai/#/experiments/2/runs/b25ec75e07614accb3f1ce03d43dbe08)
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 73.91560683904657
"f1": 77.14103746689592
```
Evaluated on German MLQA: test-context-de-question-de.json
```
"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517
```
Evaluated on German XQuAD: xquad.de.json
```
"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
```
## Usage
### In Transformers
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry-specific language models & large-scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,126 |
TransferGraph/joebobby_finetuning-sentiment-model-5000-samples3-finetuned-lora-ag_news
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:ag_news",
"base_model:joebobby/finetuning-sentiment-model-5000-samples3",
"base_model:adapter:joebobby/finetuning-sentiment-model-5000-samples3",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-28T00:31:11Z |
2024-02-28T00:31:13+00:00
| 0 | 0 |
---
base_model: joebobby/finetuning-sentiment-model-5000-samples3
datasets:
- ag_news
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: joebobby_finetuning-sentiment-model-5000-samples3-finetuned-lora-ag_news
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.9313157894736842
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# joebobby_finetuning-sentiment-model-5000-samples3-finetuned-lora-ag_news
This model is a fine-tuned version of [joebobby/finetuning-sentiment-model-5000-samples3](https://huggingface.co/joebobby/finetuning-sentiment-model-5000-samples3) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2622 | None | 0 |
| 0.9203 | 0.3016 | 0 |
| 0.9247 | 0.2201 | 1 |
| 0.9309 | 0.2010 | 2 |
| 0.9313 | 0.1902 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# joebobby_finetuning-sentiment-model-5000-samples3-finetuned-lora-ag_news
This model is a fine-tuned version of [joebobby/finetuning-sentiment-model-5000-samples3](https://huggingface.co/joebobby/finetuning-sentiment-model-5000-samples3) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2622 | None | 0 |
| 0.9203 | 0.3016 | 0 |
| 0.9247 | 0.2201 | 1 |
| 0.9309 | 0.2010 | 2 |
| 0.9313 | 0.1902 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
{"base_model": "joebobby/finetuning-sentiment-model-5000-samples3", "datasets": ["ag_news"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "joebobby_finetuning-sentiment-model-5000-samples3-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9313157894736842, "name": "accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,127 |
aroot/eng-guj-simcse_longest_ssblu
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-07T04:01:17Z |
2023-07-07T04:23:29+00:00
| 15 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: eng-guj-simcse_longest_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2244
- Bleu: 2.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2244
- Bleu: 2.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-guj-simcse_longest_ssblu", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,128 |
sobamchan/st5-base-mean-12000
|
sobamchan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"t5",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-27T17:08:35Z |
2025-02-27T17:09:18+00:00
| 8 | 0 |
---
base_model: google-t5/t5-base
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: A man is jumping unto his filthy bed.
sentences:
- A young male is looking at a newspaper while 2 females walks past him.
- The bed is dirty.
- The man is on the moon.
- source_sentence: A carefully balanced male stands on one foot near a clean ocean
beach area.
sentences:
- A man is ouside near the beach.
- Three policemen patrol the streets on bikes
- A man is sitting on his couch.
- source_sentence: The man is wearing a blue shirt.
sentences:
- Near the trashcan the man stood and smoked
- A man in a blue shirt leans on a wall beside a road with a blue van and red car
with water in the background.
- A man in a black shirt is playing a guitar.
- source_sentence: The girls are outdoors.
sentences:
- Two girls riding on an amusement part ride.
- a guy laughs while doing laundry
- Three girls are standing together in a room, one is listening, one is writing
on a wall and the third is talking to them.
- source_sentence: A construction worker peeking out of a manhole while his coworker
sits on the sidewalk smiling.
sentences:
- A worker is looking out of a manhole.
- A man is giving a presentation.
- The workers are both inside the manhole.
---
# SentenceTransformer based on google-t5/t5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) <!-- at revision a9723ea7f1b39c1eae772870f3b547bf6ef7e6c1 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
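For reference, the `Pooling` (mean) and `Normalize` stages above amount to mask-aware averaging followed by L2 normalization. A rough, unofficial PyTorch sketch (assuming a Hugging Face-style `attention_mask`):

```python
import torch
import torch.nn.functional as F

def mean_pool_and_normalize(token_embeddings: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings over non-padding positions, then L2-normalize,
    # mirroring the Pooling(mean) + Normalize() modules shown above.
    mask = attention_mask.unsqueeze(-1).float()        # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)      # (batch, dim)
    counts = mask.sum(dim=1).clamp(min=1e-9)           # avoid division by zero
    return F.normalize(summed / counts, dim=-1)        # (batch, 768)
```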
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.',
'A worker is looking out of a manhole.',
'The workers are both inside the manhole.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.96 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.79 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.02 tokens</li><li>max: 57 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
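With these parameters, the loss treats every other example in the batch as an additional negative: for anchor *i*, the matching positive is candidate *i*, while all other positives and the explicit negatives are pushed away via cross-entropy over scaled cosine similarities. A minimal illustrative re-implementation (not the library's code):
```python
# Illustrative sketch of MultipleNegativesRankingLoss with scale=20.0
# and cosine similarity; not the Sentence Transformers implementation.
import torch
import torch.nn.functional as F

def mnr_loss(anchors, positives, negatives, scale=20.0):
    # anchors, positives, negatives: (B, D) embedding tensors
    candidates = torch.cat([positives, negatives], dim=0)          # (2B, D)
    scores = F.cosine_similarity(anchors.unsqueeze(1),
                                 candidates.unsqueeze(0), dim=-1)  # (B, 2B)
    labels = torch.arange(anchors.size(0))  # anchor i matches candidate i
    return F.cross_entropy(scores * scale, labels)
```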
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 19.41 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.69 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.35 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
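As a rough reproduction sketch, the non-default values above map onto the Sentence Transformers v3 training API as follows; the dataset split names, output path, and model assembly are assumptions inferred from this card rather than a verified training script:
```python
# Hypothetical reproduction of this run from the hyperparameters above.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer, SentenceTransformerTrainer, models,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import (
    BatchSamplers, SentenceTransformerTrainingArguments,
)

# Assemble Transformer -> mean Pooling -> Normalize, as in the architecture section.
word = models.Transformer("google-t5/t5-base", max_seq_length=256)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pooling, models.Normalize()])

dataset = load_dataset("sentence-transformers/all-nli", "triplet")  # anchor/positive/negative

args = SentenceTransformerTrainingArguments(
    output_dir="t5-base-all-nli",  # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```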
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0011 | 10 | - | 1.8733 |
| 0.0023 | 20 | - | 1.8726 |
| 0.0034 | 30 | - | 1.8714 |
| 0.0046 | 40 | - | 1.8697 |
| 0.0057 | 50 | - | 1.8675 |
| 0.0069 | 60 | - | 1.8649 |
| 0.0080 | 70 | - | 1.8619 |
| 0.0092 | 80 | - | 1.8584 |
| 0.0103 | 90 | - | 1.8544 |
| 0.0115 | 100 | 3.1046 | 1.8499 |
| 0.0126 | 110 | - | 1.8451 |
| 0.0138 | 120 | - | 1.8399 |
| 0.0149 | 130 | - | 1.8343 |
| 0.0161 | 140 | - | 1.8283 |
| 0.0172 | 150 | - | 1.8223 |
| 0.0184 | 160 | - | 1.8159 |
| 0.0195 | 170 | - | 1.8091 |
| 0.0206 | 180 | - | 1.8016 |
| 0.0218 | 190 | - | 1.7938 |
| 0.0229 | 200 | 3.0303 | 1.7858 |
| 0.0241 | 210 | - | 1.7775 |
| 0.0252 | 220 | - | 1.7693 |
| 0.0264 | 230 | - | 1.7605 |
| 0.0275 | 240 | - | 1.7514 |
| 0.0287 | 250 | - | 1.7417 |
| 0.0298 | 260 | - | 1.7320 |
| 0.0310 | 270 | - | 1.7227 |
| 0.0321 | 280 | - | 1.7134 |
| 0.0333 | 290 | - | 1.7040 |
| 0.0344 | 300 | 2.9459 | 1.6941 |
| 0.0356 | 310 | - | 1.6833 |
| 0.0367 | 320 | - | 1.6725 |
| 0.0379 | 330 | - | 1.6614 |
| 0.0390 | 340 | - | 1.6510 |
| 0.0402 | 350 | - | 1.6402 |
| 0.0413 | 360 | - | 1.6296 |
| 0.0424 | 370 | - | 1.6187 |
| 0.0436 | 380 | - | 1.6073 |
| 0.0447 | 390 | - | 1.5962 |
| 0.0459 | 400 | 2.7813 | 1.5848 |
| 0.0470 | 410 | - | 1.5735 |
| 0.0482 | 420 | - | 1.5620 |
| 0.0493 | 430 | - | 1.5495 |
| 0.0505 | 440 | - | 1.5375 |
| 0.0516 | 450 | - | 1.5256 |
| 0.0528 | 460 | - | 1.5133 |
| 0.0539 | 470 | - | 1.5012 |
| 0.0551 | 480 | - | 1.4892 |
| 0.0562 | 490 | - | 1.4769 |
| 0.0574 | 500 | 2.6308 | 1.4640 |
| 0.0585 | 510 | - | 1.4513 |
| 0.0597 | 520 | - | 1.4391 |
| 0.0608 | 530 | - | 1.4262 |
| 0.0619 | 540 | - | 1.4130 |
| 0.0631 | 550 | - | 1.3998 |
| 0.0642 | 560 | - | 1.3874 |
| 0.0654 | 570 | - | 1.3752 |
| 0.0665 | 580 | - | 1.3620 |
| 0.0677 | 590 | - | 1.3485 |
| 0.0688 | 600 | 2.4452 | 1.3350 |
| 0.0700 | 610 | - | 1.3213 |
| 0.0711 | 620 | - | 1.3088 |
| 0.0723 | 630 | - | 1.2965 |
| 0.0734 | 640 | - | 1.2839 |
| 0.0746 | 650 | - | 1.2713 |
| 0.0757 | 660 | - | 1.2592 |
| 0.0769 | 670 | - | 1.2466 |
| 0.0780 | 680 | - | 1.2332 |
| 0.0792 | 690 | - | 1.2203 |
| 0.0803 | 700 | 2.2626 | 1.2077 |
| 0.0815 | 710 | - | 1.1959 |
| 0.0826 | 720 | - | 1.1841 |
| 0.0837 | 730 | - | 1.1725 |
| 0.0849 | 740 | - | 1.1619 |
| 0.0860 | 750 | - | 1.1516 |
| 0.0872 | 760 | - | 1.1416 |
| 0.0883 | 770 | - | 1.1320 |
| 0.0895 | 780 | - | 1.1227 |
| 0.0906 | 790 | - | 1.1138 |
| 0.0918 | 800 | 2.0044 | 1.1053 |
| 0.0929 | 810 | - | 1.0965 |
| 0.0941 | 820 | - | 1.0879 |
| 0.0952 | 830 | - | 1.0796 |
| 0.0964 | 840 | - | 1.0718 |
| 0.0975 | 850 | - | 1.0644 |
| 0.0987 | 860 | - | 1.0564 |
| 0.0998 | 870 | - | 1.0490 |
| 0.1010 | 880 | - | 1.0417 |
| 0.1021 | 890 | - | 1.0354 |
| 0.1032 | 900 | 1.8763 | 1.0296 |
| 0.1044 | 910 | - | 1.0239 |
| 0.1055 | 920 | - | 1.0180 |
| 0.1067 | 930 | - | 1.0123 |
| 0.1078 | 940 | - | 1.0065 |
| 0.1090 | 950 | - | 1.0008 |
| 0.1101 | 960 | - | 0.9950 |
| 0.1113 | 970 | - | 0.9894 |
| 0.1124 | 980 | - | 0.9840 |
| 0.1136 | 990 | - | 0.9793 |
| 0.1147 | 1000 | 1.7287 | 0.9752 |
| 0.1159 | 1010 | - | 0.9706 |
| 0.1170 | 1020 | - | 0.9659 |
| 0.1182 | 1030 | - | 0.9615 |
| 0.1193 | 1040 | - | 0.9572 |
| 0.1205 | 1050 | - | 0.9531 |
| 0.1216 | 1060 | - | 0.9494 |
| 0.1227 | 1070 | - | 0.9456 |
| 0.1239 | 1080 | - | 0.9415 |
| 0.1250 | 1090 | - | 0.9377 |
| 0.1262 | 1100 | 1.6312 | 0.9339 |
| 0.1273 | 1110 | - | 0.9303 |
| 0.1285 | 1120 | - | 0.9267 |
| 0.1296 | 1130 | - | 0.9232 |
| 0.1308 | 1140 | - | 0.9197 |
| 0.1319 | 1150 | - | 0.9162 |
| 0.1331 | 1160 | - | 0.9128 |
| 0.1342 | 1170 | - | 0.9097 |
| 0.1354 | 1180 | - | 0.9069 |
| 0.1365 | 1190 | - | 0.9040 |
| 0.1377 | 1200 | 1.5316 | 0.9010 |
| 0.1388 | 1210 | - | 0.8979 |
| 0.1400 | 1220 | - | 0.8947 |
| 0.1411 | 1230 | - | 0.8915 |
| 0.1423 | 1240 | - | 0.8888 |
| 0.1434 | 1250 | - | 0.8861 |
| 0.1445 | 1260 | - | 0.8833 |
| 0.1457 | 1270 | - | 0.8806 |
| 0.1468 | 1280 | - | 0.8779 |
| 0.1480 | 1290 | - | 0.8748 |
| 0.1491 | 1300 | 1.4961 | 0.8718 |
| 0.1503 | 1310 | - | 0.8690 |
| 0.1514 | 1320 | - | 0.8664 |
| 0.1526 | 1330 | - | 0.8635 |
| 0.1537 | 1340 | - | 0.8603 |
| 0.1549 | 1350 | - | 0.8574 |
| 0.1560 | 1360 | - | 0.8545 |
| 0.1572 | 1370 | - | 0.8521 |
| 0.1583 | 1380 | - | 0.8497 |
| 0.1595 | 1390 | - | 0.8474 |
| 0.1606 | 1400 | 1.451 | 0.8453 |
| 0.1618 | 1410 | - | 0.8429 |
| 0.1629 | 1420 | - | 0.8404 |
| 0.1640 | 1430 | - | 0.8380 |
| 0.1652 | 1440 | - | 0.8357 |
| 0.1663 | 1450 | - | 0.8336 |
| 0.1675 | 1460 | - | 0.8312 |
| 0.1686 | 1470 | - | 0.8289 |
| 0.1698 | 1480 | - | 0.8262 |
| 0.1709 | 1490 | - | 0.8236 |
| 0.1721 | 1500 | 1.4177 | 0.8213 |
| 0.1732 | 1510 | - | 0.8189 |
| 0.1744 | 1520 | - | 0.8168 |
| 0.1755 | 1530 | - | 0.8147 |
| 0.1767 | 1540 | - | 0.8127 |
| 0.1778 | 1550 | - | 0.8107 |
| 0.1790 | 1560 | - | 0.8082 |
| 0.1801 | 1570 | - | 0.8059 |
| 0.1813 | 1580 | - | 0.8036 |
| 0.1824 | 1590 | - | 0.8015 |
| 0.1835 | 1600 | 1.3734 | 0.7993 |
| 0.1847 | 1610 | - | 0.7970 |
| 0.1858 | 1620 | - | 0.7948 |
| 0.1870 | 1630 | - | 0.7922 |
| 0.1881 | 1640 | - | 0.7900 |
| 0.1893 | 1650 | - | 0.7877 |
| 0.1904 | 1660 | - | 0.7852 |
| 0.1916 | 1670 | - | 0.7829 |
| 0.1927 | 1680 | - | 0.7804 |
| 0.1939 | 1690 | - | 0.7779 |
| 0.1950 | 1700 | 1.3327 | 0.7757 |
| 0.1962 | 1710 | - | 0.7738 |
| 0.1973 | 1720 | - | 0.7719 |
| 0.1985 | 1730 | - | 0.7700 |
| 0.1996 | 1740 | - | 0.7679 |
| 0.2008 | 1750 | - | 0.7658 |
| 0.2019 | 1760 | - | 0.7641 |
| 0.2031 | 1770 | - | 0.7621 |
| 0.2042 | 1780 | - | 0.7601 |
| 0.2053 | 1790 | - | 0.7580 |
| 0.2065 | 1800 | 1.2804 | 0.7558 |
| 0.2076 | 1810 | - | 0.7536 |
| 0.2088 | 1820 | - | 0.7514 |
| 0.2099 | 1830 | - | 0.7493 |
| 0.2111 | 1840 | - | 0.7473 |
| 0.2122 | 1850 | - | 0.7451 |
| 0.2134 | 1860 | - | 0.7429 |
| 0.2145 | 1870 | - | 0.7408 |
| 0.2157 | 1880 | - | 0.7389 |
| 0.2168 | 1890 | - | 0.7368 |
| 0.2180 | 1900 | 1.2255 | 0.7349 |
| 0.2191 | 1910 | - | 0.7328 |
| 0.2203 | 1920 | - | 0.7310 |
| 0.2214 | 1930 | - | 0.7293 |
| 0.2226 | 1940 | - | 0.7277 |
| 0.2237 | 1950 | - | 0.7259 |
| 0.2248 | 1960 | - | 0.7240 |
| 0.2260 | 1970 | - | 0.7221 |
| 0.2271 | 1980 | - | 0.7203 |
| 0.2283 | 1990 | - | 0.7184 |
| 0.2294 | 2000 | 1.2635 | 0.7165 |
| 0.2306 | 2010 | - | 0.7150 |
| 0.2317 | 2020 | - | 0.7135 |
| 0.2329 | 2030 | - | 0.7117 |
| 0.2340 | 2040 | - | 0.7099 |
| 0.2352 | 2050 | - | 0.7084 |
| 0.2363 | 2060 | - | 0.7068 |
| 0.2375 | 2070 | - | 0.7054 |
| 0.2386 | 2080 | - | 0.7037 |
| 0.2398 | 2090 | - | 0.7023 |
| 0.2409 | 2100 | 1.1912 | 0.7009 |
| 0.2421 | 2110 | - | 0.6991 |
| 0.2432 | 2120 | - | 0.6974 |
| 0.2444 | 2130 | - | 0.6962 |
| 0.2455 | 2140 | - | 0.6950 |
| 0.2466 | 2150 | - | 0.6938 |
| 0.2478 | 2160 | - | 0.6922 |
| 0.2489 | 2170 | - | 0.6909 |
| 0.2501 | 2180 | - | 0.6897 |
| 0.2512 | 2190 | - | 0.6884 |
| 0.2524 | 2200 | 1.2144 | 0.6868 |
| 0.2535 | 2210 | - | 0.6856 |
| 0.2547 | 2220 | - | 0.6843 |
| 0.2558 | 2230 | - | 0.6829 |
| 0.2570 | 2240 | - | 0.6817 |
| 0.2581 | 2250 | - | 0.6804 |
| 0.2593 | 2260 | - | 0.6789 |
| 0.2604 | 2270 | - | 0.6775 |
| 0.2616 | 2280 | - | 0.6763 |
| 0.2627 | 2290 | - | 0.6751 |
| 0.2639 | 2300 | 1.1498 | 0.6739 |
| 0.2650 | 2310 | - | 0.6725 |
| 0.2661 | 2320 | - | 0.6711 |
| 0.2673 | 2330 | - | 0.6698 |
| 0.2684 | 2340 | - | 0.6684 |
| 0.2696 | 2350 | - | 0.6666 |
| 0.2707 | 2360 | - | 0.6653 |
| 0.2719 | 2370 | - | 0.6638 |
| 0.2730 | 2380 | - | 0.6621 |
| 0.2742 | 2390 | - | 0.6609 |
| 0.2753 | 2400 | 1.1446 | 0.6596 |
| 0.2765 | 2410 | - | 0.6582 |
| 0.2776 | 2420 | - | 0.6568 |
| 0.2788 | 2430 | - | 0.6553 |
| 0.2799 | 2440 | - | 0.6541 |
| 0.2811 | 2450 | - | 0.6527 |
| 0.2822 | 2460 | - | 0.6513 |
| 0.2834 | 2470 | - | 0.6496 |
| 0.2845 | 2480 | - | 0.6483 |
| 0.2856 | 2490 | - | 0.6475 |
| 0.2868 | 2500 | 1.1309 | 0.6465 |
| 0.2879 | 2510 | - | 0.6455 |
| 0.2891 | 2520 | - | 0.6447 |
| 0.2902 | 2530 | - | 0.6437 |
| 0.2914 | 2540 | - | 0.6428 |
| 0.2925 | 2550 | - | 0.6415 |
| 0.2937 | 2560 | - | 0.6403 |
| 0.2948 | 2570 | - | 0.6392 |
| 0.2960 | 2580 | - | 0.6381 |
| 0.2971 | 2590 | - | 0.6371 |
| 0.2983 | 2600 | 1.1006 | 0.6358 |
| 0.2994 | 2610 | - | 0.6348 |
| 0.3006 | 2620 | - | 0.6340 |
| 0.3017 | 2630 | - | 0.6330 |
| 0.3029 | 2640 | - | 0.6319 |
| 0.3040 | 2650 | - | 0.6308 |
| 0.3052 | 2660 | - | 0.6300 |
| 0.3063 | 2670 | - | 0.6291 |
| 0.3074 | 2680 | - | 0.6280 |
| 0.3086 | 2690 | - | 0.6268 |
| 0.3097 | 2700 | 1.0772 | 0.6254 |
| 0.3109 | 2710 | - | 0.6243 |
| 0.3120 | 2720 | - | 0.6232 |
| 0.3132 | 2730 | - | 0.6224 |
| 0.3143 | 2740 | - | 0.6215 |
| 0.3155 | 2750 | - | 0.6205 |
| 0.3166 | 2760 | - | 0.6194 |
| 0.3178 | 2770 | - | 0.6183 |
| 0.3189 | 2780 | - | 0.6171 |
| 0.3201 | 2790 | - | 0.6160 |
| 0.3212 | 2800 | 1.0648 | 0.6153 |
| 0.3224 | 2810 | - | 0.6141 |
| 0.3235 | 2820 | - | 0.6129 |
| 0.3247 | 2830 | - | 0.6119 |
| 0.3258 | 2840 | - | 0.6109 |
| 0.3269 | 2850 | - | 0.6099 |
| 0.3281 | 2860 | - | 0.6088 |
| 0.3292 | 2870 | - | 0.6079 |
| 0.3304 | 2880 | - | 0.6073 |
| 0.3315 | 2890 | - | 0.6063 |
| 0.3327 | 2900 | 1.0398 | 0.6054 |
| 0.3338 | 2910 | - | 0.6044 |
| 0.3350 | 2920 | - | 0.6033 |
| 0.3361 | 2930 | - | 0.6022 |
| 0.3373 | 2940 | - | 0.6012 |
| 0.3384 | 2950 | - | 0.6003 |
| 0.3396 | 2960 | - | 0.5993 |
| 0.3407 | 2970 | - | 0.5986 |
| 0.3419 | 2980 | - | 0.5978 |
| 0.3430 | 2990 | - | 0.5967 |
| 0.3442 | 3000 | 1.0256 | 0.5959 |
| 0.3453 | 3010 | - | 0.5947 |
| 0.3464 | 3020 | - | 0.5937 |
| 0.3476 | 3030 | - | 0.5929 |
| 0.3487 | 3040 | - | 0.5920 |
| 0.3499 | 3050 | - | 0.5908 |
| 0.3510 | 3060 | - | 0.5897 |
| 0.3522 | 3070 | - | 0.5888 |
| 0.3533 | 3080 | - | 0.5882 |
| 0.3545 | 3090 | - | 0.5874 |
| 0.3556 | 3100 | 1.0489 | 0.5868 |
| 0.3568 | 3110 | - | 0.5860 |
| 0.3579 | 3120 | - | 0.5854 |
| 0.3591 | 3130 | - | 0.5839 |
| 0.3602 | 3140 | - | 0.5830 |
| 0.3614 | 3150 | - | 0.5822 |
| 0.3625 | 3160 | - | 0.5814 |
| 0.3637 | 3170 | - | 0.5808 |
| 0.3648 | 3180 | - | 0.5802 |
| 0.3660 | 3190 | - | 0.5794 |
| 0.3671 | 3200 | 1.038 | 0.5788 |
| 0.3682 | 3210 | - | 0.5778 |
| 0.3694 | 3220 | - | 0.5770 |
| 0.3705 | 3230 | - | 0.5763 |
| 0.3717 | 3240 | - | 0.5752 |
| 0.3728 | 3250 | - | 0.5745 |
| 0.3740 | 3260 | - | 0.5737 |
| 0.3751 | 3270 | - | 0.5728 |
| 0.3763 | 3280 | - | 0.5720 |
| 0.3774 | 3290 | - | 0.5713 |
| 0.3786 | 3300 | 1.0058 | 0.5707 |
| 0.3797 | 3310 | - | 0.5700 |
| 0.3809 | 3320 | - | 0.5690 |
| 0.3820 | 3330 | - | 0.5681 |
| 0.3832 | 3340 | - | 0.5673 |
| 0.3843 | 3350 | - | 0.5669 |
| 0.3855 | 3360 | - | 0.5667 |
| 0.3866 | 3370 | - | 0.5665 |
| 0.3877 | 3380 | - | 0.5659 |
| 0.3889 | 3390 | - | 0.5650 |
| 0.3900 | 3400 | 1.0413 | 0.5645 |
| 0.3912 | 3410 | - | 0.5641 |
| 0.3923 | 3420 | - | 0.5635 |
| 0.3935 | 3430 | - | 0.5629 |
| 0.3946 | 3440 | - | 0.5622 |
| 0.3958 | 3450 | - | 0.5617 |
| 0.3969 | 3460 | - | 0.5614 |
| 0.3981 | 3470 | - | 0.5607 |
| 0.3992 | 3480 | - | 0.5603 |
| 0.4004 | 3490 | - | 0.5598 |
| 0.4015 | 3500 | 0.938 | 0.5596 |
| 0.4027 | 3510 | - | 0.5589 |
| 0.4038 | 3520 | - | 0.5581 |
| 0.4050 | 3530 | - | 0.5571 |
| 0.4061 | 3540 | - | 0.5563 |
| 0.4073 | 3550 | - | 0.5557 |
| 0.4084 | 3560 | - | 0.5551 |
| 0.4095 | 3570 | - | 0.5546 |
| 0.4107 | 3580 | - | 0.5541 |
| 0.4118 | 3590 | - | 0.5535 |
| 0.4130 | 3600 | 0.955 | 0.5528 |
| 0.4141 | 3610 | - | 0.5522 |
| 0.4153 | 3620 | - | 0.5516 |
| 0.4164 | 3630 | - | 0.5509 |
| 0.4176 | 3640 | - | 0.5503 |
| 0.4187 | 3650 | - | 0.5495 |
| 0.4199 | 3660 | - | 0.5490 |
| 0.4210 | 3670 | - | 0.5481 |
| 0.4222 | 3680 | - | 0.5475 |
| 0.4233 | 3690 | - | 0.5467 |
| 0.4245 | 3700 | 0.9387 | 0.5463 |
| 0.4256 | 3710 | - | 0.5459 |
| 0.4268 | 3720 | - | 0.5452 |
| 0.4279 | 3730 | - | 0.5448 |
| 0.4290 | 3740 | - | 0.5443 |
| 0.4302 | 3750 | - | 0.5440 |
| 0.4313 | 3760 | - | 0.5435 |
| 0.4325 | 3770 | - | 0.5430 |
| 0.4336 | 3780 | - | 0.5423 |
| 0.4348 | 3790 | - | 0.5418 |
| 0.4359 | 3800 | 0.9672 | 0.5415 |
| 0.4371 | 3810 | - | 0.5413 |
| 0.4382 | 3820 | - | 0.5410 |
| 0.4394 | 3830 | - | 0.5406 |
| 0.4405 | 3840 | - | 0.5403 |
| 0.4417 | 3850 | - | 0.5397 |
| 0.4428 | 3860 | - | 0.5394 |
| 0.4440 | 3870 | - | 0.5386 |
| 0.4451 | 3880 | - | 0.5378 |
| 0.4463 | 3890 | - | 0.5370 |
| 0.4474 | 3900 | 0.926 | 0.5360 |
| 0.4485 | 3910 | - | 0.5351 |
| 0.4497 | 3920 | - | 0.5346 |
| 0.4508 | 3930 | - | 0.5343 |
| 0.4520 | 3940 | - | 0.5339 |
| 0.4531 | 3950 | - | 0.5337 |
| 0.4543 | 3960 | - | 0.5334 |
| 0.4554 | 3970 | - | 0.5330 |
| 0.4566 | 3980 | - | 0.5327 |
| 0.4577 | 3990 | - | 0.5324 |
| 0.4589 | 4000 | 0.867 | 0.5319 |
| 0.4600 | 4010 | - | 0.5313 |
| 0.4612 | 4020 | - | 0.5308 |
| 0.4623 | 4030 | - | 0.5300 |
| 0.4635 | 4040 | - | 0.5293 |
| 0.4646 | 4050 | - | 0.5287 |
| 0.4658 | 4060 | - | 0.5284 |
| 0.4669 | 4070 | - | 0.5281 |
| 0.4681 | 4080 | - | 0.5277 |
| 0.4692 | 4090 | - | 0.5272 |
| 0.4703 | 4100 | 0.916 | 0.5267 |
| 0.4715 | 4110 | - | 0.5260 |
| 0.4726 | 4120 | - | 0.5252 |
| 0.4738 | 4130 | - | 0.5246 |
| 0.4749 | 4140 | - | 0.5239 |
| 0.4761 | 4150 | - | 0.5232 |
| 0.4772 | 4160 | - | 0.5225 |
| 0.4784 | 4170 | - | 0.5221 |
| 0.4795 | 4180 | - | 0.5216 |
| 0.4807 | 4190 | - | 0.5211 |
| 0.4818 | 4200 | 0.9667 | 0.5206 |
| 0.4830 | 4210 | - | 0.5204 |
| 0.4841 | 4220 | - | 0.5200 |
| 0.4853 | 4230 | - | 0.5192 |
| 0.4864 | 4240 | - | 0.5187 |
| 0.4876 | 4250 | - | 0.5185 |
| 0.4887 | 4260 | - | 0.5179 |
| 0.4898 | 4270 | - | 0.5173 |
| 0.4910 | 4280 | - | 0.5170 |
| 0.4921 | 4290 | - | 0.5165 |
| 0.4933 | 4300 | 0.9276 | 0.5160 |
| 0.4944 | 4310 | - | 0.5154 |
| 0.4956 | 4320 | - | 0.5150 |
| 0.4967 | 4330 | - | 0.5144 |
| 0.4979 | 4340 | - | 0.5141 |
| 0.4990 | 4350 | - | 0.5139 |
| 0.5002 | 4360 | - | 0.5138 |
| 0.5013 | 4370 | - | 0.5136 |
| 0.5025 | 4380 | - | 0.5133 |
| 0.5036 | 4390 | - | 0.5129 |
| 0.5048 | 4400 | 0.9331 | 0.5126 |
| 0.5059 | 4410 | - | 0.5123 |
| 0.5071 | 4420 | - | 0.5117 |
| 0.5082 | 4430 | - | 0.5113 |
| 0.5093 | 4440 | - | 0.5108 |
| 0.5105 | 4450 | - | 0.5106 |
| 0.5116 | 4460 | - | 0.5106 |
| 0.5128 | 4470 | - | 0.5106 |
| 0.5139 | 4480 | - | 0.5104 |
| 0.5151 | 4490 | - | 0.5102 |
| 0.5162 | 4500 | 0.907 | 0.5097 |
| 0.5174 | 4510 | - | 0.5092 |
| 0.5185 | 4520 | - | 0.5086 |
| 0.5197 | 4530 | - | 0.5082 |
| 0.5208 | 4540 | - | 0.5079 |
| 0.5220 | 4550 | - | 0.5075 |
| 0.5231 | 4560 | - | 0.5071 |
| 0.5243 | 4570 | - | 0.5067 |
| 0.5254 | 4580 | - | 0.5066 |
| 0.5266 | 4590 | - | 0.5062 |
| 0.5277 | 4600 | 0.913 | 0.5059 |
| 0.5289 | 4610 | - | 0.5056 |
| 0.5300 | 4620 | - | 0.5052 |
| 0.5311 | 4630 | - | 0.5046 |
| 0.5323 | 4640 | - | 0.5039 |
| 0.5334 | 4650 | - | 0.5033 |
| 0.5346 | 4660 | - | 0.5030 |
| 0.5357 | 4670 | - | 0.5028 |
| 0.5369 | 4680 | - | 0.5027 |
| 0.5380 | 4690 | - | 0.5023 |
| 0.5392 | 4700 | 0.9047 | 0.5020 |
| 0.5403 | 4710 | - | 0.5018 |
| 0.5415 | 4720 | - | 0.5015 |
| 0.5426 | 4730 | - | 0.5009 |
| 0.5438 | 4740 | - | 0.5003 |
| 0.5449 | 4750 | - | 0.4997 |
| 0.5461 | 4760 | - | 0.4991 |
| 0.5472 | 4770 | - | 0.4984 |
| 0.5484 | 4780 | - | 0.4980 |
| 0.5495 | 4790 | - | 0.4980 |
| 0.5506 | 4800 | 0.887 | 0.4979 |
| 0.5518 | 4810 | - | 0.4975 |
| 0.5529 | 4820 | - | 0.4973 |
| 0.5541 | 4830 | - | 0.4969 |
| 0.5552 | 4840 | - | 0.4966 |
| 0.5564 | 4850 | - | 0.4964 |
| 0.5575 | 4860 | - | 0.4964 |
| 0.5587 | 4870 | - | 0.4960 |
| 0.5598 | 4880 | - | 0.4957 |
| 0.5610 | 4890 | - | 0.4955 |
| 0.5621 | 4900 | 0.8645 | 0.4952 |
| 0.5633 | 4910 | - | 0.4950 |
| 0.5644 | 4920 | - | 0.4952 |
| 0.5656 | 4930 | - | 0.4949 |
| 0.5667 | 4940 | - | 0.4943 |
| 0.5679 | 4950 | - | 0.4938 |
| 0.5690 | 4960 | - | 0.4936 |
| 0.5702 | 4970 | - | 0.4933 |
| 0.5713 | 4980 | - | 0.4931 |
| 0.5724 | 4990 | - | 0.4929 |
| 0.5736 | 5000 | 0.8348 | 0.4924 |
| 0.5747 | 5010 | - | 0.4921 |
| 0.5759 | 5020 | - | 0.4915 |
| 0.5770 | 5030 | - | 0.4911 |
| 0.5782 | 5040 | - | 0.4909 |
| 0.5793 | 5050 | - | 0.4905 |
| 0.5805 | 5060 | - | 0.4900 |
| 0.5816 | 5070 | - | 0.4892 |
| 0.5828 | 5080 | - | 0.4886 |
| 0.5839 | 5090 | - | 0.4883 |
| 0.5851 | 5100 | 0.871 | 0.4879 |
| 0.5862 | 5110 | - | 0.4877 |
| 0.5874 | 5120 | - | 0.4874 |
| 0.5885 | 5130 | - | 0.4870 |
| 0.5897 | 5140 | - | 0.4867 |
| 0.5908 | 5150 | - | 0.4864 |
| 0.5919 | 5160 | - | 0.4862 |
| 0.5931 | 5170 | - | 0.4860 |
| 0.5942 | 5180 | - | 0.4857 |
| 0.5954 | 5190 | - | 0.4855 |
| 0.5965 | 5200 | 0.8522 | 0.4850 |
| 0.5977 | 5210 | - | 0.4846 |
| 0.5988 | 5220 | - | 0.4844 |
| 0.6000 | 5230 | - | 0.4842 |
| 0.6011 | 5240 | - | 0.4837 |
| 0.6023 | 5250 | - | 0.4835 |
| 0.6034 | 5260 | - | 0.4831 |
| 0.6046 | 5270 | - | 0.4826 |
| 0.6057 | 5280 | - | 0.4822 |
| 0.6069 | 5290 | - | 0.4822 |
| 0.6080 | 5300 | 0.869 | 0.4820 |
| 0.6092 | 5310 | - | 0.4818 |
| 0.6103 | 5320 | - | 0.4819 |
| 0.6114 | 5330 | - | 0.4819 |
| 0.6126 | 5340 | - | 0.4815 |
| 0.6137 | 5350 | - | 0.4813 |
| 0.6149 | 5360 | - | 0.4812 |
| 0.6160 | 5370 | - | 0.4810 |
| 0.6172 | 5380 | - | 0.4809 |
| 0.6183 | 5390 | - | 0.4806 |
| 0.6195 | 5400 | 0.8548 | 0.4805 |
| 0.6206 | 5410 | - | 0.4800 |
| 0.6218 | 5420 | - | 0.4798 |
| 0.6229 | 5430 | - | 0.4795 |
| 0.6241 | 5440 | - | 0.4792 |
| 0.6252 | 5450 | - | 0.4790 |
| 0.6264 | 5460 | - | 0.4790 |
| 0.6275 | 5470 | - | 0.4791 |
| 0.6287 | 5480 | - | 0.4794 |
| 0.6298 | 5490 | - | 0.4792 |
| 0.6310 | 5500 | 0.8366 | 0.4790 |
| 0.6321 | 5510 | - | 0.4786 |
| 0.6332 | 5520 | - | 0.4780 |
| 0.6344 | 5530 | - | 0.4773 |
| 0.6355 | 5540 | - | 0.4768 |
| 0.6367 | 5550 | - | 0.4767 |
| 0.6378 | 5560 | - | 0.4765 |
| 0.6390 | 5570 | - | 0.4765 |
| 0.6401 | 5580 | - | 0.4763 |
| 0.6413 | 5590 | - | 0.4760 |
| 0.6424 | 5600 | 0.8696 | 0.4757 |
| 0.6436 | 5610 | - | 0.4754 |
| 0.6447 | 5620 | - | 0.4752 |
| 0.6459 | 5630 | - | 0.4751 |
| 0.6470 | 5640 | - | 0.4747 |
| 0.6482 | 5650 | - | 0.4747 |
| 0.6493 | 5660 | - | 0.4742 |
| 0.6505 | 5670 | - | 0.4740 |
| 0.6516 | 5680 | - | 0.4736 |
| 0.6527 | 5690 | - | 0.4730 |
| 0.6539 | 5700 | 0.8302 | 0.4725 |
| 0.6550 | 5710 | - | 0.4723 |
| 0.6562 | 5720 | - | 0.4720 |
| 0.6573 | 5730 | - | 0.4718 |
| 0.6585 | 5740 | - | 0.4715 |
| 0.6596 | 5750 | - | 0.4714 |
| 0.6608 | 5760 | - | 0.4711 |
| 0.6619 | 5770 | - | 0.4707 |
| 0.6631 | 5780 | - | 0.4707 |
| 0.6642 | 5790 | - | 0.4703 |
| 0.6654 | 5800 | 0.8128 | 0.4703 |
| 0.6665 | 5810 | - | 0.4701 |
| 0.6677 | 5820 | - | 0.4699 |
| 0.6688 | 5830 | - | 0.4697 |
| 0.6700 | 5840 | - | 0.4698 |
| 0.6711 | 5850 | - | 0.4695 |
| 0.6722 | 5860 | - | 0.4691 |
| 0.6734 | 5870 | - | 0.4689 |
| 0.6745 | 5880 | - | 0.4689 |
| 0.6757 | 5890 | - | 0.4688 |
| 0.6768 | 5900 | 0.8437 | 0.4683 |
| 0.6780 | 5910 | - | 0.4683 |
| 0.6791 | 5920 | - | 0.4681 |
| 0.6803 | 5930 | - | 0.4678 |
| 0.6814 | 5940 | - | 0.4677 |
| 0.6826 | 5950 | - | 0.4676 |
| 0.6837 | 5960 | - | 0.4673 |
| 0.6849 | 5970 | - | 0.4668 |
| 0.6860 | 5980 | - | 0.4667 |
| 0.6872 | 5990 | - | 0.4661 |
| 0.6883 | 6000 | 0.7774 | 0.4657 |
| 0.6895 | 6010 | - | 0.4654 |
| 0.6906 | 6020 | - | 0.4650 |
| 0.6918 | 6030 | - | 0.4648 |
| 0.6929 | 6040 | - | 0.4646 |
| 0.6940 | 6050 | - | 0.4644 |
| 0.6952 | 6060 | - | 0.4643 |
| 0.6963 | 6070 | - | 0.4641 |
| 0.6975 | 6080 | - | 0.4640 |
| 0.6986 | 6090 | - | 0.4638 |
| 0.6998 | 6100 | 0.834 | 0.4637 |
| 0.7009 | 6110 | - | 0.4633 |
| 0.7021 | 6120 | - | 0.4632 |
| 0.7032 | 6130 | - | 0.4631 |
| 0.7044 | 6140 | - | 0.4628 |
| 0.7055 | 6150 | - | 0.4627 |
| 0.7067 | 6160 | - | 0.4623 |
| 0.7078 | 6170 | - | 0.4617 |
| 0.7090 | 6180 | - | 0.4615 |
| 0.7101 | 6190 | - | 0.4614 |
| 0.7113 | 6200 | 0.8118 | 0.4612 |
| 0.7124 | 6210 | - | 0.4612 |
| 0.7135 | 6220 | - | 0.4612 |
| 0.7147 | 6230 | - | 0.4610 |
| 0.7158 | 6240 | - | 0.4609 |
| 0.7170 | 6250 | - | 0.4610 |
| 0.7181 | 6260 | - | 0.4611 |
| 0.7193 | 6270 | - | 0.4607 |
| 0.7204 | 6280 | - | 0.4599 |
| 0.7216 | 6290 | - | 0.4598 |
| 0.7227 | 6300 | 0.7884 | 0.4600 |
| 0.7239 | 6310 | - | 0.4599 |
| 0.7250 | 6320 | - | 0.4600 |
| 0.7262 | 6330 | - | 0.4601 |
| 0.7273 | 6340 | - | 0.4603 |
| 0.7285 | 6350 | - | 0.4603 |
| 0.7296 | 6360 | - | 0.4598 |
| 0.7308 | 6370 | - | 0.4597 |
| 0.7319 | 6380 | - | 0.4596 |
| 0.7331 | 6390 | - | 0.4594 |
| 0.7342 | 6400 | 0.8092 | 0.4590 |
| 0.7353 | 6410 | - | 0.4588 |
| 0.7365 | 6420 | - | 0.4585 |
| 0.7376 | 6430 | - | 0.4584 |
| 0.7388 | 6440 | - | 0.4580 |
| 0.7399 | 6450 | - | 0.4574 |
| 0.7411 | 6460 | - | 0.4570 |
| 0.7422 | 6470 | - | 0.4566 |
| 0.7434 | 6480 | - | 0.4563 |
| 0.7445 | 6490 | - | 0.4560 |
| 0.7457 | 6500 | 0.8195 | 0.4557 |
| 0.7468 | 6510 | - | 0.4556 |
| 0.7480 | 6520 | - | 0.4554 |
| 0.7491 | 6530 | - | 0.4551 |
| 0.7503 | 6540 | - | 0.4548 |
| 0.7514 | 6550 | - | 0.4545 |
| 0.7526 | 6560 | - | 0.4543 |
| 0.7537 | 6570 | - | 0.4541 |
| 0.7548 | 6580 | - | 0.4540 |
| 0.7560 | 6590 | - | 0.4538 |
| 0.7571 | 6600 | 0.8163 | 0.4535 |
| 0.7583 | 6610 | - | 0.4533 |
| 0.7594 | 6620 | - | 0.4536 |
| 0.7606 | 6630 | - | 0.4535 |
| 0.7617 | 6640 | - | 0.4533 |
| 0.7629 | 6650 | - | 0.4532 |
| 0.7640 | 6660 | - | 0.4531 |
| 0.7652 | 6670 | - | 0.4531 |
| 0.7663 | 6680 | - | 0.4530 |
| 0.7675 | 6690 | - | 0.4528 |
| 0.7686 | 6700 | 0.8091 | 0.4527 |
| 0.7698 | 6710 | - | 0.4527 |
| 0.7709 | 6720 | - | 0.4526 |
| 0.7721 | 6730 | - | 0.4525 |
| 0.7732 | 6740 | - | 0.4524 |
| 0.7743 | 6750 | - | 0.4521 |
| 0.7755 | 6760 | - | 0.4517 |
| 0.7766 | 6770 | - | 0.4514 |
| 0.7778 | 6780 | - | 0.4512 |
| 0.7789 | 6790 | - | 0.4514 |
| 0.7801 | 6800 | 0.8098 | 0.4515 |
| 0.7812 | 6810 | - | 0.4514 |
| 0.7824 | 6820 | - | 0.4511 |
| 0.7835 | 6830 | - | 0.4507 |
| 0.7847 | 6840 | - | 0.4505 |
| 0.7858 | 6850 | - | 0.4504 |
| 0.7870 | 6860 | - | 0.4503 |
| 0.7881 | 6870 | - | 0.4500 |
| 0.7893 | 6880 | - | 0.4498 |
| 0.7904 | 6890 | - | 0.4495 |
| 0.7916 | 6900 | 0.7857 | 0.4491 |
| 0.7927 | 6910 | - | 0.4490 |
| 0.7939 | 6920 | - | 0.4488 |
| 0.7950 | 6930 | - | 0.4488 |
| 0.7961 | 6940 | - | 0.4488 |
| 0.7973 | 6950 | - | 0.4487 |
| 0.7984 | 6960 | - | 0.4484 |
| 0.7996 | 6970 | - | 0.4482 |
| 0.8007 | 6980 | - | 0.4483 |
| 0.8019 | 6990 | - | 0.4481 |
| 0.8030 | 7000 | 0.7817 | 0.4477 |
| 0.8042 | 7010 | - | 0.4476 |
| 0.8053 | 7020 | - | 0.4471 |
| 0.8065 | 7030 | - | 0.4469 |
| 0.8076 | 7040 | - | 0.4468 |
| 0.8088 | 7050 | - | 0.4465 |
| 0.8099 | 7060 | - | 0.4460 |
| 0.8111 | 7070 | - | 0.4458 |
| 0.8122 | 7080 | - | 0.4458 |
| 0.8134 | 7090 | - | 0.4454 |
| 0.8145 | 7100 | 0.779 | 0.4452 |
| 0.8156 | 7110 | - | 0.4449 |
| 0.8168 | 7120 | - | 0.4448 |
| 0.8179 | 7130 | - | 0.4446 |
| 0.8191 | 7140 | - | 0.4442 |
| 0.8202 | 7150 | - | 0.4442 |
| 0.8214 | 7160 | - | 0.4441 |
| 0.8225 | 7170 | - | 0.4440 |
| 0.8237 | 7180 | - | 0.4437 |
| 0.8248 | 7190 | - | 0.4434 |
| 0.8260 | 7200 | 0.7807 | 0.4434 |
| 0.8271 | 7210 | - | 0.4435 |
| 0.8283 | 7220 | - | 0.4433 |
| 0.8294 | 7230 | - | 0.4431 |
| 0.8306 | 7240 | - | 0.4430 |
| 0.8317 | 7250 | - | 0.4428 |
| 0.8329 | 7260 | - | 0.4426 |
| 0.8340 | 7270 | - | 0.4424 |
| 0.8351 | 7280 | - | 0.4428 |
| 0.8363 | 7290 | - | 0.4426 |
| 0.8374 | 7300 | 0.7724 | 0.4423 |
| 0.8386 | 7310 | - | 0.4419 |
| 0.8397 | 7320 | - | 0.4418 |
| 0.8409 | 7330 | - | 0.4417 |
| 0.8420 | 7340 | - | 0.4415 |
| 0.8432 | 7350 | - | 0.4413 |
| 0.8443 | 7360 | - | 0.4409 |
| 0.8455 | 7370 | - | 0.4406 |
| 0.8466 | 7380 | - | 0.4405 |
| 0.8478 | 7390 | - | 0.4400 |
| 0.8489 | 7400 | 0.7898 | 0.4393 |
| 0.8501 | 7410 | - | 0.4389 |
| 0.8512 | 7420 | - | 0.4384 |
| 0.8524 | 7430 | - | 0.4381 |
| 0.8535 | 7440 | - | 0.4380 |
| 0.8547 | 7450 | - | 0.4380 |
| 0.8558 | 7460 | - | 0.4379 |
| 0.8569 | 7470 | - | 0.4377 |
| 0.8581 | 7480 | - | 0.4377 |
| 0.8592 | 7490 | - | 0.4376 |
| 0.8604 | 7500 | 0.8009 | 0.4375 |
| 0.8615 | 7510 | - | 0.4371 |
| 0.8627 | 7520 | - | 0.4369 |
| 0.8638 | 7530 | - | 0.4365 |
| 0.8650 | 7540 | - | 0.4362 |
| 0.8661 | 7550 | - | 0.4359 |
| 0.8673 | 7560 | - | 0.4357 |
| 0.8684 | 7570 | - | 0.4355 |
| 0.8696 | 7580 | - | 0.4351 |
| 0.8707 | 7590 | - | 0.4347 |
| 0.8719 | 7600 | 0.7847 | 0.4346 |
| 0.8730 | 7610 | - | 0.4346 |
| 0.8742 | 7620 | - | 0.4344 |
| 0.8753 | 7630 | - | 0.4343 |
| 0.8764 | 7640 | - | 0.4338 |
| 0.8776 | 7650 | - | 0.4336 |
| 0.8787 | 7660 | - | 0.4332 |
| 0.8799 | 7670 | - | 0.4331 |
| 0.8810 | 7680 | - | 0.4329 |
| 0.8822 | 7690 | - | 0.4326 |
| 0.8833 | 7700 | 0.7668 | 0.4324 |
| 0.8845 | 7710 | - | 0.4325 |
| 0.8856 | 7720 | - | 0.4327 |
| 0.8868 | 7730 | - | 0.4329 |
| 0.8879 | 7740 | - | 0.4328 |
| 0.8891 | 7750 | - | 0.4325 |
| 0.8902 | 7760 | - | 0.4325 |
| 0.8914 | 7770 | - | 0.4326 |
| 0.8925 | 7780 | - | 0.4324 |
| 0.8937 | 7790 | - | 0.4322 |
| 0.8948 | 7800 | 0.7987 | 0.4320 |
| 0.8960 | 7810 | - | 0.4319 |
| 0.8971 | 7820 | - | 0.4318 |
| 0.8982 | 7830 | - | 0.4315 |
| 0.8994 | 7840 | - | 0.4312 |
| 0.9005 | 7850 | - | 0.4308 |
| 0.9017 | 7860 | - | 0.4308 |
| 0.9028 | 7870 | - | 0.4309 |
| 0.9040 | 7880 | - | 0.4306 |
| 0.9051 | 7890 | - | 0.4305 |
| 0.9063 | 7900 | 0.7691 | 0.4305 |
| 0.9074 | 7910 | - | 0.4305 |
| 0.9086 | 7920 | - | 0.4308 |
| 0.9097 | 7930 | - | 0.4309 |
| 0.9109 | 7940 | - | 0.4309 |
| 0.9120 | 7950 | - | 0.4305 |
| 0.9132 | 7960 | - | 0.4297 |
| 0.9143 | 7970 | - | 0.4294 |
| 0.9155 | 7980 | - | 0.4292 |
| 0.9166 | 7990 | - | 0.4292 |
| 0.9177 | 8000 | 0.7828 | 0.4289 |
| 0.9189 | 8010 | - | 0.4288 |
| 0.9200 | 8020 | - | 0.4289 |
| 0.9212 | 8030 | - | 0.4285 |
| 0.9223 | 8040 | - | 0.4286 |
| 0.9235 | 8050 | - | 0.4289 |
| 0.9246 | 8060 | - | 0.4288 |
| 0.9258 | 8070 | - | 0.4290 |
| 0.9269 | 8080 | - | 0.4289 |
| 0.9281 | 8090 | - | 0.4287 |
| 0.9292 | 8100 | 0.7544 | 0.4288 |
| 0.9304 | 8110 | - | 0.4284 |
| 0.9315 | 8120 | - | 0.4287 |
| 0.9327 | 8130 | - | 0.4289 |
| 0.9338 | 8140 | - | 0.4293 |
| 0.9350 | 8150 | - | 0.4292 |
| 0.9361 | 8160 | - | 0.4289 |
| 0.9372 | 8170 | - | 0.4286 |
| 0.9384 | 8180 | - | 0.4280 |
| 0.9395 | 8190 | - | 0.4281 |
| 0.9407 | 8200 | 0.7502 | 0.4281 |
| 0.9418 | 8210 | - | 0.4278 |
| 0.9430 | 8220 | - | 0.4276 |
| 0.9441 | 8230 | - | 0.4274 |
| 0.9453 | 8240 | - | 0.4270 |
| 0.9464 | 8250 | - | 0.4267 |
| 0.9476 | 8260 | - | 0.4263 |
| 0.9487 | 8270 | - | 0.4261 |
| 0.9499 | 8280 | - | 0.4257 |
| 0.9510 | 8290 | - | 0.4254 |
| 0.9522 | 8300 | 0.7818 | 0.4255 |
| 0.9533 | 8310 | - | 0.4255 |
| 0.9545 | 8320 | - | 0.4254 |
| 0.9556 | 8330 | - | 0.4252 |
| 0.9568 | 8340 | - | 0.4249 |
| 0.9579 | 8350 | - | 0.4249 |
| 0.9590 | 8360 | - | 0.4248 |
| 0.9602 | 8370 | - | 0.4249 |
| 0.9613 | 8380 | - | 0.4248 |
| 0.9625 | 8390 | - | 0.4246 |
| 0.9636 | 8400 | 0.7606 | 0.4243 |
| 0.9648 | 8410 | - | 0.4242 |
| 0.9659 | 8420 | - | 0.4240 |
| 0.9671 | 8430 | - | 0.4239 |
| 0.9682 | 8440 | - | 0.4238 |
| 0.9694 | 8450 | - | 0.4238 |
| 0.9705 | 8460 | - | 0.4237 |
| 0.9717 | 8470 | - | 0.4236 |
| 0.9728 | 8480 | - | 0.4232 |
| 0.9740 | 8490 | - | 0.4229 |
| 0.9751 | 8500 | 0.7416 | 0.4227 |
| 0.9763 | 8510 | - | 0.4226 |
| 0.9774 | 8520 | - | 0.4220 |
| 0.9785 | 8530 | - | 0.4218 |
| 0.9797 | 8540 | - | 0.4217 |
| 0.9808 | 8550 | - | 0.4217 |
| 0.9820 | 8560 | - | 0.4215 |
| 0.9831 | 8570 | - | 0.4216 |
| 0.9843 | 8580 | - | 0.4217 |
| 0.9854 | 8590 | - | 0.4216 |
| 0.9866 | 8600 | 0.748 | 0.4217 |
| 0.9877 | 8610 | - | 0.4215 |
| 0.9889 | 8620 | - | 0.4216 |
| 0.9900 | 8630 | - | 0.4218 |
| 0.9912 | 8640 | - | 0.4218 |
| 0.9923 | 8650 | - | 0.4219 |
| 0.9935 | 8660 | - | 0.4217 |
| 0.9946 | 8670 | - | 0.4217 |
| 0.9958 | 8680 | - | 0.4214 |
| 0.9969 | 8690 | - | 0.4210 |
| 0.9980 | 8700 | 0.7553 | 0.4205 |
| 0.9992 | 8710 | - | 0.4200 |
| 1.0003 | 8720 | - | 0.4199 |
| 1.0015 | 8730 | - | 0.4199 |
| 1.0026 | 8740 | - | 0.4199 |
| 1.0038 | 8750 | - | 0.4198 |
| 1.0049 | 8760 | - | 0.4200 |
| 1.0061 | 8770 | - | 0.4198 |
| 1.0072 | 8780 | - | 0.4195 |
| 1.0084 | 8790 | - | 0.4194 |
| 1.0095 | 8800 | 0.7202 | 0.4191 |
| 1.0107 | 8810 | - | 0.4190 |
| 1.0118 | 8820 | - | 0.4188 |
| 1.0130 | 8830 | - | 0.4188 |
| 1.0141 | 8840 | - | 0.4192 |
| 1.0153 | 8850 | - | 0.4190 |
| 1.0164 | 8860 | - | 0.4191 |
| 1.0176 | 8870 | - | 0.4190 |
| 1.0187 | 8880 | - | 0.4192 |
| 1.0198 | 8890 | - | 0.4190 |
| 1.0210 | 8900 | 0.7567 | 0.4189 |
| 1.0221 | 8910 | - | 0.4188 |
| 1.0233 | 8920 | - | 0.4189 |
| 1.0244 | 8930 | - | 0.4188 |
| 1.0256 | 8940 | - | 0.4187 |
| 1.0267 | 8950 | - | 0.4183 |
| 1.0279 | 8960 | - | 0.4182 |
| 1.0290 | 8970 | - | 0.4182 |
| 1.0302 | 8980 | - | 0.4184 |
| 1.0313 | 8990 | - | 0.4181 |
| 1.0325 | 9000 | 0.7345 | 0.4177 |
| 1.0336 | 9010 | - | 0.4173 |
| 1.0348 | 9020 | - | 0.4171 |
| 1.0359 | 9030 | - | 0.4172 |
| 1.0371 | 9040 | - | 0.4171 |
| 1.0382 | 9050 | - | 0.4172 |
| 1.0393 | 9060 | - | 0.4172 |
| 1.0405 | 9070 | - | 0.4170 |
| 1.0416 | 9080 | - | 0.4165 |
| 1.0428 | 9090 | - | 0.4162 |
| 1.0439 | 9100 | 0.7344 | 0.4162 |
| 1.0451 | 9110 | - | 0.4160 |
| 1.0462 | 9120 | - | 0.4158 |
| 1.0474 | 9130 | - | 0.4157 |
| 1.0485 | 9140 | - | 0.4157 |
| 1.0497 | 9150 | - | 0.4156 |
| 1.0508 | 9160 | - | 0.4153 |
| 1.0520 | 9170 | - | 0.4153 |
| 1.0531 | 9180 | - | 0.4154 |
| 1.0543 | 9190 | - | 0.4154 |
| 1.0554 | 9200 | 0.7233 | 0.4157 |
| 1.0566 | 9210 | - | 0.4157 |
| 1.0577 | 9220 | - | 0.4156 |
| 1.0589 | 9230 | - | 0.4155 |
| 1.0600 | 9240 | - | 0.4153 |
| 1.0611 | 9250 | - | 0.4154 |
| 1.0623 | 9260 | - | 0.4155 |
| 1.0634 | 9270 | - | 0.4154 |
| 1.0646 | 9280 | - | 0.4151 |
| 1.0657 | 9290 | - | 0.4149 |
| 1.0669 | 9300 | 0.7442 | 0.4148 |
| 1.0680 | 9310 | - | 0.4144 |
| 1.0692 | 9320 | - | 0.4143 |
| 1.0703 | 9330 | - | 0.4141 |
| 1.0715 | 9340 | - | 0.4140 |
| 1.0726 | 9350 | - | 0.4138 |
| 1.0738 | 9360 | - | 0.4136 |
| 1.0749 | 9370 | - | 0.4133 |
| 1.0761 | 9380 | - | 0.4132 |
| 1.0772 | 9390 | - | 0.4130 |
| 1.0784 | 9400 | 0.722 | 0.4129 |
| 1.0795 | 9410 | - | 0.4131 |
| 1.0806 | 9420 | - | 0.4132 |
| 1.0818 | 9430 | - | 0.4133 |
| 1.0829 | 9440 | - | 0.4134 |
| 1.0841 | 9450 | - | 0.4134 |
| 1.0852 | 9460 | - | 0.4133 |
| 1.0864 | 9470 | - | 0.4132 |
| 1.0875 | 9480 | - | 0.4132 |
| 1.0887 | 9490 | - | 0.4134 |
| 1.0898 | 9500 | 0.7433 | 0.4133 |
| 1.0910 | 9510 | - | 0.4133 |
| 1.0921 | 9520 | - | 0.4133 |
| 1.0933 | 9530 | - | 0.4132 |
| 1.0944 | 9540 | - | 0.4131 |
| 1.0956 | 9550 | - | 0.4130 |
| 1.0967 | 9560 | - | 0.4130 |
| 1.0979 | 9570 | - | 0.4126 |
| 1.0990 | 9580 | - | 0.4125 |
| 1.1001 | 9590 | - | 0.4121 |
| 1.1013 | 9600 | 0.746 | 0.4119 |
| 1.1024 | 9610 | - | 0.4117 |
| 1.1036 | 9620 | - | 0.4112 |
| 1.1047 | 9630 | - | 0.4109 |
| 1.1059 | 9640 | - | 0.4106 |
| 1.1070 | 9650 | - | 0.4101 |
| 1.1082 | 9660 | - | 0.4101 |
| 1.1093 | 9670 | - | 0.4102 |
| 1.1105 | 9680 | - | 0.4102 |
| 1.1116 | 9690 | - | 0.4101 |
| 1.1128 | 9700 | 0.7447 | 0.4099 |
| 1.1139 | 9710 | - | 0.4100 |
| 1.1151 | 9720 | - | 0.4098 |
| 1.1162 | 9730 | - | 0.4097 |
| 1.1174 | 9740 | - | 0.4094 |
| 1.1185 | 9750 | - | 0.4097 |
| 1.1197 | 9760 | - | 0.4096 |
| 1.1208 | 9770 | - | 0.4096 |
| 1.1219 | 9780 | - | 0.4097 |
| 1.1231 | 9790 | - | 0.4097 |
| 1.1242 | 9800 | 0.7234 | 0.4094 |
| 1.1254 | 9810 | - | 0.4090 |
| 1.1265 | 9820 | - | 0.4090 |
| 1.1277 | 9830 | - | 0.4091 |
| 1.1288 | 9840 | - | 0.4091 |
| 1.1300 | 9850 | - | 0.4090 |
| 1.1311 | 9860 | - | 0.4088 |
| 1.1323 | 9870 | - | 0.4088 |
| 1.1334 | 9880 | - | 0.4085 |
| 1.1346 | 9890 | - | 0.4085 |
| 1.1357 | 9900 | 0.7054 | 0.4084 |
| 1.1369 | 9910 | - | 0.4087 |
| 1.1380 | 9920 | - | 0.4089 |
| 1.1392 | 9930 | - | 0.4089 |
| 1.1403 | 9940 | - | 0.4088 |
| 1.1414 | 9950 | - | 0.4091 |
| 1.1426 | 9960 | - | 0.4088 |
| 1.1437 | 9970 | - | 0.4086 |
| 1.1449 | 9980 | - | 0.4084 |
| 1.1460 | 9990 | - | 0.4089 |
| 1.1472 | 10000 | 0.7071 | 0.4088 |
| 1.1483 | 10010 | - | 0.4086 |
| 1.1495 | 10020 | - | 0.4081 |
| 1.1506 | 10030 | - | 0.4079 |
| 1.1518 | 10040 | - | 0.4079 |
| 1.1529 | 10050 | - | 0.4081 |
| 1.1541 | 10060 | - | 0.4081 |
| 1.1552 | 10070 | - | 0.4080 |
| 1.1564 | 10080 | - | 0.4079 |
| 1.1575 | 10090 | - | 0.4078 |
| 1.1587 | 10100 | 0.7289 | 0.4075 |
| 1.1598 | 10110 | - | 0.4072 |
| 1.1609 | 10120 | - | 0.4070 |
| 1.1621 | 10130 | - | 0.4070 |
| 1.1632 | 10140 | - | 0.4074 |
| 1.1644 | 10150 | - | 0.4074 |
| 1.1655 | 10160 | - | 0.4073 |
| 1.1667 | 10170 | - | 0.4073 |
| 1.1678 | 10180 | - | 0.4072 |
| 1.1690 | 10190 | - | 0.4073 |
| 1.1701 | 10200 | 0.758 | 0.4071 |
| 1.1713 | 10210 | - | 0.4071 |
| 1.1724 | 10220 | - | 0.4071 |
| 1.1736 | 10230 | - | 0.4068 |
| 1.1747 | 10240 | - | 0.4063 |
| 1.1759 | 10250 | - | 0.4062 |
| 1.1770 | 10260 | - | 0.4064 |
| 1.1782 | 10270 | - | 0.4065 |
| 1.1793 | 10280 | - | 0.4063 |
| 1.1805 | 10290 | - | 0.4065 |
| 1.1816 | 10300 | 0.7322 | 0.4066 |
| 1.1827 | 10310 | - | 0.4065 |
| 1.1839 | 10320 | - | 0.4065 |
| 1.1850 | 10330 | - | 0.4061 |
| 1.1862 | 10340 | - | 0.4060 |
| 1.1873 | 10350 | - | 0.4057 |
| 1.1885 | 10360 | - | 0.4056 |
| 1.1896 | 10370 | - | 0.4056 |
| 1.1908 | 10380 | - | 0.4059 |
| 1.1919 | 10390 | - | 0.4061 |
| 1.1931 | 10400 | 0.6948 | 0.4059 |
| 1.1942 | 10410 | - | 0.4059 |
| 1.1954 | 10420 | - | 0.4060 |
| 1.1965 | 10430 | - | 0.4058 |
| 1.1977 | 10440 | - | 0.4057 |
| 1.1988 | 10450 | - | 0.4056 |
| 1.2000 | 10460 | - | 0.4056 |
| 1.2011 | 10470 | - | 0.4056 |
| 1.2022 | 10480 | - | 0.4057 |
| 1.2034 | 10490 | - | 0.4056 |
| 1.2045 | 10500 | 0.7185 | 0.4055 |
| 1.2057 | 10510 | - | 0.4056 |
| 1.2068 | 10520 | - | 0.4054 |
| 1.2080 | 10530 | - | 0.4053 |
| 1.2091 | 10540 | - | 0.4051 |
| 1.2103 | 10550 | - | 0.4050 |
| 1.2114 | 10560 | - | 0.4051 |
| 1.2126 | 10570 | - | 0.4052 |
| 1.2137 | 10580 | - | 0.4053 |
| 1.2149 | 10590 | - | 0.4053 |
| 1.2160 | 10600 | 0.7039 | 0.4053 |
| 1.2172 | 10610 | - | 0.4054 |
| 1.2183 | 10620 | - | 0.4051 |
| 1.2195 | 10630 | - | 0.4050 |
| 1.2206 | 10640 | - | 0.4048 |
| 1.2218 | 10650 | - | 0.4044 |
| 1.2229 | 10660 | - | 0.4046 |
| 1.2240 | 10670 | - | 0.4044 |
| 1.2252 | 10680 | - | 0.4041 |
| 1.2263 | 10690 | - | 0.4039 |
| 1.2275 | 10700 | 0.6969 | 0.4037 |
| 1.2286 | 10710 | - | 0.4037 |
| 1.2298 | 10720 | - | 0.4035 |
| 1.2309 | 10730 | - | 0.4036 |
| 1.2321 | 10740 | - | 0.4035 |
| 1.2332 | 10750 | - | 0.4038 |
| 1.2344 | 10760 | - | 0.4038 |
| 1.2355 | 10770 | - | 0.4037 |
| 1.2367 | 10780 | - | 0.4037 |
| 1.2378 | 10790 | - | 0.4037 |
| 1.2390 | 10800 | 0.6921 | 0.4038 |
| 1.2401 | 10810 | - | 0.4039 |
| 1.2413 | 10820 | - | 0.4038 |
| 1.2424 | 10830 | - | 0.4037 |
| 1.2435 | 10840 | - | 0.4040 |
| 1.2447 | 10850 | - | 0.4042 |
| 1.2458 | 10860 | - | 0.4044 |
| 1.2470 | 10870 | - | 0.4043 |
| 1.2481 | 10880 | - | 0.4043 |
| 1.2493 | 10890 | - | 0.4044 |
| 1.2504 | 10900 | 0.728 | 0.4042 |
| 1.2516 | 10910 | - | 0.4044 |
| 1.2527 | 10920 | - | 0.4043 |
| 1.2539 | 10930 | - | 0.4039 |
| 1.2550 | 10940 | - | 0.4038 |
| 1.2562 | 10950 | - | 0.4037 |
| 1.2573 | 10960 | - | 0.4035 |
| 1.2585 | 10970 | - | 0.4032 |
| 1.2596 | 10980 | - | 0.4024 |
| 1.2608 | 10990 | - | 0.4019 |
| 1.2619 | 11000 | 0.713 | 0.4018 |
| 1.2630 | 11010 | - | 0.4015 |
| 1.2642 | 11020 | - | 0.4015 |
| 1.2653 | 11030 | - | 0.4014 |
| 1.2665 | 11040 | - | 0.4015 |
| 1.2676 | 11050 | - | 0.4014 |
| 1.2688 | 11060 | - | 0.4013 |
| 1.2699 | 11070 | - | 0.4015 |
| 1.2711 | 11080 | - | 0.4016 |
| 1.2722 | 11090 | - | 0.4017 |
| 1.2734 | 11100 | 0.668 | 0.4017 |
| 1.2745 | 11110 | - | 0.4016 |
| 1.2757 | 11120 | - | 0.4016 |
| 1.2768 | 11130 | - | 0.4019 |
| 1.2780 | 11140 | - | 0.4021 |
| 1.2791 | 11150 | - | 0.4019 |
| 1.2803 | 11160 | - | 0.4017 |
| 1.2814 | 11170 | - | 0.4017 |
| 1.2826 | 11180 | - | 0.4018 |
| 1.2837 | 11190 | - | 0.4013 |
| 1.2848 | 11200 | 0.7101 | 0.4011 |
| 1.2860 | 11210 | - | 0.4011 |
| 1.2871 | 11220 | - | 0.4014 |
| 1.2883 | 11230 | - | 0.4015 |
| 1.2894 | 11240 | - | 0.4010 |
| 1.2906 | 11250 | - | 0.4012 |
| 1.2917 | 11260 | - | 0.4013 |
| 1.2929 | 11270 | - | 0.4010 |
| 1.2940 | 11280 | - | 0.4006 |
| 1.2952 | 11290 | - | 0.4005 |
| 1.2963 | 11300 | 0.6963 | 0.4004 |
| 1.2975 | 11310 | - | 0.4003 |
| 1.2986 | 11320 | - | 0.4004 |
| 1.2998 | 11330 | - | 0.4003 |
| 1.3009 | 11340 | - | 0.3999 |
| 1.3021 | 11350 | - | 0.3997 |
| 1.3032 | 11360 | - | 0.3996 |
| 1.3043 | 11370 | - | 0.3997 |
| 1.3055 | 11380 | - | 0.3996 |
| 1.3066 | 11390 | - | 0.3994 |
| 1.3078 | 11400 | 0.6706 | 0.3993 |
| 1.3089 | 11410 | - | 0.3991 |
| 1.3101 | 11420 | - | 0.3990 |
| 1.3112 | 11430 | - | 0.3990 |
| 1.3124 | 11440 | - | 0.3987 |
| 1.3135 | 11450 | - | 0.3981 |
| 1.3147 | 11460 | - | 0.3978 |
| 1.3158 | 11470 | - | 0.3975 |
| 1.3170 | 11480 | - | 0.3974 |
| 1.3181 | 11490 | - | 0.3974 |
| 1.3193 | 11500 | 0.6962 | 0.3974 |
| 1.3204 | 11510 | - | 0.3975 |
| 1.3216 | 11520 | - | 0.3975 |
| 1.3227 | 11530 | - | 0.3976 |
| 1.3238 | 11540 | - | 0.3977 |
| 1.3250 | 11550 | - | 0.3975 |
| 1.3261 | 11560 | - | 0.3974 |
| 1.3273 | 11570 | - | 0.3973 |
| 1.3284 | 11580 | - | 0.3971 |
| 1.3296 | 11590 | - | 0.3969 |
| 1.3307 | 11600 | 0.7083 | 0.3970 |
| 1.3319 | 11610 | - | 0.3970 |
| 1.3330 | 11620 | - | 0.3971 |
| 1.3342 | 11630 | - | 0.3973 |
| 1.3353 | 11640 | - | 0.3975 |
| 1.3365 | 11650 | - | 0.3973 |
| 1.3376 | 11660 | - | 0.3973 |
| 1.3388 | 11670 | - | 0.3973 |
| 1.3399 | 11680 | - | 0.3976 |
| 1.3411 | 11690 | - | 0.3976 |
| 1.3422 | 11700 | 0.6757 | 0.3976 |
| 1.3434 | 11710 | - | 0.3975 |
| 1.3445 | 11720 | - | 0.3973 |
| 1.3456 | 11730 | - | 0.3971 |
| 1.3468 | 11740 | - | 0.3963 |
| 1.3479 | 11750 | - | 0.3964 |
| 1.3491 | 11760 | - | 0.3965 |
| 1.3502 | 11770 | - | 0.3967 |
| 1.3514 | 11780 | - | 0.3966 |
| 1.3525 | 11790 | - | 0.3964 |
| 1.3537 | 11800 | 0.7091 | 0.3965 |
| 1.3548 | 11810 | - | 0.3964 |
| 1.3560 | 11820 | - | 0.3964 |
| 1.3571 | 11830 | - | 0.3963 |
| 1.3583 | 11840 | - | 0.3962 |
| 1.3594 | 11850 | - | 0.3961 |
| 1.3606 | 11860 | - | 0.3956 |
| 1.3617 | 11870 | - | 0.3956 |
| 1.3629 | 11880 | - | 0.3961 |
| 1.3640 | 11890 | - | 0.3963 |
| 1.3651 | 11900 | 0.6977 | 0.3962 |
| 1.3663 | 11910 | - | 0.3958 |
| 1.3674 | 11920 | - | 0.3960 |
| 1.3686 | 11930 | - | 0.3963 |
| 1.3697 | 11940 | - | 0.3964 |
| 1.3709 | 11950 | - | 0.3961 |
| 1.3720 | 11960 | - | 0.3960 |
| 1.3732 | 11970 | - | 0.3958 |
| 1.3743 | 11980 | - | 0.3954 |
| 1.3755 | 11990 | - | 0.3948 |
| 1.3766 | 12000 | 0.7003 | 0.3944 |
</details>
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.0+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| null |
Non_BioNLP
| 0.0447 | 390 | - | 1.5962 |
| 0.0459 | 400 | 2.7813 | 1.5848 |
| 0.0470 | 410 | - | 1.5735 |
| 0.0482 | 420 | - | 1.5620 |
| 0.0493 | 430 | - | 1.5495 |
| 0.0505 | 440 | - | 1.5375 |
| 0.0516 | 450 | - | 1.5256 |
| 0.0528 | 460 | - | 1.5133 |
| 0.0539 | 470 | - | 1.5012 |
| 0.0551 | 480 | - | 1.4892 |
| 0.0562 | 490 | - | 1.4769 |
| 0.0574 | 500 | 2.6308 | 1.4640 |
| 0.0585 | 510 | - | 1.4513 |
| 0.0597 | 520 | - | 1.4391 |
| 0.0608 | 530 | - | 1.4262 |
| 0.0619 | 540 | - | 1.4130 |
| 0.0631 | 550 | - | 1.3998 |
| 0.0642 | 560 | - | 1.3874 |
| 0.0654 | 570 | - | 1.3752 |
| 0.0665 | 580 | - | 1.3620 |
| 0.0677 | 590 | - | 1.3485 |
| 0.0688 | 600 | 2.4452 | 1.3350 |
| 0.0700 | 610 | - | 1.3213 |
| 0.0711 | 620 | - | 1.3088 |
| 0.0723 | 630 | - | 1.2965 |
| 0.0734 | 640 | - | 1.2839 |
| 0.0746 | 650 | - | 1.2713 |
| 0.0757 | 660 | - | 1.2592 |
| 0.0769 | 670 | - | 1.2466 |
| 0.0780 | 680 | - | 1.2332 |
| 0.0792 | 690 | - | 1.2203 |
| 0.0803 | 700 | 2.2626 | 1.2077 |
| 0.0815 | 710 | - | 1.1959 |
| 0.0826 | 720 | - | 1.1841 |
| 0.0837 | 730 | - | 1.1725 |
| 0.0849 | 740 | - | 1.1619 |
| 0.0860 | 750 | - | 1.1516 |
| 0.0872 | 760 | - | 1.1416 |
| 0.0883 | 770 | - | 1.1320 |
| 0.0895 | 780 | - | 1.1227 |
| 0.0906 | 790 | - | 1.1138 |
| 0.0918 | 800 | 2.0044 | 1.1053 |
| 0.0929 | 810 | - | 1.0965 |
| 0.0941 | 820 | - | 1.0879 |
| 0.0952 | 830 | - | 1.0796 |
| 0.0964 | 840 | - | 1.0718 |
| 0.0975 | 850 | - | 1.0644 |
| 0.0987 | 860 | - | 1.0564 |
| 0.0998 | 870 | - | 1.0490 |
| 0.1010 | 880 | - | 1.0417 |
| 0.1021 | 890 | - | 1.0354 |
| 0.1032 | 900 | 1.8763 | 1.0296 |
| 0.1044 | 910 | - | 1.0239 |
| 0.1055 | 920 | - | 1.0180 |
| 0.1067 | 930 | - | 1.0123 |
| 0.1078 | 940 | - | 1.0065 |
| 0.1090 | 950 | - | 1.0008 |
| 0.1101 | 960 | - | 0.9950 |
| 0.1113 | 970 | - | 0.9894 |
| 0.1124 | 980 | - | 0.9840 |
| 0.1136 | 990 | - | 0.9793 |
| 0.1147 | 1000 | 1.7287 | 0.9752 |
| 0.1159 | 1010 | - | 0.9706 |
| 0.1170 | 1020 | - | 0.9659 |
| 0.1182 | 1030 | - | 0.9615 |
| 0.1193 | 1040 | - | 0.9572 |
| 0.1205 | 1050 | - | 0.9531 |
| 0.1216 | 1060 | - | 0.9494 |
| 0.1227 | 1070 | - | 0.9456 |
| 0.1239 | 1080 | - | 0.9415 |
| 0.1250 | 1090 | - | 0.9377 |
| 0.1262 | 1100 | 1.6312 | 0.9339 |
| 0.1273 | 1110 | - | 0.9303 |
| 0.1285 | 1120 | - | 0.9267 |
| 0.1296 | 1130 | - | 0.9232 |
| 0.1308 | 1140 | - | 0.9197 |
| 0.1319 | 1150 | - | 0.9162 |
| 0.1331 | 1160 | - | 0.9128 |
| 0.1342 | 1170 | - | 0.9097 |
| 0.1354 | 1180 | - | 0.9069 |
| 0.1365 | 1190 | - | 0.9040 |
| 0.1377 | 1200 | 1.5316 | 0.9010 |
| 0.1388 | 1210 | - | 0.8979 |
| 0.1400 | 1220 | - | 0.8947 |
| 0.1411 | 1230 | - | 0.8915 |
| 0.1423 | 1240 | - | 0.8888 |
| 0.1434 | 1250 | - | 0.8861 |
| 0.1445 | 1260 | - | 0.8833 |
| 0.1457 | 1270 | - | 0.8806 |
| 0.1468 | 1280 | - | 0.8779 |
| 0.1480 | 1290 | - | 0.8748 |
| 0.1491 | 1300 | 1.4961 | 0.8718 |
| 0.1503 | 1310 | - | 0.8690 |
| 0.1514 | 1320 | - | 0.8664 |
| 0.1526 | 1330 | - | 0.8635 |
| 0.1537 | 1340 | - | 0.8603 |
| 0.1549 | 1350 | - | 0.8574 |
| 0.1560 | 1360 | - | 0.8545 |
| 0.1572 | 1370 | - | 0.8521 |
| 0.1583 | 1380 | - | 0.8497 |
| 0.1595 | 1390 | - | 0.8474 |
| 0.1606 | 1400 | 1.451 | 0.8453 |
| 0.1618 | 1410 | - | 0.8429 |
| 0.1629 | 1420 | - | 0.8404 |
| 0.1640 | 1430 | - | 0.8380 |
| 0.1652 | 1440 | - | 0.8357 |
| 0.1663 | 1450 | - | 0.8336 |
| 0.1675 | 1460 | - | 0.8312 |
| 0.1686 | 1470 | - | 0.8289 |
| 0.1698 | 1480 | - | 0.8262 |
| 0.1709 | 1490 | - | 0.8236 |
| 0.1721 | 1500 | 1.4177 | 0.8213 |
| 0.1732 | 1510 | - | 0.8189 |
| 0.1744 | 1520 | - | 0.8168 |
| 0.1755 | 1530 | - | 0.8147 |
| 0.1767 | 1540 | - | 0.8127 |
| 0.1778 | 1550 | - | 0.8107 |
| 0.1790 | 1560 | - | 0.8082 |
| 0.1801 | 1570 | - | 0.8059 |
| 0.1813 | 1580 | - | 0.8036 |
| 0.1824 | 1590 | - | 0.8015 |
| 0.1835 | 1600 | 1.3734 | 0.7993 |
| 0.1847 | 1610 | - | 0.7970 |
| 0.1858 | 1620 | - | 0.7948 |
| 0.1870 | 1630 | - | 0.7922 |
| 0.1881 | 1640 | - | 0.7900 |
| 0.1893 | 1650 | - | 0.7877 |
| 0.1904 | 1660 | - | 0.7852 |
| 0.1916 | 1670 | - | 0.7829 |
| 0.1927 | 1680 | - | 0.7804 |
| 0.1939 | 1690 | - | 0.7779 |
| 0.1950 | 1700 | 1.3327 | 0.7757 |
| 0.1962 | 1710 | - | 0.7738 |
| 0.1973 | 1720 | - | 0.7719 |
| 0.1985 | 1730 | - | 0.7700 |
| 0.1996 | 1740 | - | 0.7679 |
| 0.2008 | 1750 | - | 0.7658 |
| 0.2019 | 1760 | - | 0.7641 |
| 0.2031 | 1770 | - | 0.7621 |
| 0.2042 | 1780 | - | 0.7601 |
| 0.2053 | 1790 | - | 0.7580 |
| 0.2065 | 1800 | 1.2804 | 0.7558 |
| 0.2076 | 1810 | - | 0.7536 |
| 0.2088 | 1820 | - | 0.7514 |
| 0.2099 | 1830 | - | 0.7493 |
| 0.2111 | 1840 | - | 0.7473 |
| 0.2122 | 1850 | - | 0.7451 |
| 0.2134 | 1860 | - | 0.7429 |
| 0.2145 | 1870 | - | 0.7408 |
| 0.2157 | 1880 | - | 0.7389 |
| 0.2168 | 1890 | - | 0.7368 |
| 0.2180 | 1900 | 1.2255 | 0.7349 |
| 0.2191 | 1910 | - | 0.7328 |
| 0.2203 | 1920 | - | 0.7310 |
| 0.2214 | 1930 | - | 0.7293 |
| 0.2226 | 1940 | - | 0.7277 |
| 0.2237 | 1950 | - | 0.7259 |
| 0.2248 | 1960 | - | 0.7240 |
| 0.2260 | 1970 | - | 0.7221 |
| 0.2271 | 1980 | - | 0.7203 |
| 0.2283 | 1990 | - | 0.7184 |
| 0.2294 | 2000 | 1.2635 | 0.7165 |
| 0.2306 | 2010 | - | 0.7150 |
| 0.2317 | 2020 | - | 0.7135 |
| 0.2329 | 2030 | - | 0.7117 |
| 0.2340 | 2040 | - | 0.7099 |
| 0.2352 | 2050 | - | 0.7084 |
| 0.2363 | 2060 | - | 0.7068 |
| 0.2375 | 2070 | - | 0.7054 |
| 0.2386 | 2080 | - | 0.7037 |
| 0.2398 | 2090 | - | 0.7023 |
| 0.2409 | 2100 | 1.1912 | 0.7009 |
| 0.2421 | 2110 | - | 0.6991 |
| 0.2432 | 2120 | - | 0.6974 |
| 0.2444 | 2130 | - | 0.6962 |
| 0.2455 | 2140 | - | 0.6950 |
| 0.2466 | 2150 | - | 0.6938 |
| 0.2478 | 2160 | - | 0.6922 |
| 0.2489 | 2170 | - | 0.6909 |
| 0.2501 | 2180 | - | 0.6897 |
| 0.2512 | 2190 | - | 0.6884 |
| 0.2524 | 2200 | 1.2144 | 0.6868 |
| 0.2535 | 2210 | - | 0.6856 |
| 0.2547 | 2220 | - | 0.6843 |
| 0.2558 | 2230 | - | 0.6829 |
| 0.2570 | 2240 | - | 0.6817 |
| 0.2581 | 2250 | - | 0.6804 |
| 0.2593 | 2260 | - | 0.6789 |
| 0.2604 | 2270 | - | 0.6775 |
| 0.2616 | 2280 | - | 0.6763 |
| 0.2627 | 2290 | - | 0.6751 |
| 0.2639 | 2300 | 1.1498 | 0.6739 |
| 0.2650 | 2310 | - | 0.6725 |
| 0.2661 | 2320 | - | 0.6711 |
| 0.2673 | 2330 | - | 0.6698 |
| 0.2684 | 2340 | - | 0.6684 |
| 0.2696 | 2350 | - | 0.6666 |
| 0.2707 | 2360 | - | 0.6653 |
| 0.2719 | 2370 | - | 0.6638 |
| 0.2730 | 2380 | - | 0.6621 |
| 0.2742 | 2390 | - | 0.6609 |
| 0.2753 | 2400 | 1.1446 | 0.6596 |
| 0.2765 | 2410 | - | 0.6582 |
| 0.2776 | 2420 | - | 0.6568 |
| 0.2788 | 2430 | - | 0.6553 |
| 0.2799 | 2440 | - | 0.6541 |
| 0.2811 | 2450 | - | 0.6527 |
| 0.2822 | 2460 | - | 0.6513 |
| 0.2834 | 2470 | - | 0.6496 |
| 0.2845 | 2480 | - | 0.6483 |
| 0.2856 | 2490 | - | 0.6475 |
| 0.2868 | 2500 | 1.1309 | 0.6465 |
| 0.2879 | 2510 | - | 0.6455 |
| 0.2891 | 2520 | - | 0.6447 |
| 0.2902 | 2530 | - | 0.6437 |
| 0.2914 | 2540 | - | 0.6428 |
| 0.2925 | 2550 | - | 0.6415 |
| 0.2937 | 2560 | - | 0.6403 |
| 0.2948 | 2570 | - | 0.6392 |
| 0.2960 | 2580 | - | 0.6381 |
| 0.2971 | 2590 | - | 0.6371 |
| 0.2983 | 2600 | 1.1006 | 0.6358 |
| 0.2994 | 2610 | - | 0.6348 |
| 0.3006 | 2620 | - | 0.6340 |
| 0.3017 | 2630 | - | 0.6330 |
| 0.3029 | 2640 | - | 0.6319 |
| 0.3040 | 2650 | - | 0.6308 |
| 0.3052 | 2660 | - | 0.6300 |
| 0.3063 | 2670 | - | 0.6291 |
| 0.3074 | 2680 | - | 0.6280 |
| 0.3086 | 2690 | - | 0.6268 |
| 0.3097 | 2700 | 1.0772 | 0.6254 |
| 0.3109 | 2710 | - | 0.6243 |
| 0.3120 | 2720 | - | 0.6232 |
| 0.3132 | 2730 | - | 0.6224 |
| 0.3143 | 2740 | - | 0.6215 |
| 0.3155 | 2750 | - | 0.6205 |
| 0.3166 | 2760 | - | 0.6194 |
| 0.3178 | 2770 | - | 0.6183 |
| 0.3189 | 2780 | - | 0.6171 |
| 0.3201 | 2790 | - | 0.6160 |
| 0.3212 | 2800 | 1.0648 | 0.6153 |
| 0.3224 | 2810 | - | 0.6141 |
| 0.3235 | 2820 | - | 0.6129 |
| 0.3247 | 2830 | - | 0.6119 |
| 0.3258 | 2840 | - | 0.6109 |
| 0.3269 | 2850 | - | 0.6099 |
| 0.3281 | 2860 | - | 0.6088 |
| 0.3292 | 2870 | - | 0.6079 |
| 0.3304 | 2880 | - | 0.6073 |
| 0.3315 | 2890 | - | 0.6063 |
| 0.3327 | 2900 | 1.0398 | 0.6054 |
| 0.3338 | 2910 | - | 0.6044 |
| 0.3350 | 2920 | - | 0.6033 |
| 0.3361 | 2930 | - | 0.6022 |
| 0.3373 | 2940 | - | 0.6012 |
| 0.3384 | 2950 | - | 0.6003 |
| 0.3396 | 2960 | - | 0.5993 |
| 0.3407 | 2970 | - | 0.5986 |
| 0.3419 | 2980 | - | 0.5978 |
| 0.3430 | 2990 | - | 0.5967 |
| 0.3442 | 3000 | 1.0256 | 0.5959 |
| 0.3453 | 3010 | - | 0.5947 |
| 0.3464 | 3020 | - | 0.5937 |
| 0.3476 | 3030 | - | 0.5929 |
| 0.3487 | 3040 | - | 0.5920 |
| 0.3499 | 3050 | - | 0.5908 |
| 0.3510 | 3060 | - | 0.5897 |
| 0.3522 | 3070 | - | 0.5888 |
| 0.3533 | 3080 | - | 0.5882 |
| 0.3545 | 3090 | - | 0.5874 |
| 0.3556 | 3100 | 1.0489 | 0.5868 |
| 0.3568 | 3110 | - | 0.5860 |
| 0.3579 | 3120 | - | 0.5854 |
| 0.3591 | 3130 | - | 0.5839 |
| 0.3602 | 3140 | - | 0.5830 |
| 0.3614 | 3150 | - | 0.5822 |
| 0.3625 | 3160 | - | 0.5814 |
| 0.3637 | 3170 | - | 0.5808 |
| 0.3648 | 3180 | - | 0.5802 |
| 0.3660 | 3190 | - | 0.5794 |
| 0.3671 | 3200 | 1.038 | 0.5788 |
| 0.3682 | 3210 | - | 0.5778 |
| 0.3694 | 3220 | - | 0.5770 |
| 0.3705 | 3230 | - | 0.5763 |
| 0.3717 | 3240 | - | 0.5752 |
| 0.3728 | 3250 | - | 0.5745 |
| 0.3740 | 3260 | - | 0.5737 |
| 0.3751 | 3270 | - | 0.5728 |
| 0.3763 | 3280 | - | 0.5720 |
| 0.3774 | 3290 | - | 0.5713 |
| 0.3786 | 3300 | 1.0058 | 0.5707 |
| 0.3797 | 3310 | - | 0.5700 |
| 0.3809 | 3320 | - | 0.5690 |
| 0.3820 | 3330 | - | 0.5681 |
| 0.3832 | 3340 | - | 0.5673 |
| 0.3843 | 3350 | - | 0.5669 |
| 0.3855 | 3360 | - | 0.5667 |
| 0.3866 | 3370 | - | 0.5665 |
| 0.3877 | 3380 | - | 0.5659 |
| 0.3889 | 3390 | - | 0.5650 |
| 0.3900 | 3400 | 1.0413 | 0.5645 |
| 0.3912 | 3410 | - | 0.5641 |
| 0.3923 | 3420 | - | 0.5635 |
| 0.3935 | 3430 | - | 0.5629 |
| 0.3946 | 3440 | - | 0.5622 |
| 0.3958 | 3450 | - | 0.5617 |
| 0.3969 | 3460 | - | 0.5614 |
| 0.3981 | 3470 | - | 0.5607 |
| 0.3992 | 3480 | - | 0.5603 |
| 0.4004 | 3490 | - | 0.5598 |
| 0.4015 | 3500 | 0.938 | 0.5596 |
| 0.4027 | 3510 | - | 0.5589 |
| 0.4038 | 3520 | - | 0.5581 |
| 0.4050 | 3530 | - | 0.5571 |
| 0.4061 | 3540 | - | 0.5563 |
| 0.4073 | 3550 | - | 0.5557 |
| 0.4084 | 3560 | - | 0.5551 |
| 0.4095 | 3570 | - | 0.5546 |
| 0.4107 | 3580 | - | 0.5541 |
| 0.4118 | 3590 | - | 0.5535 |
| 0.4130 | 3600 | 0.955 | 0.5528 |
| 0.4141 | 3610 | - | 0.5522 |
| 0.4153 | 3620 | - | 0.5516 |
| 0.4164 | 3630 | - | 0.5509 |
| 0.4176 | 3640 | - | 0.5503 |
| 0.4187 | 3650 | - | 0.5495 |
| 0.4199 | 3660 | - | 0.5490 |
| 0.4210 | 3670 | - | 0.5481 |
| 0.4222 | 3680 | - | 0.5475 |
| 0.4233 | 3690 | - | 0.5467 |
| 0.4245 | 3700 | 0.9387 | 0.5463 |
| 0.4256 | 3710 | - | 0.5459 |
| 0.4268 | 3720 | - | 0.5452 |
| 0.4279 | 3730 | - | 0.5448 |
| 0.4290 | 3740 | - | 0.5443 |
| 0.4302 | 3750 | - | 0.5440 |
| 0.4313 | 3760 | - | 0.5435 |
| 0.4325 | 3770 | - | 0.5430 |
| 0.4336 | 3780 | - | 0.5423 |
| 0.4348 | 3790 | - | 0.5418 |
| 0.4359 | 3800 | 0.9672 | 0.5415 |
| 0.4371 | 3810 | - | 0.5413 |
| 0.4382 | 3820 | - | 0.5410 |
| 0.4394 | 3830 | - | 0.5406 |
| 0.4405 | 3840 | - | 0.5403 |
| 0.4417 | 3850 | - | 0.5397 |
| 0.4428 | 3860 | - | 0.5394 |
| 0.4440 | 3870 | - | 0.5386 |
| 0.4451 | 3880 | - | 0.5378 |
| 0.4463 | 3890 | - | 0.5370 |
| 0.4474 | 3900 | 0.926 | 0.5360 |
| 0.4485 | 3910 | - | 0.5351 |
| 0.4497 | 3920 | - | 0.5346 |
| 0.4508 | 3930 | - | 0.5343 |
| 0.4520 | 3940 | - | 0.5339 |
| 0.4531 | 3950 | - | 0.5337 |
| 0.4543 | 3960 | - | 0.5334 |
| 0.4554 | 3970 | - | 0.5330 |
| 0.4566 | 3980 | - | 0.5327 |
| 0.4577 | 3990 | - | 0.5324 |
| 0.4589 | 4000 | 0.867 | 0.5319 |
| 0.4600 | 4010 | - | 0.5313 |
| 0.4612 | 4020 | - | 0.5308 |
| 0.4623 | 4030 | - | 0.5300 |
| 0.4635 | 4040 | - | 0.5293 |
| 0.4646 | 4050 | - | 0.5287 |
| 0.4658 | 4060 | - | 0.5284 |
| 0.4669 | 4070 | - | 0.5281 |
| 0.4681 | 4080 | - | 0.5277 |
| 0.4692 | 4090 | - | 0.5272 |
| 0.4703 | 4100 | 0.916 | 0.5267 |
| 0.4715 | 4110 | - | 0.5260 |
| 0.4726 | 4120 | - | 0.5252 |
| 0.4738 | 4130 | - | 0.5246 |
| 0.4749 | 4140 | - | 0.5239 |
| 0.4761 | 4150 | - | 0.5232 |
| 0.4772 | 4160 | - | 0.5225 |
| 0.4784 | 4170 | - | 0.5221 |
| 0.4795 | 4180 | - | 0.5216 |
| 0.4807 | 4190 | - | 0.5211 |
| 0.4818 | 4200 | 0.9667 | 0.5206 |
| 0.4830 | 4210 | - | 0.5204 |
| 0.4841 | 4220 | - | 0.5200 |
| 0.4853 | 4230 | - | 0.5192 |
| 0.4864 | 4240 | - | 0.5187 |
| 0.4876 | 4250 | - | 0.5185 |
| 0.4887 | 4260 | - | 0.5179 |
| 0.4898 | 4270 | - | 0.5173 |
| 0.4910 | 4280 | - | 0.5170 |
| 0.4921 | 4290 | - | 0.5165 |
| 0.4933 | 4300 | 0.9276 | 0.5160 |
| 0.4944 | 4310 | - | 0.5154 |
| 0.4956 | 4320 | - | 0.5150 |
| 0.4967 | 4330 | - | 0.5144 |
| 0.4979 | 4340 | - | 0.5141 |
| 0.4990 | 4350 | - | 0.5139 |
| 0.5002 | 4360 | - | 0.5138 |
| 0.5013 | 4370 | - | 0.5136 |
| 0.5025 | 4380 | - | 0.5133 |
| 0.5036 | 4390 | - | 0.5129 |
| 0.5048 | 4400 | 0.9331 | 0.5126 |
| 0.5059 | 4410 | - | 0.5123 |
| 0.5071 | 4420 | - | 0.5117 |
| 0.5082 | 4430 | - | 0.5113 |
| 0.5093 | 4440 | - | 0.5108 |
| 0.5105 | 4450 | - | 0.5106 |
| 0.5116 | 4460 | - | 0.5106 |
| 0.5128 | 4470 | - | 0.5106 |
| 0.5139 | 4480 | - | 0.5104 |
| 0.5151 | 4490 | - | 0.5102 |
| 0.5162 | 4500 | 0.907 | 0.5097 |
| 0.5174 | 4510 | - | 0.5092 |
| 0.5185 | 4520 | - | 0.5086 |
| 0.5197 | 4530 | - | 0.5082 |
| 0.5208 | 4540 | - | 0.5079 |
| 0.5220 | 4550 | - | 0.5075 |
| 0.5231 | 4560 | - | 0.5071 |
| 0.5243 | 4570 | - | 0.5067 |
| 0.5254 | 4580 | - | 0.5066 |
| 0.5266 | 4590 | - | 0.5062 |
| 0.5277 | 4600 | 0.913 | 0.5059 |
| 0.5289 | 4610 | - | 0.5056 |
| 0.5300 | 4620 | - | 0.5052 |
| 0.5311 | 4630 | - | 0.5046 |
| 0.5323 | 4640 | - | 0.5039 |
| 0.5334 | 4650 | - | 0.5033 |
| 0.5346 | 4660 | - | 0.5030 |
| 0.5357 | 4670 | - | 0.5028 |
| 0.5369 | 4680 | - | 0.5027 |
| 0.5380 | 4690 | - | 0.5023 |
| 0.5392 | 4700 | 0.9047 | 0.5020 |
| 0.5403 | 4710 | - | 0.5018 |
| 0.5415 | 4720 | - | 0.5015 |
| 0.5426 | 4730 | - | 0.5009 |
| 0.5438 | 4740 | - | 0.5003 |
| 0.5449 | 4750 | - | 0.4997 |
| 0.5461 | 4760 | - | 0.4991 |
| 0.5472 | 4770 | - | 0.4984 |
| 0.5484 | 4780 | - | 0.4980 |
| 0.5495 | 4790 | - | 0.4980 |
| 0.5506 | 4800 | 0.887 | 0.4979 |
| 0.5518 | 4810 | - | 0.4975 |
| 0.5529 | 4820 | - | 0.4973 |
| 0.5541 | 4830 | - | 0.4969 |
| 0.5552 | 4840 | - | 0.4966 |
| 0.5564 | 4850 | - | 0.4964 |
| 0.5575 | 4860 | - | 0.4964 |
| 0.5587 | 4870 | - | 0.4960 |
| 0.5598 | 4880 | - | 0.4957 |
| 0.5610 | 4890 | - | 0.4955 |
| 0.5621 | 4900 | 0.8645 | 0.4952 |
| 0.5633 | 4910 | - | 0.4950 |
| 0.5644 | 4920 | - | 0.4952 |
| 0.5656 | 4930 | - | 0.4949 |
| 0.5667 | 4940 | - | 0.4943 |
| 0.5679 | 4950 | - | 0.4938 |
| 0.5690 | 4960 | - | 0.4936 |
| 0.5702 | 4970 | - | 0.4933 |
| 0.5713 | 4980 | - | 0.4931 |
| 0.5724 | 4990 | - | 0.4929 |
| 0.5736 | 5000 | 0.8348 | 0.4924 |
| 0.5747 | 5010 | - | 0.4921 |
| 0.5759 | 5020 | - | 0.4915 |
| 0.5770 | 5030 | - | 0.4911 |
| 0.5782 | 5040 | - | 0.4909 |
| 0.5793 | 5050 | - | 0.4905 |
| 0.5805 | 5060 | - | 0.4900 |
| 0.5816 | 5070 | - | 0.4892 |
| 0.5828 | 5080 | - | 0.4886 |
| 0.5839 | 5090 | - | 0.4883 |
| 0.5851 | 5100 | 0.871 | 0.4879 |
| 0.5862 | 5110 | - | 0.4877 |
| 0.5874 | 5120 | - | 0.4874 |
| 0.5885 | 5130 | - | 0.4870 |
| 0.5897 | 5140 | - | 0.4867 |
| 0.5908 | 5150 | - | 0.4864 |
| 0.5919 | 5160 | - | 0.4862 |
| 0.5931 | 5170 | - | 0.4860 |
| 0.5942 | 5180 | - | 0.4857 |
| 0.5954 | 5190 | - | 0.4855 |
| 0.5965 | 5200 | 0.8522 | 0.4850 |
| 0.5977 | 5210 | - | 0.4846 |
| 0.5988 | 5220 | - | 0.4844 |
| 0.6000 | 5230 | - | 0.4842 |
| 0.6011 | 5240 | - | 0.4837 |
| 0.6023 | 5250 | - | 0.4835 |
| 0.6034 | 5260 | - | 0.4831 |
| 0.6046 | 5270 | - | 0.4826 |
| 0.6057 | 5280 | - | 0.4822 |
| 0.6069 | 5290 | - | 0.4822 |
| 0.6080 | 5300 | 0.869 | 0.4820 |
| 0.6092 | 5310 | - | 0.4818 |
| 0.6103 | 5320 | - | 0.4819 |
| 0.6114 | 5330 | - | 0.4819 |
| 0.6126 | 5340 | - | 0.4815 |
| 0.6137 | 5350 | - | 0.4813 |
| 0.6149 | 5360 | - | 0.4812 |
| 0.6160 | 5370 | - | 0.4810 |
| 0.6172 | 5380 | - | 0.4809 |
| 0.6183 | 5390 | - | 0.4806 |
| 0.6195 | 5400 | 0.8548 | 0.4805 |
| 0.6206 | 5410 | - | 0.4800 |
| 0.6218 | 5420 | - | 0.4798 |
| 0.6229 | 5430 | - | 0.4795 |
| 0.6241 | 5440 | - | 0.4792 |
| 0.6252 | 5450 | - | 0.4790 |
| 0.6264 | 5460 | - | 0.4790 |
| 0.6275 | 5470 | - | 0.4791 |
| 0.6287 | 5480 | - | 0.4794 |
| 0.6298 | 5490 | - | 0.4792 |
| 0.6310 | 5500 | 0.8366 | 0.4790 |
| 0.6321 | 5510 | - | 0.4786 |
| 0.6332 | 5520 | - | 0.4780 |
| 0.6344 | 5530 | - | 0.4773 |
| 0.6355 | 5540 | - | 0.4768 |
| 0.6367 | 5550 | - | 0.4767 |
| 0.6378 | 5560 | - | 0.4765 |
| 0.6390 | 5570 | - | 0.4765 |
| 0.6401 | 5580 | - | 0.4763 |
| 0.6413 | 5590 | - | 0.4760 |
| 0.6424 | 5600 | 0.8696 | 0.4757 |
| 0.6436 | 5610 | - | 0.4754 |
| 0.6447 | 5620 | - | 0.4752 |
| 0.6459 | 5630 | - | 0.4751 |
| 0.6470 | 5640 | - | 0.4747 |
| 0.6482 | 5650 | - | 0.4747 |
| 0.6493 | 5660 | - | 0.4742 |
| 0.6505 | 5670 | - | 0.4740 |
| 0.6516 | 5680 | - | 0.4736 |
| 0.6527 | 5690 | - | 0.4730 |
| 0.6539 | 5700 | 0.8302 | 0.4725 |
| 0.6550 | 5710 | - | 0.4723 |
| 0.6562 | 5720 | - | 0.4720 |
| 0.6573 | 5730 | - | 0.4718 |
| 0.6585 | 5740 | - | 0.4715 |
| 0.6596 | 5750 | - | 0.4714 |
| 0.6608 | 5760 | - | 0.4711 |
| 0.6619 | 5770 | - | 0.4707 |
| 0.6631 | 5780 | - | 0.4707 |
| 0.6642 | 5790 | - | 0.4703 |
| 0.6654 | 5800 | 0.8128 | 0.4703 |
| 0.6665 | 5810 | - | 0.4701 |
| 0.6677 | 5820 | - | 0.4699 |
| 0.6688 | 5830 | - | 0.4697 |
| 0.6700 | 5840 | - | 0.4698 |
| 0.6711 | 5850 | - | 0.4695 |
| 0.6722 | 5860 | - | 0.4691 |
| 0.6734 | 5870 | - | 0.4689 |
| 0.6745 | 5880 | - | 0.4689 |
| 0.6757 | 5890 | - | 0.4688 |
| 0.6768 | 5900 | 0.8437 | 0.4683 |
| 0.6780 | 5910 | - | 0.4683 |
| 0.6791 | 5920 | - | 0.4681 |
| 0.6803 | 5930 | - | 0.4678 |
| 0.6814 | 5940 | - | 0.4677 |
| 0.6826 | 5950 | - | 0.4676 |
| 0.6837 | 5960 | - | 0.4673 |
| 0.6849 | 5970 | - | 0.4668 |
| 0.6860 | 5980 | - | 0.4667 |
| 0.6872 | 5990 | - | 0.4661 |
| 0.6883 | 6000 | 0.7774 | 0.4657 |
| 0.6895 | 6010 | - | 0.4654 |
| 0.6906 | 6020 | - | 0.4650 |
| 0.6918 | 6030 | - | 0.4648 |
| 0.6929 | 6040 | - | 0.4646 |
| 0.6940 | 6050 | - | 0.4644 |
| 0.6952 | 6060 | - | 0.4643 |
| 0.6963 | 6070 | - | 0.4641 |
| 0.6975 | 6080 | - | 0.4640 |
| 0.6986 | 6090 | - | 0.4638 |
| 0.6998 | 6100 | 0.834 | 0.4637 |
| 0.7009 | 6110 | - | 0.4633 |
| 0.7021 | 6120 | - | 0.4632 |
| 0.7032 | 6130 | - | 0.4631 |
| 0.7044 | 6140 | - | 0.4628 |
| 0.7055 | 6150 | - | 0.4627 |
| 0.7067 | 6160 | - | 0.4623 |
| 0.7078 | 6170 | - | 0.4617 |
| 0.7090 | 6180 | - | 0.4615 |
| 0.7101 | 6190 | - | 0.4614 |
| 0.7113 | 6200 | 0.8118 | 0.4612 |
| 0.7124 | 6210 | - | 0.4612 |
| 0.7135 | 6220 | - | 0.4612 |
| 0.7147 | 6230 | - | 0.4610 |
| 0.7158 | 6240 | - | 0.4609 |
| 0.7170 | 6250 | - | 0.4610 |
| 0.7181 | 6260 | - | 0.4611 |
| 0.7193 | 6270 | - | 0.4607 |
| 0.7204 | 6280 | - | 0.4599 |
| 0.7216 | 6290 | - | 0.4598 |
| 0.7227 | 6300 | 0.7884 | 0.4600 |
| 0.7239 | 6310 | - | 0.4599 |
| 0.7250 | 6320 | - | 0.4600 |
| 0.7262 | 6330 | - | 0.4601 |
| 0.7273 | 6340 | - | 0.4603 |
| 0.7285 | 6350 | - | 0.4603 |
| 0.7296 | 6360 | - | 0.4598 |
| 0.7308 | 6370 | - | 0.4597 |
| 0.7319 | 6380 | - | 0.4596 |
| 0.7331 | 6390 | - | 0.4594 |
| 0.7342 | 6400 | 0.8092 | 0.4590 |
| 0.7353 | 6410 | - | 0.4588 |
| 0.7365 | 6420 | - | 0.4585 |
| 0.7376 | 6430 | - | 0.4584 |
| 0.7388 | 6440 | - | 0.4580 |
| 0.7399 | 6450 | - | 0.4574 |
| 0.7411 | 6460 | - | 0.4570 |
| 0.7422 | 6470 | - | 0.4566 |
| 0.7434 | 6480 | - | 0.4563 |
| 0.7445 | 6490 | - | 0.4560 |
| 0.7457 | 6500 | 0.8195 | 0.4557 |
| 0.7468 | 6510 | - | 0.4556 |
| 0.7480 | 6520 | - | 0.4554 |
| 0.7491 | 6530 | - | 0.4551 |
| 0.7503 | 6540 | - | 0.4548 |
| 0.7514 | 6550 | - | 0.4545 |
| 0.7526 | 6560 | - | 0.4543 |
| 0.7537 | 6570 | - | 0.4541 |
| 0.7548 | 6580 | - | 0.4540 |
| 0.7560 | 6590 | - | 0.4538 |
| 0.7571 | 6600 | 0.8163 | 0.4535 |
| 0.7583 | 6610 | - | 0.4533 |
| 0.7594 | 6620 | - | 0.4536 |
| 0.7606 | 6630 | - | 0.4535 |
| 0.7617 | 6640 | - | 0.4533 |
| 0.7629 | 6650 | - | 0.4532 |
| 0.7640 | 6660 | - | 0.4531 |
| 0.7652 | 6670 | - | 0.4531 |
| 0.7663 | 6680 | - | 0.4530 |
| 0.7675 | 6690 | - | 0.4528 |
| 0.7686 | 6700 | 0.8091 | 0.4527 |
| 0.7698 | 6710 | - | 0.4527 |
| 0.7709 | 6720 | - | 0.4526 |
| 0.7721 | 6730 | - | 0.4525 |
| 0.7732 | 6740 | - | 0.4524 |
| 0.7743 | 6750 | - | 0.4521 |
| 0.7755 | 6760 | - | 0.4517 |
| 0.7766 | 6770 | - | 0.4514 |
| 0.7778 | 6780 | - | 0.4512 |
| 0.7789 | 6790 | - | 0.4514 |
| 0.7801 | 6800 | 0.8098 | 0.4515 |
| 0.7812 | 6810 | - | 0.4514 |
| 0.7824 | 6820 | - | 0.4511 |
| 0.7835 | 6830 | - | 0.4507 |
| 0.7847 | 6840 | - | 0.4505 |
| 0.7858 | 6850 | - | 0.4504 |
| 0.7870 | 6860 | - | 0.4503 |
| 0.7881 | 6870 | - | 0.4500 |
| 0.7893 | 6880 | - | 0.4498 |
| 0.7904 | 6890 | - | 0.4495 |
| 0.7916 | 6900 | 0.7857 | 0.4491 |
| 0.7927 | 6910 | - | 0.4490 |
| 0.7939 | 6920 | - | 0.4488 |
| 0.7950 | 6930 | - | 0.4488 |
| 0.7961 | 6940 | - | 0.4488 |
| 0.7973 | 6950 | - | 0.4487 |
| 0.7984 | 6960 | - | 0.4484 |
| 0.7996 | 6970 | - | 0.4482 |
| 0.8007 | 6980 | - | 0.4483 |
| 0.8019 | 6990 | - | 0.4481 |
| 0.8030 | 7000 | 0.7817 | 0.4477 |
| 0.8042 | 7010 | - | 0.4476 |
| 0.8053 | 7020 | - | 0.4471 |
| 0.8065 | 7030 | - | 0.4469 |
| 0.8076 | 7040 | - | 0.4468 |
| 0.8088 | 7050 | - | 0.4465 |
| 0.8099 | 7060 | - | 0.4460 |
| 0.8111 | 7070 | - | 0.4458 |
| 0.8122 | 7080 | - | 0.4458 |
| 0.8134 | 7090 | - | 0.4454 |
| 0.8145 | 7100 | 0.779 | 0.4452 |
| 0.8156 | 7110 | - | 0.4449 |
| 0.8168 | 7120 | - | 0.4448 |
| 0.8179 | 7130 | - | 0.4446 |
| 0.8191 | 7140 | - | 0.4442 |
| 0.8202 | 7150 | - | 0.4442 |
| 0.8214 | 7160 | - | 0.4441 |
| 0.8225 | 7170 | - | 0.4440 |
| 0.8237 | 7180 | - | 0.4437 |
| 0.8248 | 7190 | - | 0.4434 |
| 0.8260 | 7200 | 0.7807 | 0.4434 |
| 0.8271 | 7210 | - | 0.4435 |
| 0.8283 | 7220 | - | 0.4433 |
| 0.8294 | 7230 | - | 0.4431 |
| 0.8306 | 7240 | - | 0.4430 |
| 0.8317 | 7250 | - | 0.4428 |
| 0.8329 | 7260 | - | 0.4426 |
| 0.8340 | 7270 | - | 0.4424 |
| 0.8351 | 7280 | - | 0.4428 |
| 0.8363 | 7290 | - | 0.4426 |
| 0.8374 | 7300 | 0.7724 | 0.4423 |
| 0.8386 | 7310 | - | 0.4419 |
| 0.8397 | 7320 | - | 0.4418 |
| 0.8409 | 7330 | - | 0.4417 |
| 0.8420 | 7340 | - | 0.4415 |
| 0.8432 | 7350 | - | 0.4413 |
| 0.8443 | 7360 | - | 0.4409 |
| 0.8455 | 7370 | - | 0.4406 |
| 0.8466 | 7380 | - | 0.4405 |
| 0.8478 | 7390 | - | 0.4400 |
| 0.8489 | 7400 | 0.7898 | 0.4393 |
| 0.8501 | 7410 | - | 0.4389 |
| 0.8512 | 7420 | - | 0.4384 |
| 0.8524 | 7430 | - | 0.4381 |
| 0.8535 | 7440 | - | 0.4380 |
| 0.8547 | 7450 | - | 0.4380 |
| 0.8558 | 7460 | - | 0.4379 |
| 0.8569 | 7470 | - | 0.4377 |
| 0.8581 | 7480 | - | 0.4377 |
| 0.8592 | 7490 | - | 0.4376 |
| 0.8604 | 7500 | 0.8009 | 0.4375 |
| 0.8615 | 7510 | - | 0.4371 |
| 0.8627 | 7520 | - | 0.4369 |
| 0.8638 | 7530 | - | 0.4365 |
| 0.8650 | 7540 | - | 0.4362 |
| 0.8661 | 7550 | - | 0.4359 |
| 0.8673 | 7560 | - | 0.4357 |
| 0.8684 | 7570 | - | 0.4355 |
| 0.8696 | 7580 | - | 0.4351 |
| 0.8707 | 7590 | - | 0.4347 |
| 0.8719 | 7600 | 0.7847 | 0.4346 |
| 0.8730 | 7610 | - | 0.4346 |
| 0.8742 | 7620 | - | 0.4344 |
| 0.8753 | 7630 | - | 0.4343 |
| 0.8764 | 7640 | - | 0.4338 |
| 0.8776 | 7650 | - | 0.4336 |
| 0.8787 | 7660 | - | 0.4332 |
| 0.8799 | 7670 | - | 0.4331 |
| 0.8810 | 7680 | - | 0.4329 |
| 0.8822 | 7690 | - | 0.4326 |
| 0.8833 | 7700 | 0.7668 | 0.4324 |
| 0.8845 | 7710 | - | 0.4325 |
| 0.8856 | 7720 | - | 0.4327 |
| 0.8868 | 7730 | - | 0.4329 |
| 0.8879 | 7740 | - | 0.4328 |
| 0.8891 | 7750 | - | 0.4325 |
| 0.8902 | 7760 | - | 0.4325 |
| 0.8914 | 7770 | - | 0.4326 |
| 0.8925 | 7780 | - | 0.4324 |
| 0.8937 | 7790 | - | 0.4322 |
| 0.8948 | 7800 | 0.7987 | 0.4320 |
| 0.8960 | 7810 | - | 0.4319 |
| 0.8971 | 7820 | - | 0.4318 |
| 0.8982 | 7830 | - | 0.4315 |
| 0.8994 | 7840 | - | 0.4312 |
| 0.9005 | 7850 | - | 0.4308 |
| 0.9017 | 7860 | - | 0.4308 |
| 0.9028 | 7870 | - | 0.4309 |
| 0.9040 | 7880 | - | 0.4306 |
| 0.9051 | 7890 | - | 0.4305 |
| 0.9063 | 7900 | 0.7691 | 0.4305 |
| 0.9074 | 7910 | - | 0.4305 |
| 0.9086 | 7920 | - | 0.4308 |
| 0.9097 | 7930 | - | 0.4309 |
| 0.9109 | 7940 | - | 0.4309 |
| 0.9120 | 7950 | - | 0.4305 |
| 0.9132 | 7960 | - | 0.4297 |
| 0.9143 | 7970 | - | 0.4294 |
| 0.9155 | 7980 | - | 0.4292 |
| 0.9166 | 7990 | - | 0.4292 |
| 0.9177 | 8000 | 0.7828 | 0.4289 |
| 0.9189 | 8010 | - | 0.4288 |
| 0.9200 | 8020 | - | 0.4289 |
| 0.9212 | 8030 | - | 0.4285 |
| 0.9223 | 8040 | - | 0.4286 |
| 0.9235 | 8050 | - | 0.4289 |
| 0.9246 | 8060 | - | 0.4288 |
| 0.9258 | 8070 | - | 0.4290 |
| 0.9269 | 8080 | - | 0.4289 |
| 0.9281 | 8090 | - | 0.4287 |
| 0.9292 | 8100 | 0.7544 | 0.4288 |
| 0.9304 | 8110 | - | 0.4284 |
| 0.9315 | 8120 | - | 0.4287 |
| 0.9327 | 8130 | - | 0.4289 |
| 0.9338 | 8140 | - | 0.4293 |
| 0.9350 | 8150 | - | 0.4292 |
| 0.9361 | 8160 | - | 0.4289 |
| 0.9372 | 8170 | - | 0.4286 |
| 0.9384 | 8180 | - | 0.4280 |
| 0.9395 | 8190 | - | 0.4281 |
| 0.9407 | 8200 | 0.7502 | 0.4281 |
| 0.9418 | 8210 | - | 0.4278 |
| 0.9430 | 8220 | - | 0.4276 |
| 0.9441 | 8230 | - | 0.4274 |
| 0.9453 | 8240 | - | 0.4270 |
| 0.9464 | 8250 | - | 0.4267 |
| 0.9476 | 8260 | - | 0.4263 |
| 0.9487 | 8270 | - | 0.4261 |
| 0.9499 | 8280 | - | 0.4257 |
| 0.9510 | 8290 | - | 0.4254 |
| 0.9522 | 8300 | 0.7818 | 0.4255 |
| 0.9533 | 8310 | - | 0.4255 |
| 0.9545 | 8320 | - | 0.4254 |
| 0.9556 | 8330 | - | 0.4252 |
| 0.9568 | 8340 | - | 0.4249 |
| 0.9579 | 8350 | - | 0.4249 |
| 0.9590 | 8360 | - | 0.4248 |
| 0.9602 | 8370 | - | 0.4249 |
| 0.9613 | 8380 | - | 0.4248 |
| 0.9625 | 8390 | - | 0.4246 |
| 0.9636 | 8400 | 0.7606 | 0.4243 |
| 0.9648 | 8410 | - | 0.4242 |
| 0.9659 | 8420 | - | 0.4240 |
| 0.9671 | 8430 | - | 0.4239 |
| 0.9682 | 8440 | - | 0.4238 |
| 0.9694 | 8450 | - | 0.4238 |
| 0.9705 | 8460 | - | 0.4237 |
| 0.9717 | 8470 | - | 0.4236 |
| 0.9728 | 8480 | - | 0.4232 |
| 0.9740 | 8490 | - | 0.4229 |
| 0.9751 | 8500 | 0.7416 | 0.4227 |
| 0.9763 | 8510 | - | 0.4226 |
| 0.9774 | 8520 | - | 0.4220 |
| 0.9785 | 8530 | - | 0.4218 |
| 0.9797 | 8540 | - | 0.4217 |
| 0.9808 | 8550 | - | 0.4217 |
| 0.9820 | 8560 | - | 0.4215 |
| 0.9831 | 8570 | - | 0.4216 |
| 0.9843 | 8580 | - | 0.4217 |
| 0.9854 | 8590 | - | 0.4216 |
| 0.9866 | 8600 | 0.748 | 0.4217 |
| 0.9877 | 8610 | - | 0.4215 |
| 0.9889 | 8620 | - | 0.4216 |
| 0.9900 | 8630 | - | 0.4218 |
| 0.9912 | 8640 | - | 0.4218 |
| 0.9923 | 8650 | - | 0.4219 |
| 0.9935 | 8660 | - | 0.4217 |
| 0.9946 | 8670 | - | 0.4217 |
| 0.9958 | 8680 | - | 0.4214 |
| 0.9969 | 8690 | - | 0.4210 |
| 0.9980 | 8700 | 0.7553 | 0.4205 |
| 0.9992 | 8710 | - | 0.4200 |
| 1.0003 | 8720 | - | 0.4199 |
| 1.0015 | 8730 | - | 0.4199 |
| 1.0026 | 8740 | - | 0.4199 |
| 1.0038 | 8750 | - | 0.4198 |
| 1.0049 | 8760 | - | 0.4200 |
| 1.0061 | 8770 | - | 0.4198 |
| 1.0072 | 8780 | - | 0.4195 |
| 1.0084 | 8790 | - | 0.4194 |
| 1.0095 | 8800 | 0.7202 | 0.4191 |
| 1.0107 | 8810 | - | 0.4190 |
| 1.0118 | 8820 | - | 0.4188 |
| 1.0130 | 8830 | - | 0.4188 |
| 1.0141 | 8840 | - | 0.4192 |
| 1.0153 | 8850 | - | 0.4190 |
| 1.0164 | 8860 | - | 0.4191 |
| 1.0176 | 8870 | - | 0.4190 |
| 1.0187 | 8880 | - | 0.4192 |
| 1.0198 | 8890 | - | 0.4190 |
| 1.0210 | 8900 | 0.7567 | 0.4189 |
| 1.0221 | 8910 | - | 0.4188 |
| 1.0233 | 8920 | - | 0.4189 |
| 1.0244 | 8930 | - | 0.4188 |
| 1.0256 | 8940 | - | 0.4187 |
| 1.0267 | 8950 | - | 0.4183 |
| 1.0279 | 8960 | - | 0.4182 |
| 1.0290 | 8970 | - | 0.4182 |
| 1.0302 | 8980 | - | 0.4184 |
| 1.0313 | 8990 | - | 0.4181 |
| 1.0325 | 9000 | 0.7345 | 0.4177 |
| 1.0336 | 9010 | - | 0.4173 |
| 1.0348 | 9020 | - | 0.4171 |
| 1.0359 | 9030 | - | 0.4172 |
| 1.0371 | 9040 | - | 0.4171 |
| 1.0382 | 9050 | - | 0.4172 |
| 1.0393 | 9060 | - | 0.4172 |
| 1.0405 | 9070 | - | 0.4170 |
| 1.0416 | 9080 | - | 0.4165 |
| 1.0428 | 9090 | - | 0.4162 |
| 1.0439 | 9100 | 0.7344 | 0.4162 |
| 1.0451 | 9110 | - | 0.4160 |
| 1.0462 | 9120 | - | 0.4158 |
| 1.0474 | 9130 | - | 0.4157 |
| 1.0485 | 9140 | - | 0.4157 |
| 1.0497 | 9150 | - | 0.4156 |
| 1.0508 | 9160 | - | 0.4153 |
| 1.0520 | 9170 | - | 0.4153 |
| 1.0531 | 9180 | - | 0.4154 |
| 1.0543 | 9190 | - | 0.4154 |
| 1.0554 | 9200 | 0.7233 | 0.4157 |
| 1.0566 | 9210 | - | 0.4157 |
| 1.0577 | 9220 | - | 0.4156 |
| 1.0589 | 9230 | - | 0.4155 |
| 1.0600 | 9240 | - | 0.4153 |
| 1.0611 | 9250 | - | 0.4154 |
| 1.0623 | 9260 | - | 0.4155 |
| 1.0634 | 9270 | - | 0.4154 |
| 1.0646 | 9280 | - | 0.4151 |
| 1.0657 | 9290 | - | 0.4149 |
| 1.0669 | 9300 | 0.7442 | 0.4148 |
| 1.0680 | 9310 | - | 0.4144 |
| 1.0692 | 9320 | - | 0.4143 |
| 1.0703 | 9330 | - | 0.4141 |
| 1.0715 | 9340 | - | 0.4140 |
| 1.0726 | 9350 | - | 0.4138 |
| 1.0738 | 9360 | - | 0.4136 |
| 1.0749 | 9370 | - | 0.4133 |
| 1.0761 | 9380 | - | 0.4132 |
| 1.0772 | 9390 | - | 0.4130 |
| 1.0784 | 9400 | 0.722 | 0.4129 |
| 1.0795 | 9410 | - | 0.4131 |
| 1.0806 | 9420 | - | 0.4132 |
| 1.0818 | 9430 | - | 0.4133 |
| 1.0829 | 9440 | - | 0.4134 |
| 1.0841 | 9450 | - | 0.4134 |
| 1.0852 | 9460 | - | 0.4133 |
| 1.0864 | 9470 | - | 0.4132 |
| 1.0875 | 9480 | - | 0.4132 |
| 1.0887 | 9490 | - | 0.4134 |
| 1.0898 | 9500 | 0.7433 | 0.4133 |
| 1.0910 | 9510 | - | 0.4133 |
| 1.0921 | 9520 | - | 0.4133 |
| 1.0933 | 9530 | - | 0.4132 |
| 1.0944 | 9540 | - | 0.4131 |
| 1.0956 | 9550 | - | 0.4130 |
| 1.0967 | 9560 | - | 0.4130 |
| 1.0979 | 9570 | - | 0.4126 |
| 1.0990 | 9580 | - | 0.4125 |
| 1.1001 | 9590 | - | 0.4121 |
| 1.1013 | 9600 | 0.746 | 0.4119 |
| 1.1024 | 9610 | - | 0.4117 |
| 1.1036 | 9620 | - | 0.4112 |
| 1.1047 | 9630 | - | 0.4109 |
| 1.1059 | 9640 | - | 0.4106 |
| 1.1070 | 9650 | - | 0.4101 |
| 1.1082 | 9660 | - | 0.4101 |
| 1.1093 | 9670 | - | 0.4102 |
| 1.1105 | 9680 | - | 0.4102 |
| 1.1116 | 9690 | - | 0.4101 |
| 1.1128 | 9700 | 0.7447 | 0.4099 |
| 1.1139 | 9710 | - | 0.4100 |
| 1.1151 | 9720 | - | 0.4098 |
| 1.1162 | 9730 | - | 0.4097 |
| 1.1174 | 9740 | - | 0.4094 |
| 1.1185 | 9750 | - | 0.4097 |
| 1.1197 | 9760 | - | 0.4096 |
| 1.1208 | 9770 | - | 0.4096 |
| 1.1219 | 9780 | - | 0.4097 |
| 1.1231 | 9790 | - | 0.4097 |
| 1.1242 | 9800 | 0.7234 | 0.4094 |
| 1.1254 | 9810 | - | 0.4090 |
| 1.1265 | 9820 | - | 0.4090 |
| 1.1277 | 9830 | - | 0.4091 |
| 1.1288 | 9840 | - | 0.4091 |
| 1.1300 | 9850 | - | 0.4090 |
| 1.1311 | 9860 | - | 0.4088 |
| 1.1323 | 9870 | - | 0.4088 |
| 1.1334 | 9880 | - | 0.4085 |
| 1.1346 | 9890 | - | 0.4085 |
| 1.1357 | 9900 | 0.7054 | 0.4084 |
| 1.1369 | 9910 | - | 0.4087 |
| 1.1380 | 9920 | - | 0.4089 |
| 1.1392 | 9930 | - | 0.4089 |
| 1.1403 | 9940 | - | 0.4088 |
| 1.1414 | 9950 | - | 0.4091 |
| 1.1426 | 9960 | - | 0.4088 |
| 1.1437 | 9970 | - | 0.4086 |
| 1.1449 | 9980 | - | 0.4084 |
| 1.1460 | 9990 | - | 0.4089 |
| 1.1472 | 10000 | 0.7071 | 0.4088 |
| 1.1483 | 10010 | - | 0.4086 |
| 1.1495 | 10020 | - | 0.4081 |
| 1.1506 | 10030 | - | 0.4079 |
| 1.1518 | 10040 | - | 0.4079 |
| 1.1529 | 10050 | - | 0.4081 |
| 1.1541 | 10060 | - | 0.4081 |
| 1.1552 | 10070 | - | 0.4080 |
| 1.1564 | 10080 | - | 0.4079 |
| 1.1575 | 10090 | - | 0.4078 |
| 1.1587 | 10100 | 0.7289 | 0.4075 |
| 1.1598 | 10110 | - | 0.4072 |
| 1.1609 | 10120 | - | 0.4070 |
| 1.1621 | 10130 | - | 0.4070 |
| 1.1632 | 10140 | - | 0.4074 |
| 1.1644 | 10150 | - | 0.4074 |
| 1.1655 | 10160 | - | 0.4073 |
| 1.1667 | 10170 | - | 0.4073 |
| 1.1678 | 10180 | - | 0.4072 |
| 1.1690 | 10190 | - | 0.4073 |
| 1.1701 | 10200 | 0.758 | 0.4071 |
| 1.1713 | 10210 | - | 0.4071 |
| 1.1724 | 10220 | - | 0.4071 |
| 1.1736 | 10230 | - | 0.4068 |
| 1.1747 | 10240 | - | 0.4063 |
| 1.1759 | 10250 | - | 0.4062 |
| 1.1770 | 10260 | - | 0.4064 |
| 1.1782 | 10270 | - | 0.4065 |
| 1.1793 | 10280 | - | 0.4063 |
| 1.1805 | 10290 | - | 0.4065 |
| 1.1816 | 10300 | 0.7322 | 0.4066 |
| 1.1827 | 10310 | - | 0.4065 |
| 1.1839 | 10320 | - | 0.4065 |
| 1.1850 | 10330 | - | 0.4061 |
| 1.1862 | 10340 | - | 0.4060 |
| 1.1873 | 10350 | - | 0.4057 |
| 1.1885 | 10360 | - | 0.4056 |
| 1.1896 | 10370 | - | 0.4056 |
| 1.1908 | 10380 | - | 0.4059 |
| 1.1919 | 10390 | - | 0.4061 |
| 1.1931 | 10400 | 0.6948 | 0.4059 |
| 1.1942 | 10410 | - | 0.4059 |
| 1.1954 | 10420 | - | 0.4060 |
| 1.1965 | 10430 | - | 0.4058 |
| 1.1977 | 10440 | - | 0.4057 |
| 1.1988 | 10450 | - | 0.4056 |
| 1.2000 | 10460 | - | 0.4056 |
| 1.2011 | 10470 | - | 0.4056 |
| 1.2022 | 10480 | - | 0.4057 |
| 1.2034 | 10490 | - | 0.4056 |
| 1.2045 | 10500 | 0.7185 | 0.4055 |
| 1.2057 | 10510 | - | 0.4056 |
| 1.2068 | 10520 | - | 0.4054 |
| 1.2080 | 10530 | - | 0.4053 |
| 1.2091 | 10540 | - | 0.4051 |
| 1.2103 | 10550 | - | 0.4050 |
| 1.2114 | 10560 | - | 0.4051 |
| 1.2126 | 10570 | - | 0.4052 |
| 1.2137 | 10580 | - | 0.4053 |
| 1.2149 | 10590 | - | 0.4053 |
| 1.2160 | 10600 | 0.7039 | 0.4053 |
| 1.2172 | 10610 | - | 0.4054 |
| 1.2183 | 10620 | - | 0.4051 |
| 1.2195 | 10630 | - | 0.4050 |
| 1.2206 | 10640 | - | 0.4048 |
| 1.2218 | 10650 | - | 0.4044 |
| 1.2229 | 10660 | - | 0.4046 |
| 1.2240 | 10670 | - | 0.4044 |
| 1.2252 | 10680 | - | 0.4041 |
| 1.2263 | 10690 | - | 0.4039 |
| 1.2275 | 10700 | 0.6969 | 0.4037 |
| 1.2286 | 10710 | - | 0.4037 |
| 1.2298 | 10720 | - | 0.4035 |
| 1.2309 | 10730 | - | 0.4036 |
| 1.2321 | 10740 | - | 0.4035 |
| 1.2332 | 10750 | - | 0.4038 |
| 1.2344 | 10760 | - | 0.4038 |
| 1.2355 | 10770 | - | 0.4037 |
| 1.2367 | 10780 | - | 0.4037 |
| 1.2378 | 10790 | - | 0.4037 |
| 1.2390 | 10800 | 0.6921 | 0.4038 |
| 1.2401 | 10810 | - | 0.4039 |
| 1.2413 | 10820 | - | 0.4038 |
| 1.2424 | 10830 | - | 0.4037 |
| 1.2435 | 10840 | - | 0.4040 |
| 1.2447 | 10850 | - | 0.4042 |
| 1.2458 | 10860 | - | 0.4044 |
| 1.2470 | 10870 | - | 0.4043 |
| 1.2481 | 10880 | - | 0.4043 |
| 1.2493 | 10890 | - | 0.4044 |
| 1.2504 | 10900 | 0.728 | 0.4042 |
| 1.2516 | 10910 | - | 0.4044 |
| 1.2527 | 10920 | - | 0.4043 |
| 1.2539 | 10930 | - | 0.4039 |
| 1.2550 | 10940 | - | 0.4038 |
| 1.2562 | 10950 | - | 0.4037 |
| 1.2573 | 10960 | - | 0.4035 |
| 1.2585 | 10970 | - | 0.4032 |
| 1.2596 | 10980 | - | 0.4024 |
| 1.2608 | 10990 | - | 0.4019 |
| 1.2619 | 11000 | 0.713 | 0.4018 |
| 1.2630 | 11010 | - | 0.4015 |
| 1.2642 | 11020 | - | 0.4015 |
| 1.2653 | 11030 | - | 0.4014 |
| 1.2665 | 11040 | - | 0.4015 |
| 1.2676 | 11050 | - | 0.4014 |
| 1.2688 | 11060 | - | 0.4013 |
| 1.2699 | 11070 | - | 0.4015 |
| 1.2711 | 11080 | - | 0.4016 |
| 1.2722 | 11090 | - | 0.4017 |
| 1.2734 | 11100 | 0.668 | 0.4017 |
| 1.2745 | 11110 | - | 0.4016 |
| 1.2757 | 11120 | - | 0.4016 |
| 1.2768 | 11130 | - | 0.4019 |
| 1.2780 | 11140 | - | 0.4021 |
| 1.2791 | 11150 | - | 0.4019 |
| 1.2803 | 11160 | - | 0.4017 |
| 1.2814 | 11170 | - | 0.4017 |
| 1.2826 | 11180 | - | 0.4018 |
| 1.2837 | 11190 | - | 0.4013 |
| 1.2848 | 11200 | 0.7101 | 0.4011 |
| 1.2860 | 11210 | - | 0.4011 |
| 1.2871 | 11220 | - | 0.4014 |
| 1.2883 | 11230 | - | 0.4015 |
| 1.2894 | 11240 | - | 0.4010 |
| 1.2906 | 11250 | - | 0.4012 |
| 1.2917 | 11260 | - | 0.4013 |
| 1.2929 | 11270 | - | 0.4010 |
| 1.2940 | 11280 | - | 0.4006 |
| 1.2952 | 11290 | - | 0.4005 |
| 1.2963 | 11300 | 0.6963 | 0.4004 |
| 1.2975 | 11310 | - | 0.4003 |
| 1.2986 | 11320 | - | 0.4004 |
| 1.2998 | 11330 | - | 0.4003 |
| 1.3009 | 11340 | - | 0.3999 |
| 1.3021 | 11350 | - | 0.3997 |
| 1.3032 | 11360 | - | 0.3996 |
| 1.3043 | 11370 | - | 0.3997 |
| 1.3055 | 11380 | - | 0.3996 |
| 1.3066 | 11390 | - | 0.3994 |
| 1.3078 | 11400 | 0.6706 | 0.3993 |
| 1.3089 | 11410 | - | 0.3991 |
| 1.3101 | 11420 | - | 0.3990 |
| 1.3112 | 11430 | - | 0.3990 |
| 1.3124 | 11440 | - | 0.3987 |
| 1.3135 | 11450 | - | 0.3981 |
| 1.3147 | 11460 | - | 0.3978 |
| 1.3158 | 11470 | - | 0.3975 |
| 1.3170 | 11480 | - | 0.3974 |
| 1.3181 | 11490 | - | 0.3974 |
| 1.3193 | 11500 | 0.6962 | 0.3974 |
| 1.3204 | 11510 | - | 0.3975 |
| 1.3216 | 11520 | - | 0.3975 |
| 1.3227 | 11530 | - | 0.3976 |
| 1.3238 | 11540 | - | 0.3977 |
| 1.3250 | 11550 | - | 0.3975 |
| 1.3261 | 11560 | - | 0.3974 |
| 1.3273 | 11570 | - | 0.3973 |
| 1.3284 | 11580 | - | 0.3971 |
| 1.3296 | 11590 | - | 0.3969 |
| 1.3307 | 11600 | 0.7083 | 0.3970 |
| 1.3319 | 11610 | - | 0.3970 |
| 1.3330 | 11620 | - | 0.3971 |
| 1.3342 | 11630 | - | 0.3973 |
| 1.3353 | 11640 | - | 0.3975 |
| 1.3365 | 11650 | - | 0.3973 |
| 1.3376 | 11660 | - | 0.3973 |
| 1.3388 | 11670 | - | 0.3973 |
| 1.3399 | 11680 | - | 0.3976 |
| 1.3411 | 11690 | - | 0.3976 |
| 1.3422 | 11700 | 0.6757 | 0.3976 |
| 1.3434 | 11710 | - | 0.3975 |
| 1.3445 | 11720 | - | 0.3973 |
| 1.3456 | 11730 | - | 0.3971 |
| 1.3468 | 11740 | - | 0.3963 |
| 1.3479 | 11750 | - | 0.3964 |
| 1.3491 | 11760 | - | 0.3965 |
| 1.3502 | 11770 | - | 0.3967 |
| 1.3514 | 11780 | - | 0.3966 |
| 1.3525 | 11790 | - | 0.3964 |
| 1.3537 | 11800 | 0.7091 | 0.3965 |
| 1.3548 | 11810 | - | 0.3964 |
| 1.3560 | 11820 | - | 0.3964 |
| 1.3571 | 11830 | - | 0.3963 |
| 1.3583 | 11840 | - | 0.3962 |
| 1.3594 | 11850 | - | 0.3961 |
| 1.3606 | 11860 | - | 0.3956 |
| 1.3617 | 11870 | - | 0.3956 |
| 1.3629 | 11880 | - | 0.3961 |
| 1.3640 | 11890 | - | 0.3963 |
| 1.3651 | 11900 | 0.6977 | 0.3962 |
| 1.3663 | 11910 | - | 0.3958 |
| 1.3674 | 11920 | - | 0.3960 |
| 1.3686 | 11930 | - | 0.3963 |
| 1.3697 | 11940 | - | 0.3964 |
| 1.3709 | 11950 | - | 0.3961 |
| 1.3720 | 11960 | - | 0.3960 |
| 1.3732 | 11970 | - | 0.3958 |
| 1.3743 | 11980 | - | 0.3954 |
| 1.3755 | 11990 | - | 0.3948 |
| 1.3766 | 12000 | 0.7003 | 0.3944 |
</details>
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.0+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
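For reproducibility, one possible `requirements.txt` pinning these versions (the PyTorch pin omits the `+cu121` CUDA build tag, which depends on the local install channel):

```
sentence-transformers==3.4.1
transformers==4.49.0
torch==2.2.0
accelerate==1.4.0
datasets==3.3.2
tokenizers==0.21.0
```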
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"base_model": "google-t5/t5-base", "datasets": ["sentence-transformers/all-nli"], "language": ["en"], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:557850", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sentence": "A man is jumping unto his filthy bed.", "sentences": ["A young male is looking at a newspaper while 2 females walks past him.", "The bed is dirty.", "The man is on the moon."]}, {"source_sentence": "A carefully balanced male stands on one foot near a clean ocean beach area.", "sentences": ["A man is ouside near the beach.", "Three policemen patrol the streets on bikes", "A man is sitting on his couch."]}, {"source_sentence": "The man is wearing a blue shirt.", "sentences": ["Near the trashcan the man stood and smoked", "A man in a blue shirt leans on a wall beside a road with a blue van and red car with water in the background.", "A man in a black shirt is playing a guitar."]}, {"source_sentence": "The girls are outdoors.", "sentences": ["Two girls riding on an amusement part ride.", "a guy laughs while doing laundry", "Three girls are standing together in a room, one is listening, one is writing on a wall and the third is talking to them."]}, {"source_sentence": "A construction worker peeking out of a manhole while his coworker sits on the sidewalk smiling.", "sentences": ["A worker is looking out of a manhole.", "A man is giving a presentation.", "The workers are both inside the manhole."]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,129 |
Helsinki-NLP/opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"acf",
"af",
"an",
"ang",
"anp",
"as",
"ast",
"awa",
"bal",
"bar",
"be",
"bg",
"bho",
"bi",
"bn",
"bpy",
"br",
"bs",
"bzj",
"ca",
"cbk",
"co",
"crs",
"cs",
"csb",
"cu",
"cy",
"da",
"de",
"diq",
"djk",
"drt",
"dsb",
"dv",
"egl",
"el",
"en",
"enm",
"es",
"ext",
"fa",
"fo",
"fr",
"frm",
"fro",
"frp",
"frr",
"fur",
"fy",
"ga",
"gbm",
"gcf",
"gd",
"gl",
"glk",
"gos",
"got",
"grc",
"gsw",
"gu",
"gv",
"hi",
"hif",
"hne",
"hns",
"hr",
"hrx",
"hsb",
"ht",
"hwc",
"hy",
"hyw",
"icr",
"is",
"it",
"jam",
"jdt",
"kea",
"kok",
"kri",
"ks",
"ksh",
"ku",
"kw",
"la",
"lad",
"lah",
"lb",
"li",
"lij",
"lld",
"lmo",
"lou",
"lrc",
"lt",
"lv",
"mag",
"mai",
"mfe",
"mk",
"mo",
"mr",
"mwl",
"mzn",
"nap",
"nb",
"nds",
"ne",
"nl",
"nn",
"no",
"non",
"oc",
"ofs",
"or",
"orv",
"os",
"osp",
"pa",
"pal",
"pap",
"pcd",
"pcm",
"pdc",
"pfl",
"pi",
"pih",
"pis",
"pl",
"pms",
"pnt",
"prg",
"ps",
"pt",
"rhg",
"rm",
"rmy",
"ro",
"rom",
"rop",
"ru",
"rue",
"rup",
"sa",
"sc",
"scn",
"sco",
"sd",
"sgs",
"sh",
"si",
"sk",
"skr",
"sl",
"sq",
"sr",
"srm",
"srn",
"stq",
"sv",
"swg",
"syl",
"szl",
"tcs",
"tg",
"tly",
"tpi",
"uk",
"ur",
"vec",
"vls",
"wa",
"wae",
"xcl",
"yi",
"zea",
"zza",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-10-08T08:49:55Z |
2024-10-08T08:50:07+00:00
| 13 | 0 |
---
language:
- acf
- af
- an
- ang
- anp
- as
- ast
- awa
- bal
- bar
- be
- bg
- bho
- bi
- bn
- bpy
- br
- bs
- bzj
- ca
- cbk
- co
- crs
- cs
- csb
- cu
- cy
- da
- de
- diq
- djk
- drt
- dsb
- dv
- egl
- el
- en
- enm
- es
- ext
- fa
- fo
- fr
- frm
- fro
- frp
- frr
- fur
- fy
- ga
- gbm
- gcf
- gd
- gl
- glk
- gos
- got
- grc
- gsw
- gu
- gv
- hi
- hif
- hne
- hns
- hr
- hrx
- hsb
- ht
- hwc
- hy
- hyw
- icr
- is
- it
- jam
- jdt
- kea
- kok
- kri
- ks
- ksh
- ku
- kw
- la
- lad
- lah
- lb
- li
- lij
- lld
- lmo
- lou
- lrc
- lt
- lv
- mag
- mai
- mfe
- mk
- mo
- mr
- mwl
- mzn
- nap
- nb
- nds
- ne
- nl
- nn
- "no"
- non
- oc
- ofs
- or
- orv
- os
- osp
- pa
- pal
- pap
- pcd
- pcm
- pdc
- pfl
- pi
- pih
- pis
- pl
- pms
- pnt
- prg
- ps
- pt
- rhg
- rm
- rmy
- ro
- rom
- rop
- ru
- rue
- rup
- sa
- sc
- scn
- sco
- sd
- sgs
- sh
- si
- sk
- skr
- sl
- sq
- sr
- srm
- srn
- stq
- sv
- swg
- syl
- szl
- tcs
- tg
- tly
- tpi
- uk
- ur
- vec
- vls
- wa
- wae
- xcl
- yi
- zea
- zza
library_name: transformers
license: apache-2.0
tags:
- translation
- opus-mt-tc-bible
language_bcp47:
- bs_Latn
- ku_Latn
- sr_Cyrl
model-index:
- name: opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa
results:
- task:
type: translation
name: Translation afr-deu
dataset:
name: flores200-devtest
type: flores200-devtest
args: afr-deu
metrics:
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.57712
name: chr-F
- type: bleu
value: 53.4
name: BLEU
- type: chrf
value: 0.7369
name: chr-F
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.61332
name: chr-F
- type: bleu
value: 35.1
name: BLEU
- type: chrf
value: 0.60899
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.50836
name: chr-F
- type: bleu
value: 13.4
name: BLEU
- type: chrf
value: 0.42432
name: chr-F
- type: bleu
value: 10.1
name: BLEU
- type: chrf
value: 0.35035
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.52402
name: chr-F
- type: bleu
value: 35.1
name: BLEU
- type: chrf
value: 0.6064
name: chr-F
- type: bleu
value: 31.5
name: BLEU
- type: chrf
value: 0.5706
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.56982
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.49452
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.47101
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.55042
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.5023
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.49701
name: chr-F
- type: bleu
value: 15.7
name: BLEU
- type: chrf
value: 0.43913
name: chr-F
- type: bleu
value: 12.7
name: BLEU
- type: chrf
value: 0.46906
name: chr-F
- type: bleu
value: 16.5
name: BLEU
- type: chrf
value: 0.49995
name: chr-F
- type: bleu
value: 17.1
name: BLEU
- type: chrf
value: 0.49987
name: chr-F
- type: bleu
value: 15.7
name: BLEU
- type: chrf
value: 0.48319
name: chr-F
- type: bleu
value: 14.4
name: BLEU
- type: chrf
value: 0.45393
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.46413
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.54681
name: chr-F
- type: bleu
value: 21.9
name: BLEU
- type: chrf
value: 0.49843
name: chr-F
- type: bleu
value: 21.0
name: BLEU
- type: chrf
value: 0.49129
name: chr-F
- type: bleu
value: 14.9
name: BLEU
- type: chrf
value: 0.4331
name: chr-F
- type: bleu
value: 12.4
name: BLEU
- type: chrf
value: 0.41875
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.48319
name: chr-F
- type: bleu
value: 16.1
name: BLEU
- type: chrf
value: 0.44504
name: chr-F
- type: bleu
value: 15.5
name: BLEU
- type: chrf
value: 0.43627
name: chr-F
- type: bleu
value: 12.6
name: BLEU
- type: chrf
value: 0.40189
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.56591
name: chr-F
- type: bleu
value: 37.8
name: BLEU
- type: chrf
value: 0.64922
name: chr-F
- type: bleu
value: 33.3
name: BLEU
- type: chrf
value: 0.60386
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.5907
name: chr-F
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.50968
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.5703
name: chr-F
- type: bleu
value: 43.0
name: BLEU
- type: chrf
value: 0.67842
name: chr-F
- type: bleu
value: 38.1
name: BLEU
- type: chrf
value: 0.63034
name: chr-F
- type: bleu
value: 37.3
name: BLEU
- type: chrf
value: 0.62567
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.5326
name: chr-F
- type: bleu
value: 27.1
name: BLEU
- type: chrf
value: 0.56613
name: chr-F
- type: bleu
value: 36.5
name: BLEU
- type: chrf
value: 0.63574
name: chr-F
- type: bleu
value: 32.8
name: BLEU
- type: chrf
value: 0.59573
name: chr-F
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.58096
name: chr-F
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.50295
name: chr-F
- type: bleu
value: 11.1
name: BLEU
- type: chrf
value: 0.3865
name: chr-F
- type: bleu
value: 16.7
name: BLEU
- type: chrf
value: 0.43075
name: chr-F
- type: bleu
value: 15.7
name: BLEU
- type: chrf
value: 0.41038
name: chr-F
- type: bleu
value: 14.3
name: BLEU
- type: chrf
value: 0.39883
name: chr-F
- type: bleu
value: 11.2
name: BLEU
- type: chrf
value: 0.36422
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.51003
name: chr-F
- type: bleu
value: 45.7
name: BLEU
- type: chrf
value: 0.67808
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.55779
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.5393
name: chr-F
- type: bleu
value: 19.6
name: BLEU
- type: chrf
value: 0.47129
name: chr-F
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.59897
name: chr-F
- type: bleu
value: 46.2
name: BLEU
- type: chrf
value: 0.70142
name: chr-F
- type: bleu
value: 37.1
name: BLEU
- type: chrf
value: 0.62669
name: chr-F
- type: bleu
value: 35.3
name: BLEU
- type: chrf
value: 0.61338
name: chr-F
- type: bleu
value: 24.2
name: BLEU
- type: chrf
value: 0.5236
name: chr-F
- type: bleu
value: 40.3
name: BLEU
- type: chrf
value: 0.66096
name: chr-F
- type: bleu
value: 35.4
name: BLEU
- type: chrf
value: 0.61562
name: chr-F
- type: bleu
value: 33.3
name: BLEU
- type: chrf
value: 0.59775
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.51787
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.52003
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.59074
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.56636
name: chr-F
- type: bleu
value: 27.2
name: BLEU
- type: chrf
value: 0.54903
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.48701
name: chr-F
- type: bleu
value: 36.8
name: BLEU
- type: chrf
value: 0.63747
name: chr-F
- type: bleu
value: 47.2
name: BLEU
- type: chrf
value: 0.69505
name: chr-F
- type: bleu
value: 47.3
name: BLEU
- type: chrf
value: 0.69743
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.54954
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.42943
name: chr-F
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.46227
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.41404
name: chr-F
- type: bleu
value: 17.6
name: BLEU
- type: chrf
value: 0.4185
name: chr-F
- type: bleu
value: 13.2
name: BLEU
- type: chrf
value: 0.37492
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.57718
name: chr-F
- type: bleu
value: 41.4
name: BLEU
- type: chrf
value: 0.66534
name: chr-F
- type: bleu
value: 36.2
name: BLEU
- type: chrf
value: 0.61987
name: chr-F
- type: bleu
value: 24.1
name: BLEU
- type: chrf
value: 0.52646
name: chr-F
- type: bleu
value: 20.5
name: BLEU
- type: chrf
value: 0.50429
name: chr-F
- type: bleu
value: 32.0
name: BLEU
- type: chrf
value: 0.58954
name: chr-F
- type: bleu
value: 28.6
name: BLEU
- type: chrf
value: 0.55699
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.54977
name: chr-F
- type: bleu
value: 19.0
name: BLEU
- type: chrf
value: 0.4755
name: chr-F
- type: bleu
value: 11.3
name: BLEU
- type: chrf
value: 0.39116
name: chr-F
- type: bleu
value: 16.2
name: BLEU
- type: chrf
value: 0.43561
name: chr-F
- type: bleu
value: 15.3
name: BLEU
- type: chrf
value: 0.4177
name: chr-F
- type: bleu
value: 14.7
name: BLEU
- type: chrf
value: 0.40473
name: chr-F
- type: bleu
value: 12.0
name: BLEU
- type: chrf
value: 0.37498
name: chr-F
- type: bleu
value: 18.1
name: BLEU
- type: chrf
value: 0.48622
name: chr-F
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.58337
name: chr-F
- type: bleu
value: 24.6
name: BLEU
- type: chrf
value: 0.52798
name: chr-F
- type: bleu
value: 23.6
name: BLEU
- type: chrf
value: 0.51712
name: chr-F
- type: bleu
value: 18.1
name: BLEU
- type: chrf
value: 0.45954
name: chr-F
- type: bleu
value: 25.8
name: BLEU
- type: chrf
value: 0.56174
name: chr-F
- type: bleu
value: 38.4
name: BLEU
- type: chrf
value: 0.65391
name: chr-F
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.61762
name: chr-F
- type: bleu
value: 32.9
name: BLEU
- type: chrf
value: 0.6017
name: chr-F
- type: bleu
value: 24.3
name: BLEU
- type: chrf
value: 0.53214
name: chr-F
- type: bleu
value: 14.2
name: BLEU
- type: chrf
value: 0.43101
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.55857
name: chr-F
- type: bleu
value: 19.8
name: BLEU
- type: chrf
value: 0.47047
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.45641
name: chr-F
- type: bleu
value: 14.5
name: BLEU
- type: chrf
value: 0.42457
name: chr-F
- type: bleu
value: 19.2
name: BLEU
- type: chrf
value: 0.49247
name: chr-F
- type: bleu
value: 31.7
name: BLEU
- type: chrf
value: 0.58655
name: chr-F
- type: bleu
value: 34.2
name: BLEU
- type: chrf
value: 0.60736
name: chr-F
- type: bleu
value: 27.3
name: BLEU
- type: chrf
value: 0.54733
name: chr-F
- type: bleu
value: 17.9
name: BLEU
- type: chrf
value: 0.46963
name: chr-F
- type: bleu
value: 20.3
name: BLEU
- type: chrf
value: 0.50305
name: chr-F
- type: bleu
value: 34.0
name: BLEU
- type: chrf
value: 0.60811
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.53919
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.53151
name: chr-F
- type: bleu
value: 17.4
name: BLEU
- type: chrf
value: 0.46051
name: chr-F
- type: bleu
value: 18.4
name: BLEU
- type: chrf
value: 0.48386
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.59671
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.52013
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.51345
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.44481
name: chr-F
- type: bleu
value: 26.0
name: BLEU
- type: chrf
value: 0.55524
name: chr-F
- type: bleu
value: 34.9
name: BLEU
- type: chrf
value: 0.61977
name: chr-F
- type: bleu
value: 32.7
name: BLEU
- type: chrf
value: 0.59318
name: chr-F
- type: bleu
value: 30.2
name: BLEU
- type: chrf
value: 0.57603
name: chr-F
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.50242
name: chr-F
- type: bleu
value: 19.2
name: BLEU
- type: chrf
value: 0.48676
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.55729
name: chr-F
- type: bleu
value: 25.2
name: BLEU
- type: chrf
value: 0.52152
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.51026
name: chr-F
- type: bleu
value: 17.8
name: BLEU
- type: chrf
value: 0.45459
name: chr-F
- type: bleu
value: 20.5
name: BLEU
- type: chrf
value: 0.48677
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.54804
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.51362
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.50201
name: chr-F
- type: bleu
value: 17.5
name: BLEU
- type: chrf
value: 0.44801
name: chr-F
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.54589
name: chr-F
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.6066
name: chr-F
- type: bleu
value: 31.0
name: BLEU
- type: chrf
value: 0.59811
name: chr-F
- type: bleu
value: 28.4
name: BLEU
- type: chrf
value: 0.57808
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.52244
name: chr-F
- type: bleu
value: 19.2
name: BLEU
- type: chrf
value: 0.48107
name: chr-F
- type: bleu
value: 34.5
name: BLEU
- type: chrf
value: 0.5957
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.53683
name: chr-F
- type: bleu
value: 30.3
name: BLEU
- type: chrf
value: 0.57642
name: chr-F
- type: bleu
value: 18.6
name: BLEU
- type: chrf
value: 0.47048
name: chr-F
- type: bleu
value: 11.3
name: BLEU
- type: chrf
value: 0.36876
name: chr-F
- type: bleu
value: 10.5
name: BLEU
- type: chrf
value: 0.34323
name: chr-F
- type: bleu
value: 10.0
name: BLEU
- type: chrf
value: 0.33904
name: chr-F
- type: bleu
value: 19.4
name: BLEU
- type: chrf
value: 0.4927
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.58369
name: chr-F
- type: bleu
value: 28.6
name: BLEU
- type: chrf
value: 0.55002
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.54155
name: chr-F
- type: bleu
value: 18.7
name: BLEU
- type: chrf
value: 0.46656
name: chr-F
- type: bleu
value: 15.0
name: BLEU
- type: chrf
value: 0.44183
name: chr-F
- type: bleu
value: 20.3
name: BLEU
- type: chrf
value: 0.46674
name: chr-F
- type: bleu
value: 17.8
name: BLEU
- type: chrf
value: 0.43685
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.42699
name: chr-F
- type: bleu
value: 13.7
name: BLEU
- type: chrf
value: 0.39587
name: chr-F
- type: bleu
value: 21.9
name: BLEU
- type: chrf
value: 0.51669
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.57849
name: chr-F
- type: bleu
value: 29.0
name: BLEU
- type: chrf
value: 0.55896
name: chr-F
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.5396
name: chr-F
- type: bleu
value: 19.7
name: BLEU
- type: chrf
value: 0.4812
name: chr-F
- type: bleu
value: 14.2
name: BLEU
- type: chrf
value: 0.44732
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.5171
name: chr-F
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.49129
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.49153
name: chr-F
- type: bleu
value: 15.4
name: BLEU
- type: chrf
value: 0.43363
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.58897
name: chr-F
- type: bleu
value: 36.2
name: BLEU
- type: chrf
value: 0.6225
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.5746
name: chr-F
- type: bleu
value: 27.1
name: BLEU
- type: chrf
value: 0.53674
name: chr-F
- type: bleu
value: 18.8
name: BLEU
- type: chrf
value: 0.46048
name: chr-F
- type: bleu
value: 18.9
name: BLEU
- type: chrf
value: 0.49176
name: chr-F
- type: bleu
value: 32.2
name: BLEU
- type: chrf
value: 0.59691
name: chr-F
- type: bleu
value: 24.1
name: BLEU
- type: chrf
value: 0.52068
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.52006
name: chr-F
- type: bleu
value: 16.5
name: BLEU
- type: chrf
value: 0.44945
name: chr-F
- type: bleu
value: 16.5
name: BLEU
- type: chrf
value: 0.46893
name: chr-F
- type: bleu
value: 27.7
name: BLEU
- type: chrf
value: 0.56282
name: chr-F
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.50286
name: chr-F
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.49523
name: chr-F
- type: bleu
value: 15.9
name: BLEU
- type: chrf
value: 0.44271
name: chr-F
- type: bleu
value: 14.8
name: BLEU
- type: chrf
value: 0.44712
name: chr-F
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.54222
name: chr-F
- type: bleu
value: 19.6
name: BLEU
- type: chrf
value: 0.47383
name: chr-F
- type: bleu
value: 18.7
name: BLEU
- type: chrf
value: 0.46593
name: chr-F
- type: bleu
value: 14.0
name: BLEU
- type: chrf
value: 0.41912
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.56267
name: chr-F
- type: bleu
value: 38.8
name: BLEU
- type: chrf
value: 0.64902
name: chr-F
- type: bleu
value: 33.9
name: BLEU
- type: chrf
value: 0.60051
name: chr-F
- type: bleu
value: 32.9
name: BLEU
- type: chrf
value: 0.59197
name: chr-F
- type: bleu
value: 22.8
name: BLEU
- type: chrf
value: 0.50972
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.53072
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.58671
name: chr-F
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.55677
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.53989
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.48443
name: chr-F
- type: bleu
value: 27.3
name: BLEU
- type: chrf
value: 0.56707
name: chr-F
- type: bleu
value: 43.2
name: BLEU
- type: chrf
value: 0.67683
name: chr-F
- type: bleu
value: 34.3
name: BLEU
- type: chrf
value: 0.59829
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.58723
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.50217
name: chr-F
- type: bleu
value: 26.5
name: BLEU
- type: chrf
value: 0.56197
name: chr-F
- type: bleu
value: 41.7
name: BLEU
- type: chrf
value: 0.66428
name: chr-F
- type: bleu
value: 33.1
name: BLEU
- type: chrf
value: 0.59531
name: chr-F
- type: bleu
value: 31.7
name: BLEU
- type: chrf
value: 0.58521
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.50418
name: chr-F
- type: bleu
value: 14.6
name: BLEU
- type: chrf
value: 0.44364
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.54309
name: chr-F
- type: bleu
value: 19.7
name: BLEU
- type: chrf
value: 0.47458
name: chr-F
- type: bleu
value: 18.9
name: BLEU
- type: chrf
value: 0.46702
name: chr-F
- type: bleu
value: 13.9
name: BLEU
- type: chrf
value: 0.4172
name: chr-F
- type: bleu
value: 26.9
name: BLEU
- type: chrf
value: 0.56668
name: chr-F
- type: bleu
value: 46.8
name: BLEU
- type: chrf
value: 0.70282
name: chr-F
- type: bleu
value: 39.1
name: BLEU
- type: chrf
value: 0.64408
name: chr-F
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.62256
name: chr-F
- type: bleu
value: 22.3
name: BLEU
- type: chrf
value: 0.51705
name: chr-F
- type: bleu
value: 15.1
name: BLEU
- type: chrf
value: 0.44428
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.52652
name: chr-F
- type: bleu
value: 19.9
name: BLEU
- type: chrf
value: 0.47743
name: chr-F
- type: bleu
value: 18.8
name: BLEU
- type: chrf
value: 0.46585
name: chr-F
- type: bleu
value: 14.5
name: BLEU
- type: chrf
value: 0.41798
name: chr-F
- type: bleu
value: 23.5
name: BLEU
- type: chrf
value: 0.53397
name: chr-F
- type: bleu
value: 43.1
name: BLEU
- type: chrf
value: 0.67741
name: chr-F
- type: bleu
value: 31.1
name: BLEU
- type: chrf
value: 0.57787
name: chr-F
- type: bleu
value: 32.9
name: BLEU
- type: chrf
value: 0.59003
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.49768
name: chr-F
- type: bleu
value: 20.9
name: BLEU
- type: chrf
value: 0.50787
name: chr-F
- type: bleu
value: 31.1
name: BLEU
- type: chrf
value: 0.58693
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.5506
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.54139
name: chr-F
- type: bleu
value: 18.6
name: BLEU
- type: chrf
value: 0.4723
name: chr-F
- type: bleu
value: 20.8
name: BLEU
- type: chrf
value: 0.51514
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.56021
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.55176
name: chr-F
- type: bleu
value: 24.3
name: BLEU
- type: chrf
value: 0.52998
name: chr-F
- type: bleu
value: 19.4
name: BLEU
- type: chrf
value: 0.48344
name: chr-F
- type: bleu
value: 29.3
name: BLEU
- type: chrf
value: 0.58002
name: chr-F
- type: bleu
value: 46.0
name: BLEU
- type: chrf
value: 0.69694
name: chr-F
- type: bleu
value: 39.6
name: BLEU
- type: chrf
value: 0.64146
name: chr-F
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.53508
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.49849
name: chr-F
- type: bleu
value: 32.0
name: BLEU
- type: chrf
value: 0.5812
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.53939
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.53479
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.46241
name: chr-F
- type: bleu
value: 27.4
name: BLEU
- type: chrf
value: 0.57214
name: chr-F
- type: bleu
value: 40.4
name: BLEU
- type: chrf
value: 0.66701
name: chr-F
- type: bleu
value: 37.2
name: BLEU
- type: chrf
value: 0.63234
name: chr-F
- type: bleu
value: 35.4
name: BLEU
- type: chrf
value: 0.61838
name: chr-F
- type: bleu
value: 24.3
name: BLEU
- type: chrf
value: 0.52856
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.54446
name: chr-F
- type: bleu
value: 32.0
name: BLEU
- type: chrf
value: 0.60131
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.57986
name: chr-F
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.566
name: chr-F
- type: bleu
value: 21.2
name: BLEU
- type: chrf
value: 0.49871
name: chr-F
- type: bleu
value: 17.0
name: BLEU
- type: chrf
value: 0.46523
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.53341
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.51481
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.50343
name: chr-F
- type: bleu
value: 17.1
name: BLEU
- type: chrf
value: 0.44756
name: chr-F
- type: bleu
value: 10.2
name: BLEU
- type: chrf
value: 0.38685
name: chr-F
- type: bleu
value: 23.6
name: BLEU
- type: chrf
value: 0.53932
name: chr-F
- type: bleu
value: 35.4
name: BLEU
- type: chrf
value: 0.63137
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.56587
name: chr-F
- type: bleu
value: 27.3
name: BLEU
- type: chrf
value: 0.54523
name: chr-F
- type: bleu
value: 20.1
name: BLEU
- type: chrf
value: 0.48275
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.54583
name: chr-F
- type: bleu
value: 32.4
name: BLEU
- type: chrf
value: 0.59952
name: chr-F
- type: bleu
value: 30.3
name: BLEU
- type: chrf
value: 0.57418
name: chr-F
- type: bleu
value: 28.4
name: BLEU
- type: chrf
value: 0.55838
name: chr-F
- type: bleu
value: 20.7
name: BLEU
- type: chrf
value: 0.49438
name: chr-F
- type: bleu
value: 20.0
name: BLEU
- type: chrf
value: 0.52303
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.57648
name: chr-F
- type: bleu
value: 18.6
name: BLEU
- type: chrf
value: 0.47651
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.56624
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.52746
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.52301
name: chr-F
- type: bleu
value: 17.7
name: BLEU
- type: chrf
value: 0.45213
name: chr-F
- type: bleu
value: 27.7
name: BLEU
- type: chrf
value: 0.57563
name: chr-F
- type: bleu
value: 39.9
name: BLEU
- type: chrf
value: 0.66201
name: chr-F
- type: bleu
value: 35.0
name: BLEU
- type: chrf
value: 0.6157
name: chr-F
- type: bleu
value: 33.6
name: BLEU
- type: chrf
value: 0.60561
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.515
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.59607
name: chr-F
- type: bleu
value: 46.0
name: BLEU
- type: chrf
value: 0.69032
name: chr-F
- type: bleu
value: 37.8
name: BLEU
- type: chrf
value: 0.6261
name: chr-F
- type: bleu
value: 35.0
name: BLEU
- type: chrf
value: 0.60692
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.51448
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.51005
name: chr-F
- type: bleu
value: 30.6
name: BLEU
- type: chrf
value: 0.57536
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.54029
name: chr-F
- type: bleu
value: 26.5
name: BLEU
- type: chrf
value: 0.52911
name: chr-F
- type: bleu
value: 18.8
name: BLEU
- type: chrf
value: 0.4628
name: chr-F
- type: bleu
value: 15.8
name: BLEU
- type: chrf
value: 0.45372
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.51096
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.4862
name: chr-F
- type: bleu
value: 19.4
name: BLEU
- type: chrf
value: 0.4687
name: chr-F
- type: bleu
value: 15.1
name: BLEU
- type: chrf
value: 0.42689
name: chr-F
- type: bleu
value: 11.1
name: BLEU
- type: chrf
value: 0.41078
name: chr-F
- type: bleu
value: 20.1
name: BLEU
- type: chrf
value: 0.48619
name: chr-F
- type: bleu
value: 16.3
name: BLEU
- type: chrf
value: 0.4385
name: chr-F
- type: bleu
value: 15.8
name: BLEU
- type: chrf
value: 0.4304
name: chr-F
- type: bleu
value: 13.4
name: BLEU
- type: chrf
value: 0.39849
name: chr-F
- type: bleu
value: 25.1
name: BLEU
- type: chrf
value: 0.5529
name: chr-F
- type: bleu
value: 34.9
name: BLEU
- type: chrf
value: 0.6215
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.59093
name: chr-F
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.57706
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.50128
name: chr-F
- type: bleu
value: 15.6
name: BLEU
- type: chrf
value: 0.45107
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.5313
name: chr-F
- type: bleu
value: 20.7
name: BLEU
- type: chrf
value: 0.48377
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.4529
name: chr-F
- type: bleu
value: 13.8
name: BLEU
- type: chrf
value: 0.41342
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.48212
name: chr-F
- type: bleu
value: 29.3
name: BLEU
- type: chrf
value: 0.56243
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.5334
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.52845
name: chr-F
- type: bleu
value: 17.9
name: BLEU
- type: chrf
value: 0.46136
name: chr-F
- task:
type: translation
name: Translation afr-deu
dataset:
name: flores101-devtest
type: flores_101
args: afr deu devtest
metrics:
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.5709
name: chr-F
- type: bleu
value: 52.4
name: BLEU
- type: chrf
value: 0.73127
name: chr-F
- type: bleu
value: 34.8
name: BLEU
- type: chrf
value: 0.60726
name: chr-F
- type: bleu
value: 34.4
name: BLEU
- type: chrf
value: 0.60399
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.50655
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.56575
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.56438
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.49455
name: chr-F
- type: bleu
value: 11.8
name: BLEU
- type: chrf
value: 0.46177
name: chr-F
- type: bleu
value: 15.6
name: BLEU
- type: chrf
value: 0.49344
name: chr-F
- type: bleu
value: 16.5
name: BLEU
- type: chrf
value: 0.49372
name: chr-F
- type: bleu
value: 13.8
name: BLEU
- type: chrf
value: 0.44802
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.53648
name: chr-F
- type: bleu
value: 19.9
name: BLEU
- type: chrf
value: 0.48236
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.58471
name: chr-F
- type: bleu
value: 27.4
name: BLEU
- type: chrf
value: 0.56499
name: chr-F
- type: bleu
value: 42.3
name: BLEU
- type: chrf
value: 0.67443
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.5314
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.57503
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.4986
name: chr-F
- type: bleu
value: 10.2
name: BLEU
- type: chrf
value: 0.36979
name: chr-F
- type: bleu
value: 15.8
name: BLEU
- type: chrf
value: 0.4131
name: chr-F
- type: bleu
value: 28.6
name: BLEU
- type: chrf
value: 0.5461
name: chr-F
- type: bleu
value: 34.7
name: BLEU
- type: chrf
value: 0.60877
name: chr-F
- type: bleu
value: 39.8
name: BLEU
- type: chrf
value: 0.65706
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.54336
name: chr-F
- type: bleu
value: 41.0
name: BLEU
- type: chrf
value: 0.66301
name: chr-F
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.61592
name: chr-F
- type: bleu
value: 17.0
name: BLEU
- type: chrf
value: 0.47354
name: chr-F
- type: bleu
value: 21.7
name: BLEU
- type: chrf
value: 0.50115
name: chr-F
- type: bleu
value: 13.5
name: BLEU
- type: chrf
value: 0.42069
name: chr-F
- type: bleu
value: 19.6
name: BLEU
- type: chrf
value: 0.4948
name: chr-F
- type: bleu
value: 32.6
name: BLEU
- type: chrf
value: 0.59392
name: chr-F
- type: bleu
value: 29.5
name: BLEU
- type: chrf
value: 0.57004
name: chr-F
- type: bleu
value: 17.5
name: BLEU
- type: chrf
value: 0.47323
name: chr-F
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.5445
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.53875
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.54033
name: chr-F
- type: bleu
value: 30.6
name: BLEU
- type: chrf
value: 0.59488
name: chr-F
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.51946
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.46784
name: chr-F
- type: bleu
value: 24.6
name: BLEU
- type: chrf
value: 0.54017
name: chr-F
- type: bleu
value: 19.3
name: BLEU
- type: chrf
value: 0.48185
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.51261
name: chr-F
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.53223
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.58286
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.53241
name: chr-F
- type: bleu
value: 14.1
name: BLEU
- type: chrf
value: 0.44237
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.52755
name: chr-F
- type: bleu
value: 18.1
name: BLEU
- type: chrf
value: 0.45667
name: chr-F
- type: bleu
value: 32.8
name: BLEU
- type: chrf
value: 0.59219
name: chr-F
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.52899
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.5823
name: chr-F
- type: bleu
value: 21.2
name: BLEU
- type: chrf
value: 0.50054
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.53179
name: chr-F
- type: bleu
value: 13.6
name: BLEU
- type: chrf
value: 0.41165
name: chr-F
- type: bleu
value: 13.6
name: BLEU
- type: chrf
value: 0.42831
name: chr-F
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.51203
name: chr-F
- type: bleu
value: 19.2
name: BLEU
- type: chrf
value: 0.46357
name: chr-F
- type: bleu
value: 17.4
name: BLEU
- type: chrf
value: 0.44885
name: chr-F
- type: bleu
value: 20.1
name: BLEU
- type: chrf
value: 0.50973
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.55772
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.5459
name: chr-F
- type: bleu
value: 18.9
name: BLEU
- type: chrf
value: 0.47816
name: chr-F
- type: bleu
value: 45.5
name: BLEU
- type: chrf
value: 0.69438
name: chr-F
- type: bleu
value: 38.9
name: BLEU
- type: chrf
value: 0.63701
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.53216
name: chr-F
- type: bleu
value: 36.2
name: BLEU
- type: chrf
value: 0.62744
name: chr-F
- type: bleu
value: 23.1
name: BLEU
- type: chrf
value: 0.53823
name: chr-F
- type: bleu
value: 31.7
name: BLEU
- type: chrf
value: 0.59829
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.57384
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.56082
name: chr-F
- type: bleu
value: 34.4
name: BLEU
- type: chrf
value: 0.62376
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.54486
name: chr-F
- type: bleu
value: 20.0
name: BLEU
- type: chrf
value: 0.48253
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.5413
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.56838
name: chr-F
- type: bleu
value: 28.1
name: BLEU
- type: chrf
value: 0.55554
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.51807
name: chr-F
- type: bleu
value: 22.8
name: BLEU
- type: chrf
value: 0.51211
name: chr-F
- type: bleu
value: 19.6
name: BLEU
- type: chrf
value: 0.4729
name: chr-F
- type: bleu
value: 14.3
name: BLEU
- type: chrf
value: 0.41393
name: chr-F
- type: bleu
value: 34.3
name: BLEU
- type: chrf
value: 0.61588
name: chr-F
- type: bleu
value: 31.3
name: BLEU
- type: chrf
value: 0.58296
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.49535
name: chr-F
- type: bleu
value: 15.2
name: BLEU
- type: chrf
value: 0.44211
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: generaltest2022
type: generaltest2022
args: ces-eng
metrics:
- type: bleu
value: 40.2
name: BLEU
- type: chrf
value: 0.64599
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.54993
name: chr-F
- type: bleu
value: 35.6
name: BLEU
- type: chrf
value: 0.59361
name: chr-F
- type: bleu
value: 31.9
name: BLEU
- type: chrf
value: 0.59885
name: chr-F
- type: bleu
value: 40.1
name: BLEU
- type: chrf
value: 0.64266
name: chr-F
- type: bleu
value: 37.8
name: BLEU
- type: chrf
value: 0.63746
name: chr-F
- type: bleu
value: 35.9
name: BLEU
- type: chrf
value: 0.60704
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: ces-deu
metrics:
- type: bleu
value: 26.9
name: BLEU
- type: chrf
value: 0.5637
name: chr-F
- type: bleu
value: 32.7
name: BLEU
- type: chrf
value: 0.57217
name: chr-F
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.57498
name: chr-F
- type: bleu
value: 39.1
name: BLEU
- type: chrf
value: 0.60234
name: chr-F
- type: bleu
value: 36.7
name: BLEU
- type: chrf
value: 0.60951
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.62191
name: chr-F
- type: bleu
value: 47.9
name: BLEU
- type: chrf
value: 0.69376
name: chr-F
- type: bleu
value: 29.3
name: BLEU
- type: chrf
value: 0.59597
name: chr-F
- type: bleu
value: 45.4
name: BLEU
- type: chrf
value: 0.6481
name: chr-F
- task:
type: translation
name: Translation deu-eng
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: deu-eng
metrics:
- type: bleu
value: 38.9
name: BLEU
- type: chrf
value: 0.61895
name: chr-F
- type: bleu
value: 34.6
name: BLEU
- type: chrf
value: 0.6057
name: chr-F
- type: bleu
value: 32.1
name: BLEU
- type: chrf
value: 0.61458
name: chr-F
- type: bleu
value: 48.1
name: BLEU
- type: chrf
value: 0.6963
name: chr-F
- type: bleu
value: 27.7
name: BLEU
- type: chrf
value: 0.58207
name: chr-F
- type: bleu
value: 48.0
name: BLEU
- type: chrf
value: 0.67447
name: chr-F
- task:
type: translation
name: Translation deu-eng
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: deu-eng
metrics:
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.54299
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.57789
name: chr-F
- type: bleu
value: 27.3
name: BLEU
- type: chrf
value: 0.56164
name: chr-F
- type: bleu
value: 51.9
name: BLEU
- type: chrf
value: 0.71453
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.53897
name: chr-F
- type: bleu
value: 46.5
name: BLEU
- type: chrf
value: 0.65274
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: ces-deu
metrics:
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.51543
name: chr-F
- type: bleu
value: 33.1
name: BLEU
- type: chrf
value: 0.57995
name: chr-F
- type: bleu
value: 26.0
name: BLEU
- type: chrf
value: 0.53232
name: chr-F
- type: bleu
value: 35.3
name: BLEU
- type: chrf
value: 0.58274
name: chr-F
- type: bleu
value: 29.3
name: BLEU
- type: chrf
value: 0.55809
name: chr-F
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.58395
name: chr-F
- type: bleu
value: 39.3
name: BLEU
- type: chrf
value: 0.6377
name: chr-F
- type: bleu
value: 22.6
name: BLEU
- type: chrf
value: 0.53677
name: chr-F
- type: bleu
value: 41.0
name: BLEU
- type: chrf
value: 0.62909
name: chr-F
- task:
type: translation
name: Translation eng-fra
dataset:
name: newsdiscusstest2015
type: newsdiscusstest2015
args: eng-fra
metrics:
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.62144
name: chr-F
- type: bleu
value: 37.5
name: BLEU
- type: chrf
value: 0.60513
name: chr-F
- task:
type: translation
name: Translation deu-eng
dataset:
name: newstestALL2020
type: newstestALL2020
args: deu-eng
metrics:
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.56898
name: chr-F
- type: bleu
value: 30.2
name: BLEU
- type: chrf
value: 0.58436
name: chr-F
- type: bleu
value: 33.6
name: BLEU
- type: chrf
value: 0.62387
name: chr-F
- task:
type: translation
name: Translation afr-deu
dataset:
name: ntrex128
type: ntrex128
args: afr-deu
metrics:
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.54806
name: chr-F
- type: bleu
value: 50.6
name: BLEU
- type: chrf
value: 0.71452
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.55624
name: chr-F
- type: bleu
value: 26.9
name: BLEU
- type: chrf
value: 0.54364
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.57498
name: chr-F
- type: bleu
value: 17.8
name: BLEU
- type: chrf
value: 0.48215
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.55146
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.49288
name: chr-F
- type: bleu
value: 19.9
name: BLEU
- type: chrf
value: 0.48488
name: chr-F
- type: bleu
value: 23.7
name: BLEU
- type: chrf
value: 0.50933
name: chr-F
- type: bleu
value: 13.7
name: BLEU
- type: chrf
value: 0.43995
name: chr-F
- type: bleu
value: 24.9
name: BLEU
- type: chrf
value: 0.53312
name: chr-F
- type: bleu
value: 17.1
name: BLEU
- type: chrf
value: 0.45297
name: chr-F
- type: bleu
value: 15.5
name: BLEU
- type: chrf
value: 0.44323
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.46993
name: chr-F
- type: bleu
value: 20.9
name: BLEU
- type: chrf
value: 0.51786
name: chr-F
- type: bleu
value: 31.3
name: BLEU
- type: chrf
value: 0.5951
name: chr-F
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.53787
name: chr-F
- type: bleu
value: 24.2
name: BLEU
- type: chrf
value: 0.5265
name: chr-F
- type: bleu
value: 28.4
name: BLEU
- type: chrf
value: 0.5495
name: chr-F
- type: bleu
value: 22.5
name: BLEU
- type: chrf
value: 0.52907
name: chr-F
- type: bleu
value: 34.6
name: BLEU
- type: chrf
value: 0.62247
name: chr-F
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.55858
name: chr-F
- type: bleu
value: 28.3
name: BLEU
- type: chrf
value: 0.55916
name: chr-F
- type: bleu
value: 35.6
name: BLEU
- type: chrf
value: 0.61209
name: chr-F
- type: bleu
value: 22.5
name: BLEU
- type: chrf
value: 0.52704
name: chr-F
- type: bleu
value: 33.1
name: BLEU
- type: chrf
value: 0.60742
name: chr-F
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.54283
name: chr-F
- type: bleu
value: 24.1
name: BLEU
- type: chrf
value: 0.52392
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.55467
name: chr-F
- type: bleu
value: 19.1
name: BLEU
- type: chrf
value: 0.48064
name: chr-F
- type: bleu
value: 34.7
name: BLEU
- type: chrf
value: 0.60592
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.50667
name: chr-F
- type: bleu
value: 20.5
name: BLEU
- type: chrf
value: 0.48189
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.5216
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.53284
name: chr-F
- type: bleu
value: 37.5
name: BLEU
- type: chrf
value: 0.62092
name: chr-F
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.53068
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.52754
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.55304
name: chr-F
- type: bleu
value: 33.7
name: BLEU
- type: chrf
value: 0.61371
name: chr-F
- type: bleu
value: 27.4
name: BLEU
- type: chrf
value: 0.54844
name: chr-F
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.53694
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.56148
name: chr-F
- type: bleu
value: 21.1
name: BLEU
- type: chrf
value: 0.51567
name: chr-F
- type: bleu
value: 34.0
name: BLEU
- type: chrf
value: 0.60389
name: chr-F
- type: bleu
value: 25.1
name: BLEU
- type: chrf
value: 0.53343
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.5303
name: chr-F
- type: bleu
value: 29.7
name: BLEU
- type: chrf
value: 0.55542
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.57592
name: chr-F
- type: bleu
value: 33.9
name: BLEU
- type: chrf
value: 0.60159
name: chr-F
- type: bleu
value: 32.6
name: BLEU
- type: chrf
value: 0.5902
name: chr-F
- type: bleu
value: 38.6
name: BLEU
- type: chrf
value: 0.62826
name: chr-F
- type: bleu
value: 16.1
name: BLEU
- type: chrf
value: 0.42717
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.4821
name: chr-F
- type: bleu
value: 16.9
name: BLEU
- type: chrf
value: 0.4077
name: chr-F
- type: bleu
value: 16.2
name: BLEU
- type: chrf
value: 0.40603
name: chr-F
- type: bleu
value: 18.8
name: BLEU
- type: chrf
value: 0.4298
name: chr-F
- type: bleu
value: 15.7
name: BLEU
- type: chrf
value: 0.47062
name: chr-F
- type: bleu
value: 24.0
name: BLEU
- type: chrf
value: 0.53552
name: chr-F
- type: bleu
value: 20.1
name: BLEU
- type: chrf
value: 0.48958
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.47091
name: chr-F
- type: bleu
value: 22.5
name: BLEU
- type: chrf
value: 0.49946
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.52037
name: chr-F
- type: bleu
value: 32.7
name: BLEU
- type: chrf
value: 0.59918
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.53484
name: chr-F
- type: bleu
value: 30.3
name: BLEU
- type: chrf
value: 0.565
name: chr-F
- type: bleu
value: 16.0
name: BLEU
- type: chrf
value: 0.45357
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.5496
name: chr-F
- type: bleu
value: 18.7
name: BLEU
- type: chrf
value: 0.47041
name: chr-F
- type: bleu
value: 17.5
name: BLEU
- type: chrf
value: 0.45725
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.48897
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.5271
name: chr-F
- type: bleu
value: 37.0
name: BLEU
- type: chrf
value: 0.63076
name: chr-F
- type: bleu
value: 27.2
name: BLEU
- type: chrf
value: 0.55231
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.56272
name: chr-F
- type: bleu
value: 36.6
name: BLEU
- type: chrf
value: 0.61675
name: chr-F
- type: bleu
value: 11.9
name: BLEU
- type: chrf
value: 0.40361
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.52283
name: chr-F
- type: bleu
value: 14.7
name: BLEU
- type: chrf
value: 0.41597
name: chr-F
- type: bleu
value: 13.0
name: BLEU
- type: chrf
value: 0.40085
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.448
name: chr-F
- type: bleu
value: 14.4
name: BLEU
- type: chrf
value: 0.45618
name: chr-F
- type: bleu
value: 27.9
name: BLEU
- type: chrf
value: 0.57183
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.47504
name: chr-F
- type: bleu
value: 16.9
name: BLEU
- type: chrf
value: 0.45829
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.48784
name: chr-F
- type: bleu
value: 23.2
name: BLEU
- type: chrf
value: 0.53567
name: chr-F
- type: bleu
value: 34.8
name: BLEU
- type: chrf
value: 0.61932
name: chr-F
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.55306
name: chr-F
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.53968
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.56765
name: chr-F
- type: bleu
value: 14.0
name: BLEU
- type: chrf
value: 0.42987
name: chr-F
- type: bleu
value: 20.9
name: BLEU
- type: chrf
value: 0.49189
name: chr-F
- type: bleu
value: 17.2
name: BLEU
- type: chrf
value: 0.44434
name: chr-F
- type: bleu
value: 16.0
name: BLEU
- type: chrf
value: 0.43069
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.45889
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.48392
name: chr-F
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.5472
name: chr-F
- type: bleu
value: 22.5
name: BLEU
- type: chrf
value: 0.49971
name: chr-F
- type: bleu
value: 20.2
name: BLEU
- type: chrf
value: 0.47811
name: chr-F
- type: bleu
value: 25.1
name: BLEU
- type: chrf
value: 0.5106
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.53354
name: chr-F
- type: bleu
value: 37.1
name: BLEU
- type: chrf
value: 0.63069
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.56721
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.56298
name: chr-F
- type: bleu
value: 32.6
name: BLEU
- type: chrf
value: 0.58483
name: chr-F
- type: bleu
value: 11.6
name: BLEU
- type: chrf
value: 0.3662
name: chr-F
- type: bleu
value: 10.3
name: BLEU
- type: chrf
value: 0.33936
name: chr-F
- type: bleu
value: 10.8
name: BLEU
- type: chrf
value: 0.34636
name: chr-F
- type: bleu
value: 17.5
name: BLEU
- type: chrf
value: 0.48637
name: chr-F
- type: bleu
value: 25.5
name: BLEU
- type: chrf
value: 0.55909
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.49579
name: chr-F
- type: bleu
value: 18.9
name: BLEU
- type: chrf
value: 0.47936
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.51105
name: chr-F
- type: bleu
value: 18.0
name: BLEU
- type: chrf
value: 0.49203
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.55075
name: chr-F
- type: bleu
value: 21.9
name: BLEU
- type: chrf
value: 0.50667
name: chr-F
- type: bleu
value: 20.8
name: BLEU
- type: chrf
value: 0.49771
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.52333
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.51232
name: chr-F
- type: bleu
value: 32.4
name: BLEU
- type: chrf
value: 0.58218
name: chr-F
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.49182
name: chr-F
- type: bleu
value: 20.3
name: BLEU
- type: chrf
value: 0.46871
name: chr-F
- type: bleu
value: 23.6
name: BLEU
- type: chrf
value: 0.48975
name: chr-F
- type: bleu
value: 12.5
name: BLEU
- type: chrf
value: 0.42225
name: chr-F
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.51583
name: chr-F
- type: bleu
value: 15.1
name: BLEU
- type: chrf
value: 0.43088
name: chr-F
- type: bleu
value: 14.6
name: BLEU
- type: chrf
value: 0.42394
name: chr-F
- type: bleu
value: 17.7
name: BLEU
- type: chrf
value: 0.44945
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.52537
name: chr-F
- type: bleu
value: 35.8
name: BLEU
- type: chrf
value: 0.62757
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.54428
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.52919
name: chr-F
- type: bleu
value: 30.0
name: BLEU
- type: chrf
value: 0.56365
name: chr-F
- type: bleu
value: 11.6
name: BLEU
- type: chrf
value: 0.40783
name: chr-F
- type: bleu
value: 23.1
name: BLEU
- type: chrf
value: 0.51242
name: chr-F
- type: bleu
value: 14.5
name: BLEU
- type: chrf
value: 0.41414
name: chr-F
- type: bleu
value: 13.8
name: BLEU
- type: chrf
value: 0.41356
name: chr-F
- type: bleu
value: 17.0
name: BLEU
- type: chrf
value: 0.43667
name: chr-F
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.55633
name: chr-F
- type: bleu
value: 36.0
name: BLEU
- type: chrf
value: 0.63172
name: chr-F
- type: bleu
value: 27.1
name: BLEU
- type: chrf
value: 0.55161
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.54074
name: chr-F
- type: bleu
value: 31.7
name: BLEU
- type: chrf
value: 0.57106
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.52489
name: chr-F
- type: bleu
value: 41.6
name: BLEU
- type: chrf
value: 0.64889
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.53358
name: chr-F
- type: bleu
value: 24.7
name: BLEU
- type: chrf
value: 0.52089
name: chr-F
- type: bleu
value: 29.4
name: BLEU
- type: chrf
value: 0.54863
name: chr-F
- type: bleu
value: 25.5
name: BLEU
- type: chrf
value: 0.5465
name: chr-F
- type: bleu
value: 39.3
name: BLEU
- type: chrf
value: 0.64444
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.55024
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.53537
name: chr-F
- type: bleu
value: 31.4
name: BLEU
- type: chrf
value: 0.56899
name: chr-F
- type: bleu
value: 11.6
name: BLEU
- type: chrf
value: 0.40429
name: chr-F
- type: bleu
value: 20.6
name: BLEU
- type: chrf
value: 0.49942
name: chr-F
- type: bleu
value: 14.8
name: BLEU
- type: chrf
value: 0.4144
name: chr-F
- type: bleu
value: 13.1
name: BLEU
- type: chrf
value: 0.39925
name: chr-F
- type: bleu
value: 16.6
name: BLEU
- type: chrf
value: 0.4284
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.50884
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.55781
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.52511
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.50796
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.53122
name: chr-F
- type: bleu
value: 23.7
name: BLEU
- type: chrf
value: 0.54003
name: chr-F
- type: bleu
value: 37.6
name: BLEU
- type: chrf
value: 0.63798
name: chr-F
- type: bleu
value: 28.3
name: BLEU
- type: chrf
value: 0.56317
name: chr-F
- type: bleu
value: 33.9
name: BLEU
- type: chrf
value: 0.59244
name: chr-F
- type: bleu
value: 14.3
name: BLEU
- type: chrf
value: 0.44878
name: chr-F
- type: bleu
value: 24.2
name: BLEU
- type: chrf
value: 0.52855
name: chr-F
- type: bleu
value: 17.6
name: BLEU
- type: chrf
value: 0.46323
name: chr-F
- type: bleu
value: 16.9
name: BLEU
- type: chrf
value: 0.45211
name: chr-F
- type: bleu
value: 20.5
name: BLEU
- type: chrf
value: 0.47595
name: chr-F
- type: bleu
value: 13.0
name: BLEU
- type: chrf
value: 0.4063
name: chr-F
- type: bleu
value: 10.6
name: BLEU
- type: chrf
value: 0.37292
name: chr-F
- type: bleu
value: 10.0
name: BLEU
- type: chrf
value: 0.36366
name: chr-F
- type: bleu
value: 12.4
name: BLEU
- type: chrf
value: 0.38558
name: chr-F
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.52534
name: chr-F
- type: bleu
value: 32.2
name: BLEU
- type: chrf
value: 0.60733
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.55222
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.54549
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.57503
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.49519
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.55126
name: chr-F
- type: bleu
value: 22.8
name: BLEU
- type: chrf
value: 0.51684
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.49329
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.52316
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.52066
name: chr-F
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.6094
name: chr-F
- type: bleu
value: 25.8
name: BLEU
- type: chrf
value: 0.53303
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.51245
name: chr-F
- type: bleu
value: 28.3
name: BLEU
- type: chrf
value: 0.54489
name: chr-F
- type: bleu
value: 22.0
name: BLEU
- type: chrf
value: 0.52189
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.58552
name: chr-F
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.53247
name: chr-F
- type: bleu
value: 23.4
name: BLEU
- type: chrf
value: 0.51817
name: chr-F
- type: bleu
value: 27.7
name: BLEU
- type: chrf
value: 0.54582
name: chr-F
- type: bleu
value: 28.3
name: BLEU
- type: chrf
value: 0.56549
name: chr-F
- type: bleu
value: 28.5
name: BLEU
- type: chrf
value: 0.56372
name: chr-F
- type: bleu
value: 21.7
name: BLEU
- type: chrf
value: 0.52259
name: chr-F
- type: bleu
value: 36.2
name: BLEU
- type: chrf
value: 0.62439
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.54643
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.53857
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.56804
name: chr-F
- type: bleu
value: 18.6
name: BLEU
- type: chrf
value: 0.48837
name: chr-F
- type: bleu
value: 24.5
name: BLEU
- type: chrf
value: 0.54292
name: chr-F
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.48977
name: chr-F
- type: bleu
value: 20.5
name: BLEU
- type: chrf
value: 0.48429
name: chr-F
- type: bleu
value: 24.9
name: BLEU
- type: chrf
value: 0.51373
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.54871
name: chr-F
- type: bleu
value: 41.2
name: BLEU
- type: chrf
value: 0.65427
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.55294
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.53911
name: chr-F
- type: bleu
value: 31.9
name: BLEU
- type: chrf
value: 0.57293
name: chr-F
- type: bleu
value: 11.8
name: BLEU
- type: chrf
value: 0.40503
name: chr-F
- type: bleu
value: 16.4
name: BLEU
- type: chrf
value: 0.45221
name: chr-F
- type: bleu
value: 14.4
name: BLEU
- type: chrf
value: 0.4193
name: chr-F
- type: bleu
value: 12.7
name: BLEU
- type: chrf
value: 0.40576
name: chr-F
- type: bleu
value: 16.4
name: BLEU
- type: chrf
value: 0.43095
name: chr-F
- type: bleu
value: 18.5
name: BLEU
- type: chrf
value: 0.49644
name: chr-F
- type: bleu
value: 25.7
name: BLEU
- type: chrf
value: 0.55193
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.50914
name: chr-F
- type: bleu
value: 21.3
name: BLEU
- type: chrf
value: 0.49879
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.5264
name: chr-F
- type: bleu
value: 14.1
name: BLEU
- type: chrf
value: 0.43742
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.52486
name: chr-F
- type: bleu
value: 17.4
name: BLEU
- type: chrf
value: 0.45409
name: chr-F
- type: bleu
value: 14.6
name: BLEU
- type: chrf
value: 0.4266
name: chr-F
- type: bleu
value: 19.4
name: BLEU
- type: chrf
value: 0.46414
name: chr-F
- task:
type: translation
name: Translation afr-deu
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: afr-deu
metrics:
- type: bleu
value: 48.8
name: BLEU
- type: chrf
value: 0.68516
name: chr-F
- type: bleu
value: 60.8
name: BLEU
- type: chrf
value: 0.73535
name: chr-F
- type: bleu
value: 57.6
name: BLEU
- type: chrf
value: 0.72814
name: chr-F
- type: bleu
value: 42.4
name: BLEU
- type: chrf
value: 0.62154
name: chr-F
- type: bleu
value: 44.1
name: BLEU
- type: chrf
value: 0.65145
name: chr-F
- type: bleu
value: 44.8
name: BLEU
- type: chrf
value: 0.62648
name: chr-F
- type: bleu
value: 47.4
name: BLEU
- type: chrf
value: 0.66291
name: chr-F
- type: bleu
value: 46.5
name: BLEU
- type: chrf
value: 0.66644
name: chr-F
- type: bleu
value: 46.1
name: BLEU
- type: chrf
value: 0.62742
name: chr-F
- type: bleu
value: 62.5
name: BLEU
- type: chrf
value: 0.76603
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.47135
name: chr-F
- type: bleu
value: 49.1
name: BLEU
- type: chrf
value: 0.68593
name: chr-F
- type: bleu
value: 55.5
name: BLEU
- type: chrf
value: 0.6998
name: chr-F
- type: bleu
value: 52.4
name: BLEU
- type: chrf
value: 0.69233
name: chr-F
- type: bleu
value: 49.2
name: BLEU
- type: chrf
value: 0.66731
name: chr-F
- type: bleu
value: 45.7
name: BLEU
- type: chrf
value: 0.65296
name: chr-F
- type: bleu
value: 55.6
name: BLEU
- type: chrf
value: 0.70714
name: chr-F
- type: bleu
value: 53.7
name: BLEU
- type: chrf
value: 0.71112
name: chr-F
- type: bleu
value: 56.3
name: BLEU
- type: chrf
value: 0.74022
name: chr-F
- type: bleu
value: 74.0
name: BLEU
- type: chrf
value: 0.85238
name: chr-F
- type: bleu
value: 50.1
name: BLEU
- type: chrf
value: 0.68073
name: chr-F
- type: bleu
value: 53.6
name: BLEU
- type: chrf
value: 0.68902
name: chr-F
- type: bleu
value: 53.5
name: BLEU
- type: chrf
value: 0.70071
name: chr-F
- type: bleu
value: 52.5
name: BLEU
- type: chrf
value: 0.69957
name: chr-F
- type: bleu
value: 47.5
name: BLEU
- type: chrf
value: 0.65153
name: chr-F
- type: bleu
value: 53.7
name: BLEU
- type: chrf
value: 0.7232
name: chr-F
- type: bleu
value: 62.3
name: BLEU
- type: chrf
value: 0.75679
name: chr-F
- type: bleu
value: 61.8
name: BLEU
- type: chrf
value: 0.76077
name: chr-F
- type: bleu
value: 58.8
name: BLEU
- type: chrf
value: 0.7646
name: chr-F
- type: bleu
value: 53.8
name: BLEU
- type: chrf
value: 0.71685
name: chr-F
- type: bleu
value: 37.6
name: BLEU
- type: chrf
value: 0.60029
name: chr-F
- type: bleu
value: 48.4
name: BLEU
- type: chrf
value: 0.65647
name: chr-F
- type: bleu
value: 48.7
name: BLEU
- type: chrf
value: 0.66811
name: chr-F
- type: bleu
value: 42.2
name: BLEU
- type: chrf
value: 0.62766
name: chr-F
- type: bleu
value: 48.2
name: BLEU
- type: chrf
value: 0.67276
name: chr-F
- type: bleu
value: 34.5
name: BLEU
- type: chrf
value: 0.55993
name: chr-F
- type: bleu
value: 51.9
name: BLEU
- type: chrf
value: 0.68199
name: chr-F
- type: bleu
value: 63.6
name: BLEU
- type: chrf
value: 0.76316
name: chr-F
- type: bleu
value: 59.1
name: BLEU
- type: chrf
value: 0.74291
name: chr-F
- type: bleu
value: 50.0
name: BLEU
- type: chrf
value: 0.69593
name: chr-F
- type: bleu
value: 47.9
name: BLEU
- type: chrf
value: 0.64482
name: chr-F
- type: bleu
value: 39.7
name: BLEU
- type: chrf
value: 0.61606
name: chr-F
- type: bleu
value: 65.4
name: BLEU
- type: chrf
value: 0.82285
name: chr-F
- type: bleu
value: 49.4
name: BLEU
- type: chrf
value: 0.67435
name: chr-F
- type: bleu
value: 51.9
name: BLEU
- type: chrf
value: 0.70975
name: chr-F
- type: bleu
value: 53.9
name: BLEU
- type: chrf
value: 0.71497
name: chr-F
- type: bleu
value: 40.1
name: BLEU
- type: chrf
value: 0.55253
name: chr-F
- type: bleu
value: 34.2
name: BLEU
- type: chrf
value: 0.57907
name: chr-F
- type: bleu
value: 39.2
name: BLEU
- type: chrf
value: 0.5828
name: chr-F
- type: bleu
value: 35.7
name: BLEU
- type: chrf
value: 0.57554
name: chr-F
- type: bleu
value: 47.6
name: BLEU
- type: chrf
value: 0.67258
name: chr-F
- type: bleu
value: 56.3
name: BLEU
- type: chrf
value: 0.71355
name: chr-F
- type: bleu
value: 43.9
name: BLEU
- type: chrf
value: 0.63538
name: chr-F
- type: bleu
value: 50.7
name: BLEU
- type: chrf
value: 0.69703
name: chr-F
- type: bleu
value: 53.3
name: BLEU
- type: chrf
value: 0.71014
name: chr-F
- type: bleu
value: 37.9
name: BLEU
- type: chrf
value: 0.55802
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.44054
name: chr-F
- type: bleu
value: 23.0
name: BLEU
- type: chrf
value: 0.44549
name: chr-F
- type: bleu
value: 48.1
name: BLEU
- type: chrf
value: 0.63566
name: chr-F
- type: bleu
value: 54.1
name: BLEU
- type: chrf
value: 0.69249
name: chr-F
- type: bleu
value: 61.5
name: BLEU
- type: chrf
value: 0.76777
name: chr-F
- type: bleu
value: 68.8
name: BLEU
- type: chrf
value: 0.80359
name: chr-F
- type: bleu
value: 20.8
name: BLEU
- type: chrf
value: 0.37952
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.4836
name: chr-F
- type: bleu
value: 50.7
name: BLEU
- type: chrf
value: 0.68769
name: chr-F
- type: bleu
value: 54.9
name: BLEU
- type: chrf
value: 0.68956
name: chr-F
- type: bleu
value: 47.0
name: BLEU
- type: chrf
value: 0.66551
name: chr-F
- type: bleu
value: 53.4
name: BLEU
- type: chrf
value: 0.70241
name: chr-F
- type: bleu
value: 47.1
name: BLEU
- type: chrf
value: 0.64048
name: chr-F
- type: bleu
value: 48.9
name: BLEU
- type: chrf
value: 0.66676
name: chr-F
- type: bleu
value: 56.8
name: BLEU
- type: chrf
value: 0.71884
name: chr-F
- type: bleu
value: 42.3
name: BLEU
- type: chrf
value: 0.62438
name: chr-F
- type: bleu
value: 52.9
name: BLEU
- type: chrf
value: 0.68433
name: chr-F
- type: bleu
value: 40.9
name: BLEU
- type: chrf
value: 0.61176
name: chr-F
- type: bleu
value: 29.0
name: BLEU
- type: chrf
value: 0.50806
name: chr-F
- type: bleu
value: 47.4
name: BLEU
- type: chrf
value: 0.66238
name: chr-F
- type: bleu
value: 48.1
name: BLEU
- type: chrf
value: 0.64466
name: chr-F
- type: bleu
value: 42.5
name: BLEU
- type: chrf
value: 0.6198
name: chr-F
- type: bleu
value: 47.8
name: BLEU
- type: chrf
value: 0.67198
name: chr-F
- type: bleu
value: 68.3
name: BLEU
- type: chrf
value: 0.79538
name: chr-F
- type: bleu
value: 62.7
name: BLEU
- type: chrf
value: 0.7654
name: chr-F
- type: bleu
value: 54.1
name: BLEU
- type: chrf
value: 0.73006
name: chr-F
- type: bleu
value: 61.0
name: BLEU
- type: chrf
value: 0.76476
name: chr-F
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.38732
name: chr-F
- type: bleu
value: 22.8
name: BLEU
- type: chrf
value: 0.39058
name: chr-F
- type: bleu
value: 26.5
name: BLEU
- type: chrf
value: 0.47244
name: chr-F
- type: bleu
value: 26.7
name: BLEU
- type: chrf
value: 0.51096
name: chr-F
- type: bleu
value: 37.2
name: BLEU
- type: chrf
value: 0.53303
name: chr-F
- type: bleu
value: 42.3
name: BLEU
- type: chrf
value: 0.59686
name: chr-F
- type: bleu
value: 25.2
name: BLEU
- type: chrf
value: 0.42426
name: chr-F
- type: bleu
value: 23.5
name: BLEU
- type: chrf
value: 0.41822
name: chr-F
- type: bleu
value: 23.3
name: BLEU
- type: chrf
value: 0.44259
name: chr-F
- type: bleu
value: 55.0
name: BLEU
- type: chrf
value: 0.70077
name: chr-F
- type: bleu
value: 46.5
name: BLEU
- type: chrf
value: 0.6572
name: chr-F
- type: bleu
value: 57.3
name: BLEU
- type: chrf
value: 0.7163
name: chr-F
- type: bleu
value: 50.9
name: BLEU
- type: chrf
value: 0.67909
name: chr-F
- type: bleu
value: 47.0
name: BLEU
- type: chrf
value: 0.6342
name: chr-F
- type: bleu
value: 53.6
name: BLEU
- type: chrf
value: 0.64228
name: chr-F
- type: bleu
value: 47.0
name: BLEU
- type: chrf
value: 0.64526
name: chr-F
- type: bleu
value: 52.4
name: BLEU
- type: chrf
value: 0.66313
name: chr-F
- type: bleu
value: 55.7
name: BLEU
- type: chrf
value: 0.71066
name: chr-F
- type: bleu
value: 49.9
name: BLEU
- type: chrf
value: 0.67499
name: chr-F
- type: bleu
value: 47.6
name: BLEU
- type: chrf
value: 0.66221
name: chr-F
- type: bleu
value: 44.4
name: BLEU
- type: chrf
value: 0.6148
name: chr-F
- type: bleu
value: 45.9
name: BLEU
- type: chrf
value: 0.61459
name: chr-F
- type: bleu
value: 41.8
name: BLEU
- type: chrf
value: 0.60646
name: chr-F
- type: bleu
value: 44.6
name: BLEU
- type: chrf
value: 0.63982
name: chr-F
- type: bleu
value: 54.8
name: BLEU
- type: chrf
value: 0.72111
name: chr-F
- type: bleu
value: 59.3
name: BLEU
- type: chrf
value: 0.73199
name: chr-F
- type: bleu
value: 46.7
name: BLEU
- type: chrf
value: 0.67269
name: chr-F
- type: bleu
value: 48.9
name: BLEU
- type: chrf
value: 0.68204
name: chr-F
- type: bleu
value: 51.0
name: BLEU
- type: chrf
value: 0.69314
name: chr-F
- type: bleu
value: 55.8
name: BLEU
- type: chrf
value: 0.6923
name: chr-F
- type: bleu
value: 48.8
name: BLEU
- type: chrf
value: 0.68483
name: chr-F
- type: bleu
value: 57.4
name: BLEU
- type: chrf
value: 0.71685
name: chr-F
- type: bleu
value: 52.6
name: BLEU
- type: chrf
value: 0.70312
name: chr-F
- type: bleu
value: 56.2
name: BLEU
- type: chrf
value: 0.7388
name: chr-F
- type: bleu
value: 48.9
name: BLEU
- type: chrf
value: 0.68518
name: chr-F
- type: bleu
value: 57.3
name: BLEU
- type: chrf
value: 0.71465
name: chr-F
- type: bleu
value: 55.2
name: BLEU
- type: chrf
value: 0.71415
name: chr-F
- type: bleu
value: 45.8
name: BLEU
- type: chrf
value: 0.67705
name: chr-F
- type: bleu
value: 56.0
name: BLEU
- type: chrf
value: 0.73721
name: chr-F
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.41564
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.47832
name: chr-F
- type: bleu
value: 39.7
name: BLEU
- type: chrf
value: 0.58486
name: chr-F
- type: bleu
value: 20.2
name: BLEU
- type: chrf
value: 0.39772
name: chr-F
- type: bleu
value: 47.9
name: BLEU
- type: chrf
value: 0.66592
name: chr-F
- type: bleu
value: 51.8
name: BLEU
- type: chrf
value: 0.6768
name: chr-F
- type: bleu
value: 47.7
name: BLEU
- type: chrf
value: 0.65788
name: chr-F
- type: bleu
value: 43.1
name: BLEU
- type: chrf
value: 0.64124
name: chr-F
- type: bleu
value: 46.9
name: BLEU
- type: chrf
value: 0.65488
name: chr-F
- type: bleu
value: 46.8
name: BLEU
- type: chrf
value: 0.66941
name: chr-F
- type: bleu
value: 62.4
name: BLEU
- type: chrf
value: 0.75755
name: chr-F
- type: bleu
value: 58.6
name: BLEU
- type: chrf
value: 0.74773
name: chr-F
- type: bleu
value: 51.8
name: BLEU
- type: chrf
value: 0.72256
name: chr-F
- type: bleu
value: 63.6
name: BLEU
- type: chrf
value: 0.78598
name: chr-F
- type: bleu
value: 49.1
name: BLEU
- type: chrf
value: 0.67249
name: chr-F
- type: bleu
value: 57.3
name: BLEU
- type: chrf
value: 0.7174
name: chr-F
- type: bleu
value: 53.0
name: BLEU
- type: chrf
value: 0.69777
name: chr-F
- type: bleu
value: 53.5
name: BLEU
- type: chrf
value: 0.72413
name: chr-F
- type: bleu
value: 56.3
name: BLEU
- type: chrf
value: 0.7296
name: chr-F
- type: bleu
value: 48.2
name: BLEU
- type: chrf
value: 0.67364
name: chr-F
- type: bleu
value: 53.7
name: BLEU
- type: chrf
value: 0.68851
name: chr-F
- type: bleu
value: 49.1
name: BLEU
- type: chrf
value: 0.66299
name: chr-F
- type: bleu
value: 43.4
name: BLEU
- type: chrf
value: 0.64106
name: chr-F
- type: bleu
value: 49.1
name: BLEU
- type: chrf
value: 0.6761
name: chr-F
- type: bleu
value: 55.2
name: BLEU
- type: chrf
value: 0.72746
name: chr-F
- type: bleu
value: 55.4
name: BLEU
- type: chrf
value: 0.7058
name: chr-F
- type: bleu
value: 43.0
name: BLEU
- type: chrf
value: 0.61642
name: chr-F
- type: bleu
value: 47.0
name: BLEU
- type: chrf
value: 0.66185
name: chr-F
- type: bleu
value: 56.5
name: BLEU
- type: chrf
value: 0.71252
name: chr-F
- type: bleu
value: 52.0
name: BLEU
- type: chrf
value: 0.65934
name: chr-F
- type: bleu
value: 53.5
name: BLEU
- type: chrf
value: 0.70356
name: chr-F
- type: bleu
value: 62.7
name: BLEU
- type: chrf
value: 0.74751
name: chr-F
- type: bleu
value: 56.7
name: BLEU
- type: chrf
value: 0.71714
name: chr-F
- type: bleu
value: 48.7
name: BLEU
- type: chrf
value: 0.68849
name: chr-F
- type: bleu
value: 53.3
name: BLEU
- type: chrf
value: 0.7016
name: chr-F
- type: bleu
value: 50.8
name: BLEU
- type: chrf
value: 0.68602
name: chr-F
- type: bleu
value: 52.4
name: BLEU
- type: chrf
value: 0.68162
name: chr-F
- type: bleu
value: 48.4
name: BLEU
- type: chrf
value: 0.66118
name: chr-F
- type: bleu
value: 46.6
name: BLEU
- type: chrf
value: 0.65923
name: chr-F
- type: bleu
value: 49.7
name: BLEU
- type: chrf
value: 0.67601
name: chr-F
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.52376
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.44187
name: chr-F
- type: bleu
value: 20.2
name: BLEU
- type: chrf
value: 0.4341
name: chr-F
- task:
type: translation
name: Translation ben-eng
dataset:
name: tico19-test
type: tico19-test
args: ben-eng
metrics:
- type: bleu
value: 27.3
name: BLEU
- type: chrf
value: 0.55418
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.45176
name: chr-F
- type: bleu
value: 20.9
name: BLEU
- type: chrf
value: 0.49778
name: chr-F
- type: bleu
value: 25.8
name: BLEU
- type: chrf
value: 0.51344
name: chr-F
- type: bleu
value: 15.0
name: BLEU
- type: chrf
value: 0.39153
name: chr-F
- type: bleu
value: 12.4
name: BLEU
- type: chrf
value: 0.35348
name: chr-F
- type: bleu
value: 13.1
name: BLEU
- type: chrf
value: 0.36879
name: chr-F
- type: bleu
value: 14.7
name: BLEU
- type: chrf
value: 0.38526
name: chr-F
- type: bleu
value: 38.2
name: BLEU
- type: chrf
value: 0.62001
name: chr-F
- type: bleu
value: 48.3
name: BLEU
- type: chrf
value: 0.71654
name: chr-F
- type: bleu
value: 50.2
name: BLEU
- type: chrf
value: 0.71947
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.58617
name: chr-F
- type: bleu
value: 23.9
name: BLEU
- type: chrf
value: 0.50453
name: chr-F
- type: bleu
value: 28.1
name: BLEU
- type: chrf
value: 0.55031
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.56113
name: chr-F
- type: bleu
value: 35.8
name: BLEU
- type: chrf
value: 0.60512
name: chr-F
- type: bleu
value: 33.0
name: BLEU
- type: chrf
value: 0.5753
name: chr-F
- type: bleu
value: 35.6
name: BLEU
- type: chrf
value: 0.58823
name: chr-F
- type: bleu
value: 39.6
name: BLEU
- type: chrf
value: 0.64146
name: chr-F
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.51582
name: chr-F
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.57182
name: chr-F
- type: bleu
value: 33.7
name: BLEU
- type: chrf
value: 0.58341
name: chr-F
- type: bleu
value: 21.4
name: BLEU
- type: chrf
value: 0.51194
name: chr-F
- type: bleu
value: 16.8
name: BLEU
- type: chrf
value: 0.43359
name: chr-F
- type: bleu
value: 20.3
name: BLEU
- type: chrf
value: 0.47089
name: chr-F
- type: bleu
value: 22.8
name: BLEU
- type: chrf
value: 0.48435
name: chr-F
- type: bleu
value: 30.1
name: BLEU
- type: chrf
value: 0.5706
name: chr-F
- type: bleu
value: 19.7
name: BLEU
- type: chrf
value: 0.46212
name: chr-F
- type: bleu
value: 24.0
name: BLEU
- type: chrf
value: 0.51024
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.51651
name: chr-F
- type: bleu
value: 47.4
name: BLEU
- type: chrf
value: 0.72228
name: chr-F
- type: bleu
value: 33.4
name: BLEU
- type: chrf
value: 0.58934
name: chr-F
- type: bleu
value: 44.1
name: BLEU
- type: chrf
value: 0.67509
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.54979
name: chr-F
- type: bleu
value: 21.0
name: BLEU
- type: chrf
value: 0.47627
name: chr-F
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.52
name: chr-F
- type: bleu
value: 28.5
name: BLEU
- type: chrf
value: 0.54172
name: chr-F
- type: bleu
value: 23.1
name: BLEU
- type: chrf
value: 0.48655
name: chr-F
- type: bleu
value: 16.2
name: BLEU
- type: chrf
value: 0.4098
name: chr-F
- type: bleu
value: 19.5
name: BLEU
- type: chrf
value: 0.44879
name: chr-F
- type: bleu
value: 20.4
name: BLEU
- type: chrf
value: 0.4528
name: chr-F
- type: bleu
value: 30.4
name: BLEU
- type: chrf
value: 0.59787
name: chr-F
- type: bleu
value: 24.1
name: BLEU
- type: chrf
value: 0.52211
name: chr-F
- type: bleu
value: 26.9
name: BLEU
- type: chrf
value: 0.56473
name: chr-F
- type: bleu
value: 31.1
name: BLEU
- type: chrf
value: 0.58626
name: chr-F
- type: bleu
value: 33.1
name: BLEU
- type: chrf
value: 0.59078
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.51957
name: chr-F
- type: bleu
value: 17.2
name: BLEU
- type: chrf
value: 0.43707
name: chr-F
- type: bleu
value: 20.1
name: BLEU
- type: chrf
value: 0.47484
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.48812
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2008
type: wmt-2008-news
args: ces-deu
metrics:
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.5245
name: chr-F
- type: bleu
value: 24.9
name: BLEU
- type: chrf
value: 0.52805
name: chr-F
- type: bleu
value: 25.4
name: BLEU
- type: chrf
value: 0.54135
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.53925
name: chr-F
- type: bleu
value: 26.2
name: BLEU
- type: chrf
value: 0.53756
name: chr-F
- type: bleu
value: 25.5
name: BLEU
- type: chrf
value: 0.54147
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.53296
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.52399
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.54809
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.56027
name: chr-F
- type: bleu
value: 21.8
name: BLEU
- type: chrf
value: 0.52211
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.53878
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.58122
name: chr-F
- type: bleu
value: 20.9
name: BLEU
- type: chrf
value: 0.51468
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2009
type: wmt-2009-news
args: ces-deu
metrics:
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.52537
name: chr-F
- type: bleu
value: 27.1
name: BLEU
- type: chrf
value: 0.54467
name: chr-F
- type: bleu
value: 26.1
name: BLEU
- type: chrf
value: 0.54545
name: chr-F
- type: bleu
value: 26.3
name: BLEU
- type: chrf
value: 0.54339
name: chr-F
- type: bleu
value: 25.9
name: BLEU
- type: chrf
value: 0.53323
name: chr-F
- type: bleu
value: 25.0
name: BLEU
- type: chrf
value: 0.53408
name: chr-F
- type: bleu
value: 24.4
name: BLEU
- type: chrf
value: 0.52999
name: chr-F
- type: bleu
value: 21.5
name: BLEU
- type: chrf
value: 0.52387
name: chr-F
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.57057
name: chr-F
- type: bleu
value: 29.6
name: BLEU
- type: chrf
value: 0.57376
name: chr-F
- type: bleu
value: 21.6
name: BLEU
- type: chrf
value: 0.5198
name: chr-F
- type: bleu
value: 29.5
name: BLEU
- type: chrf
value: 0.56151
name: chr-F
- type: bleu
value: 31.4
name: BLEU
- type: chrf
value: 0.58173
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.52409
name: chr-F
- type: bleu
value: 32.9
name: BLEU
- type: chrf
value: 0.58598
name: chr-F
- type: bleu
value: 31.5
name: BLEU
- type: chrf
value: 0.58722
name: chr-F
- type: bleu
value: 33.1
name: BLEU
- type: chrf
value: 0.59235
name: chr-F
- type: bleu
value: 20.7
name: BLEU
- type: chrf
value: 0.51708
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.56094
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2010
type: wmt-2010-news
args: ces-deu
metrics:
- type: bleu
value: 23.5
name: BLEU
- type: chrf
value: 0.53608
name: chr-F
- type: bleu
value: 28.8
name: BLEU
- type: chrf
value: 0.56348
name: chr-F
- type: bleu
value: 27.2
name: BLEU
- type: chrf
value: 0.5551
name: chr-F
- type: bleu
value: 30.6
name: BLEU
- type: chrf
value: 0.57375
name: chr-F
- type: bleu
value: 29.8
name: BLEU
- type: chrf
value: 0.57666
name: chr-F
- type: bleu
value: 28.2
name: BLEU
- type: chrf
value: 0.56822
name: chr-F
- type: bleu
value: 31.5
name: BLEU
- type: chrf
value: 0.58446
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.54037
name: chr-F
- type: bleu
value: 31.2
name: BLEU
- type: chrf
value: 0.58935
name: chr-F
- type: bleu
value: 35.6
name: BLEU
- type: chrf
value: 0.6123
name: chr-F
- type: bleu
value: 23.2
name: BLEU
- type: chrf
value: 0.52993
name: chr-F
- type: bleu
value: 31.7
name: BLEU
- type: chrf
value: 0.5858
name: chr-F
- type: bleu
value: 36.8
name: BLEU
- type: chrf
value: 0.61883
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.54232
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2011
type: wmt-2011-news
args: ces-deu
metrics:
- type: bleu
value: 22.2
name: BLEU
- type: chrf
value: 0.52042
name: chr-F
- type: bleu
value: 27.8
name: BLEU
- type: chrf
value: 0.5538
name: chr-F
- type: bleu
value: 28.0
name: BLEU
- type: chrf
value: 0.55651
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.56004
name: chr-F
- type: bleu
value: 25.8
name: BLEU
- type: chrf
value: 0.54263
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.54883
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.55738
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.52251
name: chr-F
- type: bleu
value: 33.3
name: BLEU
- type: chrf
value: 0.60292
name: chr-F
- type: bleu
value: 37.6
name: BLEU
- type: chrf
value: 0.61355
name: chr-F
- type: bleu
value: 22.1
name: BLEU
- type: chrf
value: 0.52082
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.58971
name: chr-F
- type: bleu
value: 38.7
name: BLEU
- type: chrf
value: 0.62318
name: chr-F
- type: bleu
value: 34.0
name: BLEU
- type: chrf
value: 0.60467
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2012
type: wmt-2012-news
args: ces-deu
metrics:
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.52126
name: chr-F
- type: bleu
value: 27.0
name: BLEU
- type: chrf
value: 0.5498
name: chr-F
- type: bleu
value: 26.8
name: BLEU
- type: chrf
value: 0.55088
name: chr-F
- type: bleu
value: 29.9
name: BLEU
- type: chrf
value: 0.5595
name: chr-F
- type: bleu
value: 27.5
name: BLEU
- type: chrf
value: 0.55507
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.5516
name: chr-F
- type: bleu
value: 30.1
name: BLEU
- type: chrf
value: 0.56307
name: chr-F
- type: bleu
value: 22.9
name: BLEU
- type: chrf
value: 0.52121
name: chr-F
- type: bleu
value: 30.8
name: BLEU
- type: chrf
value: 0.58675
name: chr-F
- type: bleu
value: 37.9
name: BLEU
- type: chrf
value: 0.61689
name: chr-F
- type: bleu
value: 23.2
name: BLEU
- type: chrf
value: 0.52009
name: chr-F
- type: bleu
value: 32.3
name: BLEU
- type: chrf
value: 0.58405
name: chr-F
- type: bleu
value: 38.5
name: BLEU
- type: chrf
value: 0.62038
name: chr-F
- type: bleu
value: 18.3
name: BLEU
- type: chrf
value: 0.47965
name: chr-F
- type: bleu
value: 36.1
name: BLEU
- type: chrf
value: 0.61258
name: chr-F
- type: bleu
value: 24.2
name: BLEU
- type: chrf
value: 0.52674
name: chr-F
- type: bleu
value: 27.4
name: BLEU
- type: chrf
value: 0.5376
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2013
type: wmt-2013-news
args: ces-deu
metrics:
- type: bleu
value: 25.3
name: BLEU
- type: chrf
value: 0.54483
name: chr-F
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.57212
name: chr-F
- type: bleu
value: 28.4
name: BLEU
- type: chrf
value: 0.55258
name: chr-F
- type: bleu
value: 30.6
name: BLEU
- type: chrf
value: 0.56179
name: chr-F
- type: bleu
value: 31.0
name: BLEU
- type: chrf
value: 0.57382
name: chr-F
- type: bleu
value: 28.8
name: BLEU
- type: chrf
value: 0.55576
name: chr-F
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.5622
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.5483
name: chr-F
- type: bleu
value: 32.6
name: BLEU
- type: chrf
value: 0.58195
name: chr-F
- type: bleu
value: 34.6
name: BLEU
- type: chrf
value: 0.59254
name: chr-F
- type: bleu
value: 24.6
name: BLEU
- type: chrf
value: 0.53465
name: chr-F
- type: bleu
value: 32.9
name: BLEU
- type: chrf
value: 0.58395
name: chr-F
- type: bleu
value: 34.1
name: BLEU
- type: chrf
value: 0.58748
name: chr-F
- type: bleu
value: 22.4
name: BLEU
- type: chrf
value: 0.5198
name: chr-F
- type: bleu
value: 28.9
name: BLEU
- type: chrf
value: 0.55557
name: chr-F
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.54627
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.5554
name: chr-F
- type: bleu
value: 24.8
name: BLEU
- type: chrf
value: 0.53925
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2014
type: wmt-2014-news
args: ces-eng
metrics:
- type: bleu
value: 33.9
name: BLEU
- type: chrf
value: 0.61449
name: chr-F
- type: bleu
value: 32.1
name: BLEU
- type: chrf
value: 0.58733
name: chr-F
- type: bleu
value: 26.5
name: BLEU
- type: chrf
value: 0.57701
name: chr-F
- type: bleu
value: 38.1
name: BLEU
- type: chrf
value: 0.63976
name: chr-F
- type: bleu
value: 36.8
name: BLEU
- type: chrf
value: 0.62627
name: chr-F
- type: bleu
value: 26.4
name: BLEU
- type: chrf
value: 0.56343
name: chr-F
- type: bleu
value: 36.6
name: BLEU
- type: chrf
value: 0.62633
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2015
type: wmt-2015-news
args: ces-eng
metrics:
- type: bleu
value: 30.7
name: BLEU
- type: chrf
value: 0.56562
name: chr-F
- type: bleu
value: 33.3
name: BLEU
- type: chrf
value: 0.59036
name: chr-F
- type: bleu
value: 30.1
name: BLEU
- type: chrf
value: 0.58604
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.58794
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2016
type: wmt-2016-news
args: ces-eng
metrics:
- type: bleu
value: 32.6
name: BLEU
- type: chrf
value: 0.58896
name: chr-F
- type: bleu
value: 39.4
name: BLEU
- type: chrf
value: 0.63945
name: chr-F
- type: bleu
value: 35.9
name: BLEU
- type: chrf
value: 0.62731
name: chr-F
- type: bleu
value: 38.1
name: BLEU
- type: chrf
value: 0.63051
name: chr-F
- type: bleu
value: 32.5
name: BLEU
- type: chrf
value: 0.58858
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2017
type: wmt-2017-news
args: ces-eng
metrics:
- type: bleu
value: 29.0
name: BLEU
- type: chrf
value: 0.55759
name: chr-F
- type: bleu
value: 34.8
name: BLEU
- type: chrf
value: 0.60252
name: chr-F
- type: bleu
value: 28.7
name: BLEU
- type: chrf
value: 0.57779
name: chr-F
- type: bleu
value: 20.2
name: BLEU
- type: chrf
value: 0.51103
name: chr-F
- type: bleu
value: 36.1
name: BLEU
- type: chrf
value: 0.61663
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2018
type: wmt-2018-news
args: ces-eng
metrics:
- type: bleu
value: 29.6
name: BLEU
- type: chrf
value: 0.56663
name: chr-F
- type: bleu
value: 41.8
name: BLEU
- type: chrf
value: 0.65768
name: chr-F
- type: bleu
value: 43.5
name: BLEU
- type: chrf
value: 0.6759
name: chr-F
- type: bleu
value: 31.5
name: BLEU
- type: chrf
value: 0.58427
name: chr-F
- task:
type: translation
name: Translation ces-deu
dataset:
name: newstest2019
type: wmt-2019-news
args: ces-deu
metrics:
- type: bleu
value: 23.8
name: BLEU
- type: chrf
value: 0.53405
name: chr-F
- type: bleu
value: 37.7
name: BLEU
- type: chrf
value: 0.62158
name: chr-F
- type: bleu
value: 34.4
name: BLEU
- type: chrf
value: 0.61819
name: chr-F
- type: bleu
value: 39.8
name: BLEU
- type: chrf
value: 0.6464
name: chr-F
- type: bleu
value: 27.6
name: BLEU
- type: chrf
value: 0.59291
name: chr-F
- type: bleu
value: 22.5
name: BLEU
- type: chrf
value: 0.51165
name: chr-F
- type: bleu
value: 29.1
name: BLEU
- type: chrf
value: 0.58019
name: chr-F
- type: bleu
value: 37.8
name: BLEU
- type: chrf
value: 0.62499
name: chr-F
- task:
type: translation
name: Translation deu-eng
dataset:
name: newstest2020
type: wmt-2020-news
args: deu-eng
metrics:
- type: bleu
value: 30.9
name: BLEU
- type: chrf
value: 0.56495
name: chr-F
- type: bleu
value: 31.6
name: BLEU
- type: chrf
value: 0.59211
name: chr-F
- type: bleu
value: 30.2
name: BLEU
- type: chrf
value: 0.58436
name: chr-F
- type: bleu
value: 26.6
name: BLEU
- type: chrf
value: 0.59478
name: chr-F
- type: bleu
value: 27.7
name: BLEU
- type: chrf
value: 0.56674
name: chr-F
- type: bleu
value: 10.8
name: BLEU
- type: chrf
value: 0.37276
name: chr-F
- type: bleu
value: 33.6
name: BLEU
- type: chrf
value: 0.62387
name: chr-F
- task:
type: translation
name: Translation ces-eng
dataset:
name: newstest2021
type: wmt-2021-news
args: ces-eng
metrics:
- type: bleu
value: 25.6
name: BLEU
- type: chrf
value: 0.54943
name: chr-F
- type: bleu
value: 30.5
name: BLEU
- type: chrf
value: 0.58675
name: chr-F
- type: bleu
value: 30.0
name: BLEU
- type: chrf
value: 0.5769
name: chr-F
- type: bleu
value: 24.9
name: BLEU
- type: chrf
value: 0.55381
name: chr-F
- type: bleu
value: 37.2
name: BLEU
- type: chrf
value: 0.63942
name: chr-F
- type: bleu
value: 29.2
name: BLEU
- type: chrf
value: 0.53701
name: chr-F
- type: bleu
value: 33.7
name: BLEU
- type: chrf
value: 0.6076
name: chr-F
---
# opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Indo-European languages (ine) to German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models are originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and then converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release:** 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): acf afr aln ang anp arg asm ast awa bal bar bel ben bho bis bos bpy bre bul bzj cat cbk ces chu ckb cnr cor cos crs csb cym dan deu diq div djk drt dsb dty egl ell eng enm ext fao fas fra frm fro frp frr fry fur gbm gcf gla gle glg glk glv gos got grc gsw guj hat hbs hif hin hne hns hrv hrx hsb hwc hye hyw icr isl ita jam jdt kas kea kmr kok kri ksh kur lad lah lat lav lij lim lit lld lmo lou lrc ltz mag mai mar mfe mkd mol mwl mzn nap nds nep nld nno nob non nor npi oci ofs ori orv osp oss pal pan pap pcd pcm pdc pes pfl pih pis pli pms pnt pol por prg prs pus rhg rmy roh rom ron rop rue rup rus san scn sco sdh sgs sin skr slk slv snd spa sqi srd srm srn srp stq swe swg syl szl tcs tgk tly tpi ukr urd vec vls wae wln xcl yid zea zza
- Target Language(s): deu eng fra por spa
- Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/ine-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID), e.g. `>>deu<<`.
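The valid labels can also be read directly from the tokenizer at runtime. A minimal sketch (assuming the Hub ID used in the pipeline example below; `supported_language_codes` is part of the MarianTokenizer API in recent transformers versions):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained(
    "Helsinki-NLP/opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa"
)

# Lists every >>id<< label present in the vocabulary,
# e.g. >>deu<<, >>eng<<, >>fra<<, >>por<<, >>spa<<.
print(tokenizer.supported_language_codes)
```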
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short code example:
```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>deu<< Replace this with text in an accepted source language.",
    ">>spa<< This is the second sentence."
]

# The original card points at a local checkout ("pytorch-models/...");
# the model is published on the Hub under the ID used below.
model_name = "Helsinki-NLP/opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize the batch (padding aligns the two sentences) and generate translations.
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa")
print(pipe(">>deu<< Replace this with text in an accepted source language."))
```
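Because the target language is selected by the sentence-initial label, the same source sentence can be translated into all five targets by swapping the token. A minimal sketch (the helper `translate_to` is our illustration, not part of the transformers API):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-ine-deu_eng_fra_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate_to(text, target_langs=("deu", "eng", "fra", "por", "spa")):
    # Prepend one target-language label per requested target to the same source sentence.
    batch = [f">>{lang}<< {text}" for lang in target_langs]
    inputs = tokenizer(batch, return_tensors="pt", padding=True)
    outputs = model.generate(**inputs)
    return {
        lang: tokenizer.decode(out, skip_special_tokens=True)
        for lang, out in zip(target_langs, outputs)
    }

# Dutch (nld) is one of the accepted source languages.
print(translate_to("Dit is een zin in het Nederlands."))
```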
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/ine-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
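The BLEU and chr-F columns below are sacrebleu scores (chr-F appears to be reported on a 0–1 scale, i.e. sacrebleu's chrF value divided by 100). A minimal sketch of recomputing one row from aligned hypothesis and reference files (the file names are placeholders):

```python
import sacrebleu

# Placeholder paths: one detokenized sentence per line, hypotheses aligned with references.
with open("hyp.txt", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("ref.txt", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # corresponds to the BLEU column
chrf = sacrebleu.corpus_chrf(hyps, [refs])   # corresponds to the chr-F column (x 100)
print(f"BLEU = {bleu.score:.1f}  chr-F = {chrf.score / 100:.5f}")
```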
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| afr-deu | tatoeba-test-v2021-08-07 | 0.68516 | 48.8 | 1583 | 9105 |
| afr-eng | tatoeba-test-v2021-08-07 | 0.73535 | 60.8 | 1374 | 9622 |
| afr-spa | tatoeba-test-v2021-08-07 | 0.72814 | 57.6 | 448 | 2783 |
| awa-eng | tatoeba-test-v2021-08-07 | 0.62154 | 42.4 | 279 | 1335 |
| bel-deu | tatoeba-test-v2021-08-07 | 0.65145 | 44.1 | 551 | 4182 |
| bel-eng | tatoeba-test-v2021-08-07 | 0.62648 | 44.8 | 2500 | 18571 |
| bel-fra | tatoeba-test-v2021-08-07 | 0.66291 | 47.4 | 283 | 2005 |
| bel-spa | tatoeba-test-v2021-08-07 | 0.66644 | 46.5 | 205 | 1412 |
| ben-eng | tatoeba-test-v2021-08-07 | 0.62742 | 46.1 | 2500 | 13978 |
| bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.76603 | 62.5 | 301 | 1826 |
| bre-eng | tatoeba-test-v2021-08-07 | 0.47135 | 26.2 | 383 | 2065 |
| bul-deu | tatoeba-test-v2021-08-07 | 0.68593 | 49.1 | 314 | 2224 |
| bul-eng | tatoeba-test-v2021-08-07 | 0.69980 | 55.5 | 10000 | 71872 |
| bul-fra | tatoeba-test-v2021-08-07 | 0.69233 | 52.4 | 446 | 3669 |
| bul-spa | tatoeba-test-v2021-08-07 | 0.66731 | 49.2 | 286 | 1783 |
| cat-deu | tatoeba-test-v2021-08-07 | 0.65296 | 45.7 | 723 | 5676 |
| cat-eng | tatoeba-test-v2021-08-07 | 0.70714 | 55.6 | 1631 | 12627 |
| cat-fra | tatoeba-test-v2021-08-07 | 0.71112 | 53.7 | 700 | 5664 |
| cat-por | tatoeba-test-v2021-08-07 | 0.74022 | 56.3 | 747 | 6119 |
| cat-spa | tatoeba-test-v2021-08-07 | 0.85238 | 74.0 | 1534 | 12094 |
| ces-deu | tatoeba-test-v2021-08-07 | 0.68073 | 50.1 | 3490 | 27155 |
| ces-eng | tatoeba-test-v2021-08-07 | 0.68902 | 53.6 | 13824 | 105010 |
| ces-fra | tatoeba-test-v2021-08-07 | 0.70071 | 53.5 | 438 | 3346 |
| ces-spa | tatoeba-test-v2021-08-07 | 0.69957 | 52.5 | 1807 | 12716 |
| cym-eng | tatoeba-test-v2021-08-07 | 0.65153 | 47.5 | 818 | 5563 |
| dan-deu | tatoeba-test-v2021-08-07 | 0.72320 | 53.7 | 9998 | 76055 |
| dan-eng | tatoeba-test-v2021-08-07 | 0.75679 | 62.3 | 10795 | 79684 |
| dan-fra | tatoeba-test-v2021-08-07 | 0.76077 | 61.8 | 1731 | 11882 |
| dan-por | tatoeba-test-v2021-08-07 | 0.76460 | 58.8 | 873 | 5360 |
| dan-spa | tatoeba-test-v2021-08-07 | 0.71685 | 53.8 | 5000 | 35528 |
| deu-deu | tatoeba-test-v2021-08-07 | 0.60029 | 37.6 | 2500 | 20806 |
| deu-eng | tatoeba-test-v2021-08-07 | 0.65647 | 48.4 | 17565 | 149462 |
| deu-fra | tatoeba-test-v2021-08-07 | 0.66811 | 48.7 | 12418 | 102721 |
| deu-por | tatoeba-test-v2021-08-07 | 0.62766 | 42.2 | 10000 | 81482 |
| deu-spa | tatoeba-test-v2021-08-07 | 0.67276 | 48.2 | 10521 | 82570 |
| dsb-deu | tatoeba-test-v2021-08-07 | 0.55993 | 34.5 | 640 | 4469 |
| ell-deu | tatoeba-test-v2021-08-07 | 0.68199 | 51.9 | 2500 | 17025 |
| ell-eng | tatoeba-test-v2021-08-07 | 0.76316 | 63.6 | 10899 | 68682 |
| ell-fra | tatoeba-test-v2021-08-07 | 0.74291 | 59.1 | 1506 | 9726 |
| ell-por | tatoeba-test-v2021-08-07 | 0.69593 | 50.0 | 885 | 5196 |
| ell-spa | tatoeba-test-v2021-08-07 | 0.64482 | 47.9 | 1829 | 10828 |
| eng-deu | tatoeba-test-v2021-08-07 | 0.61606 | 39.7 | 17565 | 151568 |
| eng-eng | tatoeba-test-v2021-08-07 | 0.82285 | 65.4 | 12062 | 115106 |
| eng-fra | tatoeba-test-v2021-08-07 | 0.67435 | 49.4 | 12681 | 106378 |
| eng-por | tatoeba-test-v2021-08-07 | 0.70975 | 51.9 | 13222 | 105265 |
| eng-spa | tatoeba-test-v2021-08-07 | 0.71497 | 53.9 | 16583 | 134710 |
| fao-eng | tatoeba-test-v2021-08-07 | 0.55253 | 40.1 | 294 | 1984 |
| fas-deu | tatoeba-test-v2021-08-07 | 0.57907 | 34.2 | 3185 | 25590 |
| fas-eng | tatoeba-test-v2021-08-07 | 0.58280 | 39.2 | 3762 | 31480 |
| fas-fra | tatoeba-test-v2021-08-07 | 0.57554 | 35.7 | 376 | 3377 |
| fra-deu | tatoeba-test-v2021-08-07 | 0.67258 | 47.6 | 12418 | 100545 |
| fra-eng | tatoeba-test-v2021-08-07 | 0.71355 | 56.3 | 12681 | 101754 |
| fra-fra | tatoeba-test-v2021-08-07 | 0.63538 | 43.9 | 1000 | 7757 |
| fra-por | tatoeba-test-v2021-08-07 | 0.69703 | 50.7 | 10518 | 77650 |
| fra-spa | tatoeba-test-v2021-08-07 | 0.71014 | 53.3 | 10294 | 78406 |
| fry-eng | tatoeba-test-v2021-08-07 | 0.55802 | 37.9 | 220 | 1573 |
| gla-eng | tatoeba-test-v2021-08-07 | 0.44054 | 27.0 | 955 | 6611 |
| gla-spa | tatoeba-test-v2021-08-07 | 0.44549 | 23.0 | 289 | 1608 |
| gle-eng | tatoeba-test-v2021-08-07 | 0.63566 | 48.1 | 1913 | 11190 |
| glg-eng | tatoeba-test-v2021-08-07 | 0.69249 | 54.1 | 1015 | 8421 |
| glg-por | tatoeba-test-v2021-08-07 | 0.76777 | 61.5 | 433 | 3105 |
| glg-spa | tatoeba-test-v2021-08-07 | 0.80359 | 68.8 | 2121 | 17443 |
| gos-deu | tatoeba-test-v2021-08-07 | 0.44004 | 17.2 | 207 | 1168 |
| gos-eng | tatoeba-test-v2021-08-07 | 0.37952 | 20.8 | 1154 | 5635 |
| gsw-eng | tatoeba-test-v2021-08-07 | 0.48360 | 32.5 | 205 | 990 |
| hbs-deu | tatoeba-test-v2021-08-07 | 0.68769 | 50.7 | 1959 | 15559 |
| hbs-eng | tatoeba-test-v2021-08-07 | 0.68956 | 54.9 | 10017 | 68934 |
| hbs-fra | tatoeba-test-v2021-08-07 | 0.66551 | 47.0 | 474 | 3370 |
| hbs-spa | tatoeba-test-v2021-08-07 | 0.70241 | 53.4 | 607 | 3766 |
| hin-eng | tatoeba-test-v2021-08-07 | 0.64048 | 47.1 | 5000 | 33943 |
| hrv-deu | tatoeba-test-v2021-08-07 | 0.66676 | 48.9 | 782 | 5734 |
| hrv-eng | tatoeba-test-v2021-08-07 | 0.71884 | 56.8 | 1480 | 10620 |
| hrv-fra | tatoeba-test-v2021-08-07 | 0.62438 | 42.3 | 258 | 1943 |
| hrv-spa | tatoeba-test-v2021-08-07 | 0.68433 | 52.9 | 254 | 1702 |
| hsb-deu | tatoeba-test-v2021-08-07 | 0.61176 | 40.9 | 666 | 4818 |
| hye-eng | tatoeba-test-v2021-08-07 | 0.50806 | 29.0 | 1121 | 5066 |
| isl-deu | tatoeba-test-v2021-08-07 | 0.66238 | 47.4 | 969 | 6279 |
| isl-eng | tatoeba-test-v2021-08-07 | 0.64466 | 48.1 | 2503 | 19788 |
| isl-spa | tatoeba-test-v2021-08-07 | 0.61980 | 42.5 | 238 | 1229 |
| ita-deu | tatoeba-test-v2021-08-07 | 0.67198 | 47.8 | 10094 | 79762 |
| ita-eng | tatoeba-test-v2021-08-07 | 0.79538 | 68.3 | 17320 | 119214 |
| ita-fra | tatoeba-test-v2021-08-07 | 0.76540 | 62.7 | 10091 | 66377 |
| ita-por | tatoeba-test-v2021-08-07 | 0.73006 | 54.1 | 3066 | 25668 |
| ita-spa | tatoeba-test-v2021-08-07 | 0.76476 | 61.0 | 5000 | 34937 |
| kur_Latn-deu | tatoeba-test-v2021-08-07 | 0.38732 | 23.8 | 223 | 1323 |
| kur_Latn-eng | tatoeba-test-v2021-08-07 | 0.39058 | 22.8 | 290 | 1708 |
| lad-deu | tatoeba-test-v2021-08-07 | 0.40264 | 10.1 | 220 | 1175 |
| lad-eng | tatoeba-test-v2021-08-07 | 0.47244 | 26.5 | 768 | 4184 |
| lad-spa | tatoeba-test-v2021-08-07 | 0.51096 | 26.7 | 276 | 1448 |
| lad_Latn-eng | tatoeba-test-v2021-08-07 | 0.53303 | 37.2 | 672 | 3665 |
| lad_Latn-spa | tatoeba-test-v2021-08-07 | 0.59686 | 42.3 | 239 | 1239 |
| lat-deu | tatoeba-test-v2021-08-07 | 0.42426 | 25.2 | 2016 | 13326 |
| lat-eng | tatoeba-test-v2021-08-07 | 0.41822 | 23.5 | 10298 | 100152 |
| lat-spa | tatoeba-test-v2021-08-07 | 0.44259 | 23.3 | 3129 | 34036 |
| lav-eng | tatoeba-test-v2021-08-07 | 0.70077 | 55.0 | 1631 | 11213 |
| lit-deu | tatoeba-test-v2021-08-07 | 0.65720 | 46.5 | 1115 | 8531 |
| lit-eng | tatoeba-test-v2021-08-07 | 0.71630 | 57.3 | 2528 | 17855 |
| lit-spa | tatoeba-test-v2021-08-07 | 0.67909 | 50.9 | 454 | 2751 |
| ltz-deu | tatoeba-test-v2021-08-07 | 0.63420 | 47.0 | 347 | 2208 |
| ltz-eng | tatoeba-test-v2021-08-07 | 0.64228 | 53.6 | 293 | 1840 |
| mar-eng | tatoeba-test-v2021-08-07 | 0.64526 | 47.0 | 10396 | 67527 |
| mkd-eng | tatoeba-test-v2021-08-07 | 0.66313 | 52.4 | 10010 | 65667 |
| mkd-spa | tatoeba-test-v2021-08-07 | 0.71066 | 55.7 | 217 | 1121 |
| nds-deu | tatoeba-test-v2021-08-07 | 0.66221 | 47.6 | 9999 | 74564 |
| nds-eng | tatoeba-test-v2021-08-07 | 0.61480 | 44.4 | 2500 | 17589 |
| nds-fra | tatoeba-test-v2021-08-07 | 0.61459 | 45.9 | 857 | 5676 |
| nds-por | tatoeba-test-v2021-08-07 | 0.60646 | 41.8 | 207 | 1256 |
| nds-spa | tatoeba-test-v2021-08-07 | 0.63982 | 44.6 | 923 | 5540 |
| nld-deu | tatoeba-test-v2021-08-07 | 0.72111 | 54.8 | 10218 | 74131 |
| nld-eng | tatoeba-test-v2021-08-07 | 0.73199 | 59.3 | 12696 | 89978 |
| nld-fra | tatoeba-test-v2021-08-07 | 0.67269 | 46.7 | 11548 | 82974 |
| nld-por | tatoeba-test-v2021-08-07 | 0.68204 | 48.9 | 2500 | 17326 |
| nld-spa | tatoeba-test-v2021-08-07 | 0.69314 | 51.0 | 10113 | 74981 |
| nno-eng | tatoeba-test-v2021-08-07 | 0.69230 | 55.8 | 460 | 3524 |
| nob-deu | tatoeba-test-v2021-08-07 | 0.68483 | 48.8 | 3525 | 33592 |
| nob-eng | tatoeba-test-v2021-08-07 | 0.71685 | 57.4 | 4539 | 36823 |
| nob-fra | tatoeba-test-v2021-08-07 | 0.70312 | 52.6 | 323 | 2269 |
| nob-spa | tatoeba-test-v2021-08-07 | 0.73880 | 56.2 | 885 | 6866 |
| nor-deu | tatoeba-test-v2021-08-07 | 0.68518 | 48.9 | 3651 | 34575 |
| nor-eng | tatoeba-test-v2021-08-07 | 0.71465 | 57.3 | 5000 | 40355 |
| nor-fra | tatoeba-test-v2021-08-07 | 0.71415 | 55.2 | 477 | 3213 |
| nor-por | tatoeba-test-v2021-08-07 | 0.67705 | 45.8 | 481 | 4182 |
| nor-spa | tatoeba-test-v2021-08-07 | 0.73721 | 56.0 | 960 | 7311 |
| oci-eng | tatoeba-test-v2021-08-07 | 0.41564 | 22.9 | 841 | 5299 |
| oci-fra | tatoeba-test-v2021-08-07 | 0.47832 | 27.0 | 806 | 6302 |
| pes-eng | tatoeba-test-v2021-08-07 | 0.58486 | 39.7 | 3757 | 31411 |
| pms-eng | tatoeba-test-v2021-08-07 | 0.39772 | 20.2 | 269 | 2059 |
| pol-deu | tatoeba-test-v2021-08-07 | 0.66592 | 47.9 | 5000 | 37421 |
| pol-eng | tatoeba-test-v2021-08-07 | 0.67680 | 51.8 | 10099 | 75766 |
| pol-fra | tatoeba-test-v2021-08-07 | 0.65788 | 47.7 | 3087 | 24257 |
| pol-por | tatoeba-test-v2021-08-07 | 0.64124 | 43.1 | 705 | 5063 |
| pol-spa | tatoeba-test-v2021-08-07 | 0.65488 | 46.9 | 2544 | 18113 |
| por-deu | tatoeba-test-v2021-08-07 | 0.66941 | 46.8 | 10000 | 81246 |
| por-eng | tatoeba-test-v2021-08-07 | 0.75755 | 62.4 | 13222 | 105351 |
| por-fra | tatoeba-test-v2021-08-07 | 0.74773 | 58.6 | 10518 | 80459 |
| por-por | tatoeba-test-v2021-08-07 | 0.72256 | 51.8 | 2500 | 19220 |
| por-spa | tatoeba-test-v2021-08-07 | 0.78598 | 63.6 | 10947 | 87335 |
| ron-deu | tatoeba-test-v2021-08-07 | 0.67249 | 49.1 | 1141 | 7893 |
| ron-eng | tatoeba-test-v2021-08-07 | 0.71740 | 57.3 | 5508 | 40717 |
| ron-fra | tatoeba-test-v2021-08-07 | 0.69777 | 53.0 | 1925 | 13347 |
| ron-por | tatoeba-test-v2021-08-07 | 0.72413 | 53.5 | 681 | 4593 |
| ron-spa | tatoeba-test-v2021-08-07 | 0.72960 | 56.3 | 1959 | 12679 |
| rus-deu | tatoeba-test-v2021-08-07 | 0.67364 | 48.2 | 12800 | 98842 |
| rus-eng | tatoeba-test-v2021-08-07 | 0.68851 | 53.7 | 19425 | 147872 |
| rus-fra | tatoeba-test-v2021-08-07 | 0.66299 | 49.1 | 11490 | 80579 |
| rus-por | tatoeba-test-v2021-08-07 | 0.64106 | 43.4 | 10000 | 74713 |
| rus-spa | tatoeba-test-v2021-08-07 | 0.67610 | 49.1 | 10506 | 75246 |
| slv-deu | tatoeba-test-v2021-08-07 | 0.72746 | 55.2 | 492 | 3003 |
| slv-eng | tatoeba-test-v2021-08-07 | 0.70580 | 55.4 | 2495 | 16940 |
| slv-fra | tatoeba-test-v2021-08-07 | 0.61642 | 43.0 | 448 | 3792 |
| sqi-eng | tatoeba-test-v2021-08-07 | 0.71252 | 56.5 | 1109 | 8129 |
| srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.65934 | 52.0 | 1580 | 10181 |
| swe-deu | tatoeba-test-v2021-08-07 | 0.70356 | 53.5 | 3410 | 23494 |
| swe-eng | tatoeba-test-v2021-08-07 | 0.74751 | 62.7 | 10362 | 68513 |
| swe-fra | tatoeba-test-v2021-08-07 | 0.71714 | 56.7 | 1407 | 9580 |
| swe-por | tatoeba-test-v2021-08-07 | 0.68849 | 48.7 | 320 | 2032 |
| swe-spa | tatoeba-test-v2021-08-07 | 0.70160 | 53.3 | 1351 | 8235 |
| ukr-deu | tatoeba-test-v2021-08-07 | 0.68602 | 50.8 | 10319 | 64646 |
| ukr-eng | tatoeba-test-v2021-08-07 | 0.68162 | 52.4 | 13127 | 88607 |
| ukr-fra | tatoeba-test-v2021-08-07 | 0.66118 | 48.4 | 10035 | 63227 |
| ukr-por | tatoeba-test-v2021-08-07 | 0.65923 | 46.6 | 3372 | 21315 |
| ukr-spa | tatoeba-test-v2021-08-07 | 0.67601 | 49.7 | 10115 | 59284 |
| urd-eng | tatoeba-test-v2021-08-07 | 0.52376 | 33.0 | 1663 | 12029 |
| yid-eng | tatoeba-test-v2021-08-07 | 0.43640 | 19.1 | 2483 | 15452 |
| yid-fra | tatoeba-test-v2021-08-07 | 0.43410 | 20.2 | 384 | 2455 |
| afr-deu | flores101-devtest | 0.57090 | 27.9 | 1012 | 25094 |
| afr-eng | flores101-devtest | 0.73127 | 52.4 | 1012 | 24721 |
| afr-fra | flores101-devtest | 0.60726 | 34.8 | 1012 | 28343 |
| afr-por | flores101-devtest | 0.60399 | 34.4 | 1012 | 26519 |
| afr-spa | flores101-devtest | 0.50655 | 22.1 | 1012 | 29199 |
| ast-fra | flores101-devtest | 0.56575 | 30.8 | 1012 | 28343 |
| ast-por | flores101-devtest | 0.56438 | 30.4 | 1012 | 26519 |
| ast-spa | flores101-devtest | 0.49455 | 21.1 | 1012 | 29199 |
| bel-deu | flores101-devtest | 0.46177 | 11.8 | 1012 | 25094 |
| bel-eng | flores101-devtest | 0.49344 | 15.6 | 1012 | 24721 |
| bel-fra | flores101-devtest | 0.49372 | 16.5 | 1012 | 28343 |
| bel-spa | flores101-devtest | 0.44802 | 13.8 | 1012 | 29199 |
| ben-eng | flores101-devtest | 0.53648 | 23.9 | 1012 | 24721 |
| ben-por | flores101-devtest | 0.48236 | 19.9 | 1012 | 26519 |
| bul-por | flores101-devtest | 0.58471 | 30.8 | 1012 | 26519 |
| cat-deu | flores101-devtest | 0.56499 | 27.4 | 1012 | 25094 |
| cat-eng | flores101-devtest | 0.67443 | 42.3 | 1012 | 24721 |
| cat-spa | flores101-devtest | 0.53140 | 24.4 | 1012 | 29199 |
| ces-por | flores101-devtest | 0.57503 | 29.9 | 1012 | 26519 |
| ces-spa | flores101-devtest | 0.49860 | 21.1 | 1012 | 29199 |
| ckb-eng | flores101-devtest | 0.41310 | 15.8 | 1012 | 24721 |
| cym-fra | flores101-devtest | 0.54610 | 28.6 | 1012 | 28343 |
| dan-por | flores101-devtest | 0.60877 | 34.7 | 1012 | 26519 |
| deu-eng | flores101-devtest | 0.65706 | 39.8 | 1012 | 24721 |
| fas-fra | flores101-devtest | 0.54336 | 26.8 | 1012 | 28343 |
| fra-eng | flores101-devtest | 0.66301 | 41.0 | 1012 | 24721 |
| fra-por | flores101-devtest | 0.61592 | 35.7 | 1012 | 26519 |
| gle-deu | flores101-devtest | 0.47354 | 17.0 | 1012 | 25094 |
| gle-por | flores101-devtest | 0.50115 | 21.7 | 1012 | 26519 |
| guj-deu | flores101-devtest | 0.42069 | 13.5 | 1012 | 25094 |
| hin-deu | flores101-devtest | 0.49480 | 19.6 | 1012 | 25094 |
| hin-eng | flores101-devtest | 0.59392 | 32.6 | 1012 | 24721 |
| hrv-por | flores101-devtest | 0.57004 | 29.5 | 1012 | 26519 |
| hye-deu | flores101-devtest | 0.47323 | 17.5 | 1012 | 25094 |
| hye-eng | flores101-devtest | 0.54450 | 26.3 | 1012 | 24721 |
| isl-eng | flores101-devtest | 0.53875 | 28.2 | 1012 | 24721 |
| ita-deu | flores101-devtest | 0.54033 | 22.0 | 1012 | 25094 |
| ita-fra | flores101-devtest | 0.59488 | 30.6 | 1012 | 28343 |
| ita-spa | flores101-devtest | 0.51946 | 22.9 | 1012 | 29199 |
| kea-spa | flores101-devtest | 0.46784 | 18.3 | 1012 | 29199 |
| lav-por | flores101-devtest | 0.54017 | 24.6 | 1012 | 26519 |
| lav-spa | flores101-devtest | 0.48185 | 19.3 | 1012 | 29199 |
| lit-deu | flores101-devtest | 0.51261 | 21.4 | 1012 | 25094 |
| lit-por | flores101-devtest | 0.53223 | 25.3 | 1012 | 26519 |
| ltz-deu | flores101-devtest | 0.58286 | 29.2 | 1012 | 25094 |
| ltz-por | flores101-devtest | 0.53241 | 27.0 | 1012 | 26519 |
| mar-deu | flores101-devtest | 0.44237 | 14.1 | 1012 | 25094 |
| mar-eng | flores101-devtest | 0.52755 | 23.8 | 1012 | 24721 |
| mar-por | flores101-devtest | 0.45667 | 18.1 | 1012 | 26519 |
| mkd-fra | flores101-devtest | 0.59219 | 32.8 | 1012 | 28343 |
| nld-deu | flores101-devtest | 0.52899 | 21.5 | 1012 | 25094 |
| nld-eng | flores101-devtest | 0.58230 | 29.8 | 1012 | 24721 |
| nob-spa | flores101-devtest | 0.50054 | 21.2 | 1012 | 29199 |
| npi-eng | flores101-devtest | 0.53179 | 24.8 | 1012 | 24721 |
| npi-spa | flores101-devtest | 0.41165 | 13.6 | 1012 | 29199 |
| pan-deu | flores101-devtest | 0.42831 | 13.6 | 1012 | 25094 |
| pan-eng | flores101-devtest | 0.51203 | 22.2 | 1012 | 24721 |
| pan-fra | flores101-devtest | 0.46357 | 19.2 | 1012 | 28343 |
| pan-por | flores101-devtest | 0.44885 | 17.4 | 1012 | 26519 |
| pol-deu | flores101-devtest | 0.50973 | 20.1 | 1012 | 25094 |
| pol-eng | flores101-devtest | 0.55772 | 25.9 | 1012 | 24721 |
| pol-fra | flores101-devtest | 0.54590 | 26.2 | 1012 | 28343 |
| pol-spa | flores101-devtest | 0.47816 | 18.9 | 1012 | 29199 |
| por-eng | flores101-devtest | 0.69438 | 45.5 | 1012 | 24721 |
| por-fra | flores101-devtest | 0.63701 | 38.9 | 1012 | 28343 |
| por-spa | flores101-devtest | 0.53216 | 25.0 | 1012 | 29199 |
| ron-fra | flores101-devtest | 0.62744 | 36.2 | 1012 | 28343 |
| rus-deu | flores101-devtest | 0.53823 | 23.1 | 1012 | 25094 |
| rus-eng | flores101-devtest | 0.59829 | 31.7 | 1012 | 24721 |
| rus-fra | flores101-devtest | 0.57384 | 29.8 | 1012 | 28343 |
| rus-por | flores101-devtest | 0.56082 | 28.0 | 1012 | 26519 |
| slk-eng | flores101-devtest | 0.62376 | 34.4 | 1012 | 24721 |
| slk-por | flores101-devtest | 0.54486 | 26.6 | 1012 | 26519 |
| slk-spa | flores101-devtest | 0.48253 | 20.0 | 1012 | 29199 |
| slv-deu | flores101-devtest | 0.54130 | 23.8 | 1012 | 25094 |
| slv-fra | flores101-devtest | 0.56838 | 29.2 | 1012 | 28343 |
| slv-por | flores101-devtest | 0.55554 | 28.1 | 1012 | 26519 |
| spa-deu | flores101-devtest | 0.51807 | 19.5 | 1012 | 25094 |
| swe-spa | flores101-devtest | 0.51211 | 22.8 | 1012 | 29199 |
| tgk-fra | flores101-devtest | 0.47290 | 19.6 | 1012 | 28343 |
| tgk-spa | flores101-devtest | 0.41393 | 14.3 | 1012 | 29199 |
| ukr-eng | flores101-devtest | 0.61588 | 34.3 | 1012 | 24721 |
| ukr-fra | flores101-devtest | 0.58296 | 31.3 | 1012 | 28343 |
| ukr-spa | flores101-devtest | 0.49535 | 21.1 | 1012 | 29199 |
| urd-deu | flores101-devtest | 0.44211 | 15.2 | 1012 | 25094 |
| afr-deu | flores200-devtest | 0.57712 | 28.7 | 1012 | 25094 |
| afr-eng | flores200-devtest | 0.73690 | 53.4 | 1012 | 24721 |
| afr-fra | flores200-devtest | 0.61332 | 35.7 | 1012 | 28343 |
| afr-por | flores200-devtest | 0.60899 | 35.1 | 1012 | 26519 |
| afr-spa | flores200-devtest | 0.50836 | 22.1 | 1012 | 29199 |
| asm-eng | flores200-devtest | 0.42432 | 13.4 | 1012 | 24721 |
| ast-deu | flores200-devtest | 0.52402 | 23.3 | 1012 | 25094 |
| ast-eng | flores200-devtest | 0.60640 | 35.1 | 1012 | 24721 |
| ast-fra | flores200-devtest | 0.57060 | 31.5 | 1012 | 28343 |
| ast-por | flores200-devtest | 0.56982 | 30.8 | 1012 | 26519 |
| ast-spa | flores200-devtest | 0.49452 | 21.1 | 1012 | 29199 |
| awa-deu | flores200-devtest | 0.47101 | 16.3 | 1012 | 25094 |
| awa-eng | flores200-devtest | 0.55042 | 25.7 | 1012 | 24721 |
| awa-fra | flores200-devtest | 0.50230 | 22.1 | 1012 | 28343 |
| awa-por | flores200-devtest | 0.49701 | 21.1 | 1012 | 26519 |
| awa-spa | flores200-devtest | 0.43913 | 15.7 | 1012 | 29199 |
| bel-deu | flores200-devtest | 0.46906 | 12.7 | 1012 | 25094 |
| bel-eng | flores200-devtest | 0.49995 | 16.5 | 1012 | 24721 |
| bel-fra | flores200-devtest | 0.49987 | 17.1 | 1012 | 28343 |
| bel-por | flores200-devtest | 0.48319 | 15.7 | 1012 | 26519 |
| bel-spa | flores200-devtest | 0.45393 | 14.4 | 1012 | 29199 |
| ben-deu | flores200-devtest | 0.46413 | 16.3 | 1012 | 25094 |
| ben-eng | flores200-devtest | 0.54681 | 24.5 | 1012 | 24721 |
| ben-fra | flores200-devtest | 0.49843 | 21.9 | 1012 | 28343 |
| ben-por | flores200-devtest | 0.49129 | 21.0 | 1012 | 26519 |
| ben-spa | flores200-devtest | 0.43310 | 14.9 | 1012 | 29199 |
| bho-deu | flores200-devtest | 0.41875 | 12.4 | 1012 | 25094 |
| bho-eng | flores200-devtest | 0.48319 | 18.5 | 1012 | 24721 |
| bho-fra | flores200-devtest | 0.44504 | 16.1 | 1012 | 28343 |
| bho-por | flores200-devtest | 0.43627 | 15.5 | 1012 | 26519 |
| bho-spa | flores200-devtest | 0.40189 | 12.6 | 1012 | 29199 |
| bul-deu | flores200-devtest | 0.56591 | 26.8 | 1012 | 25094 |
| bul-eng | flores200-devtest | 0.64922 | 37.8 | 1012 | 24721 |
| bul-fra | flores200-devtest | 0.60386 | 33.3 | 1012 | 28343 |
| bul-por | flores200-devtest | 0.59070 | 31.6 | 1012 | 26519 |
| bul-spa | flores200-devtest | 0.50968 | 22.2 | 1012 | 29199 |
| cat-deu | flores200-devtest | 0.57030 | 27.9 | 1012 | 25094 |
| cat-eng | flores200-devtest | 0.67842 | 43.0 | 1012 | 24721 |
| cat-fra | flores200-devtest | 0.63034 | 38.1 | 1012 | 28343 |
| cat-por | flores200-devtest | 0.62567 | 37.3 | 1012 | 26519 |
| cat-spa | flores200-devtest | 0.53260 | 24.5 | 1012 | 29199 |
| ces-deu | flores200-devtest | 0.56613 | 27.1 | 1012 | 25094 |
| ces-eng | flores200-devtest | 0.63574 | 36.5 | 1012 | 24721 |
| ces-fra | flores200-devtest | 0.59573 | 32.8 | 1012 | 28343 |
| ces-por | flores200-devtest | 0.58096 | 30.9 | 1012 | 26519 |
| ces-spa | flores200-devtest | 0.50295 | 21.6 | 1012 | 29199 |
| ckb-eng | flores200-devtest | 0.43075 | 16.7 | 1012 | 24721 |
| ckb-fra | flores200-devtest | 0.41038 | 15.7 | 1012 | 28343 |
| cym-deu | flores200-devtest | 0.51003 | 22.0 | 1012 | 25094 |
| cym-eng | flores200-devtest | 0.67808 | 45.7 | 1012 | 24721 |
| cym-fra | flores200-devtest | 0.55779 | 29.9 | 1012 | 28343 |
| cym-por | flores200-devtest | 0.53930 | 27.9 | 1012 | 26519 |
| cym-spa | flores200-devtest | 0.47129 | 19.6 | 1012 | 29199 |
| dan-deu | flores200-devtest | 0.59897 | 30.7 | 1012 | 25094 |
| dan-eng | flores200-devtest | 0.70142 | 46.2 | 1012 | 24721 |
| dan-fra | flores200-devtest | 0.62669 | 37.1 | 1012 | 28343 |
| dan-por | flores200-devtest | 0.61338 | 35.3 | 1012 | 26519 |
| dan-spa | flores200-devtest | 0.52360 | 24.2 | 1012 | 29199 |
| deu-eng | flores200-devtest | 0.66096 | 40.3 | 1012 | 24721 |
| deu-fra | flores200-devtest | 0.61562 | 35.4 | 1012 | 28343 |
| deu-por | flores200-devtest | 0.59775 | 33.3 | 1012 | 26519 |
| deu-spa | flores200-devtest | 0.51787 | 23.3 | 1012 | 29199 |
| ell-deu | flores200-devtest | 0.52003 | 22.0 | 1012 | 25094 |
| ell-eng | flores200-devtest | 0.59074 | 31.6 | 1012 | 24721 |
| ell-fra | flores200-devtest | 0.56636 | 29.9 | 1012 | 28343 |
| ell-por | flores200-devtest | 0.54903 | 27.2 | 1012 | 26519 |
| ell-spa | flores200-devtest | 0.48701 | 20.4 | 1012 | 29199 |
| eng-deu | flores200-devtest | 0.63747 | 36.8 | 1012 | 25094 |
| eng-fra | flores200-devtest | 0.69505 | 47.2 | 1012 | 28343 |
| eng-por | flores200-devtest | 0.69743 | 47.3 | 1012 | 26519 |
| eng-spa | flores200-devtest | 0.54954 | 26.6 | 1012 | 29199 |
| fao-deu | flores200-devtest | 0.42943 | 16.3 | 1012 | 25094 |
| fao-eng | flores200-devtest | 0.46227 | 22.9 | 1012 | 24721 |
| fao-fra | flores200-devtest | 0.41404 | 18.3 | 1012 | 28343 |
| fao-por | flores200-devtest | 0.41850 | 17.6 | 1012 | 26519 |
| fra-deu | flores200-devtest | 0.57718 | 28.2 | 1012 | 25094 |
| fra-eng | flores200-devtest | 0.66534 | 41.4 | 1012 | 24721 |
| fra-por | flores200-devtest | 0.61987 | 36.2 | 1012 | 26519 |
| fra-spa | flores200-devtest | 0.52646 | 24.1 | 1012 | 29199 |
| fur-deu | flores200-devtest | 0.50429 | 20.5 | 1012 | 25094 |
| fur-eng | flores200-devtest | 0.58954 | 32.0 | 1012 | 24721 |
| fur-fra | flores200-devtest | 0.55699 | 28.6 | 1012 | 28343 |
| fur-por | flores200-devtest | 0.54977 | 27.9 | 1012 | 26519 |
| fur-spa | flores200-devtest | 0.47550 | 19.0 | 1012 | 29199 |
| gla-eng | flores200-devtest | 0.43561 | 16.2 | 1012 | 24721 |
| gla-fra | flores200-devtest | 0.41770 | 15.3 | 1012 | 28343 |
| gla-por | flores200-devtest | 0.40473 | 14.7 | 1012 | 26519 |
| gle-deu | flores200-devtest | 0.48622 | 18.1 | 1012 | 25094 |
| gle-eng | flores200-devtest | 0.58337 | 30.7 | 1012 | 24721 |
| gle-fra | flores200-devtest | 0.52798 | 24.6 | 1012 | 28343 |
| gle-por | flores200-devtest | 0.51712 | 23.6 | 1012 | 26519 |
| gle-spa | flores200-devtest | 0.45954 | 18.1 | 1012 | 29199 |
| glg-deu | flores200-devtest | 0.56174 | 25.8 | 1012 | 25094 |
| glg-eng | flores200-devtest | 0.65391 | 38.4 | 1012 | 24721 |
| glg-fra | flores200-devtest | 0.61762 | 35.7 | 1012 | 28343 |
| glg-por | flores200-devtest | 0.60170 | 32.9 | 1012 | 26519 |
| glg-spa | flores200-devtest | 0.53214 | 24.3 | 1012 | 29199 |
| guj-deu | flores200-devtest | 0.43101 | 14.2 | 1012 | 25094 |
| guj-eng | flores200-devtest | 0.55857 | 26.4 | 1012 | 24721 |
| guj-fra | flores200-devtest | 0.47047 | 19.8 | 1012 | 28343 |
| guj-por | flores200-devtest | 0.45641 | 18.5 | 1012 | 26519 |
| guj-spa | flores200-devtest | 0.42457 | 14.5 | 1012 | 29199 |
| hat-deu | flores200-devtest | 0.49247 | 19.2 | 1012 | 25094 |
| hat-eng | flores200-devtest | 0.58655 | 31.7 | 1012 | 24721 |
| hat-fra | flores200-devtest | 0.60736 | 34.2 | 1012 | 28343 |
| hat-por | flores200-devtest | 0.54733 | 27.3 | 1012 | 26519 |
| hat-spa | flores200-devtest | 0.46963 | 17.9 | 1012 | 29199 |
| hin-deu | flores200-devtest | 0.50305 | 20.3 | 1012 | 25094 |
| hin-eng | flores200-devtest | 0.60811 | 34.0 | 1012 | 24721 |
| hin-fra | flores200-devtest | 0.53919 | 25.9 | 1012 | 28343 |
| hin-por | flores200-devtest | 0.53151 | 25.6 | 1012 | 26519 |
| hin-spa | flores200-devtest | 0.46051 | 17.4 | 1012 | 29199 |
| hne-deu | flores200-devtest | 0.48386 | 18.4 | 1012 | 25094 |
| hne-eng | flores200-devtest | 0.59671 | 32.3 | 1012 | 24721 |
| hne-fra | flores200-devtest | 0.52013 | 24.5 | 1012 | 28343 |
| hne-por | flores200-devtest | 0.51345 | 23.8 | 1012 | 26519 |
| hne-spa | flores200-devtest | 0.44481 | 16.3 | 1012 | 29199 |
| hrv-deu | flores200-devtest | 0.55524 | 26.0 | 1012 | 25094 |
| hrv-eng | flores200-devtest | 0.61977 | 34.9 | 1012 | 24721 |
| hrv-fra | flores200-devtest | 0.59318 | 32.7 | 1012 | 28343 |
| hrv-por | flores200-devtest | 0.57603 | 30.2 | 1012 | 26519 |
| hrv-spa | flores200-devtest | 0.50242 | 21.5 | 1012 | 29199 |
| hye-deu | flores200-devtest | 0.48676 | 19.2 | 1012 | 25094 |
| hye-eng | flores200-devtest | 0.55729 | 27.0 | 1012 | 24721 |
| hye-fra | flores200-devtest | 0.52152 | 25.2 | 1012 | 28343 |
| hye-por | flores200-devtest | 0.51026 | 23.3 | 1012 | 26519 |
| hye-spa | flores200-devtest | 0.45459 | 17.8 | 1012 | 29199 |
| isl-deu | flores200-devtest | 0.48677 | 20.5 | 1012 | 25094 |
| isl-eng | flores200-devtest | 0.54804 | 29.1 | 1012 | 24721 |
| isl-fra | flores200-devtest | 0.51362 | 25.0 | 1012 | 28343 |
| isl-por | flores200-devtest | 0.50201 | 23.8 | 1012 | 26519 |
| isl-spa | flores200-devtest | 0.44801 | 17.5 | 1012 | 29199 |
| ita-deu | flores200-devtest | 0.54589 | 22.9 | 1012 | 25094 |
| ita-eng | flores200-devtest | 0.60660 | 30.9 | 1012 | 24721 |
| ita-fra | flores200-devtest | 0.59811 | 31.0 | 1012 | 28343 |
| ita-por | flores200-devtest | 0.57808 | 28.4 | 1012 | 26519 |
| ita-spa | flores200-devtest | 0.52244 | 23.3 | 1012 | 29199 |
| kea-deu | flores200-devtest | 0.48107 | 19.2 | 1012 | 25094 |
| kea-eng | flores200-devtest | 0.59570 | 34.5 | 1012 | 24721 |
| kea-fra | flores200-devtest | 0.53683 | 26.8 | 1012 | 28343 |
| kea-por | flores200-devtest | 0.57642 | 30.3 | 1012 | 26519 |
| kea-spa | flores200-devtest | 0.47048 | 18.6 | 1012 | 29199 |
| lij-deu | flores200-devtest | 0.49270 | 19.4 | 1012 | 25094 |
| lij-eng | flores200-devtest | 0.58369 | 30.8 | 1012 | 24721 |
| lij-fra | flores200-devtest | 0.55002 | 28.6 | 1012 | 28343 |
| lij-por | flores200-devtest | 0.54155 | 26.7 | 1012 | 26519 |
| lij-spa | flores200-devtest | 0.46656 | 18.7 | 1012 | 29199 |
| lim-deu | flores200-devtest | 0.44183 | 15.0 | 1012 | 25094 |
| lim-eng | flores200-devtest | 0.46674 | 20.3 | 1012 | 24721 |
| lim-fra | flores200-devtest | 0.43685 | 17.8 | 1012 | 28343 |
| lim-por | flores200-devtest | 0.42699 | 16.3 | 1012 | 26519 |
| lit-deu | flores200-devtest | 0.51669 | 21.9 | 1012 | 25094 |
| lit-eng | flores200-devtest | 0.57849 | 30.5 | 1012 | 24721 |
| lit-fra | flores200-devtest | 0.55896 | 29.0 | 1012 | 28343 |
| lit-por | flores200-devtest | 0.53960 | 26.3 | 1012 | 26519 |
| lit-spa | flores200-devtest | 0.48120 | 19.7 | 1012 | 29199 |
| lmo-deu | flores200-devtest | 0.44732 | 14.2 | 1012 | 25094 |
| lmo-eng | flores200-devtest | 0.51710 | 23.3 | 1012 | 24721 |
| lmo-fra | flores200-devtest | 0.49129 | 21.5 | 1012 | 28343 |
| lmo-por | flores200-devtest | 0.49153 | 21.4 | 1012 | 26519 |
| lmo-spa | flores200-devtest | 0.43363 | 15.4 | 1012 | 29199 |
| ltz-deu | flores200-devtest | 0.58897 | 29.8 | 1012 | 25094 |
| ltz-eng | flores200-devtest | 0.62250 | 36.2 | 1012 | 24721 |
| ltz-fra | flores200-devtest | 0.57460 | 31.6 | 1012 | 28343 |
| ltz-por | flores200-devtest | 0.53674 | 27.1 | 1012 | 26519 |
| ltz-spa | flores200-devtest | 0.46048 | 18.8 | 1012 | 29199 |
| mag-deu | flores200-devtest | 0.49176 | 18.9 | 1012 | 25094 |
| mag-eng | flores200-devtest | 0.59691 | 32.2 | 1012 | 24721 |
| mag-fra | flores200-devtest | 0.52068 | 24.1 | 1012 | 28343 |
| mag-por | flores200-devtest | 0.52006 | 23.8 | 1012 | 26519 |
| mag-spa | flores200-devtest | 0.44945 | 16.5 | 1012 | 29199 |
| mai-deu | flores200-devtest | 0.46893 | 16.5 | 1012 | 25094 |
| mai-eng | flores200-devtest | 0.56282 | 27.7 | 1012 | 24721 |
| mai-fra | flores200-devtest | 0.50286 | 22.2 | 1012 | 28343 |
| mai-por | flores200-devtest | 0.49523 | 21.6 | 1012 | 26519 |
| mai-spa | flores200-devtest | 0.44271 | 15.9 | 1012 | 29199 |
| mar-deu | flores200-devtest | 0.44712 | 14.8 | 1012 | 25094 |
| mar-eng | flores200-devtest | 0.54222 | 25.4 | 1012 | 24721 |
| mar-fra | flores200-devtest | 0.47383 | 19.6 | 1012 | 28343 |
| mar-por | flores200-devtest | 0.46593 | 18.7 | 1012 | 26519 |
| mar-spa | flores200-devtest | 0.41912 | 14.0 | 1012 | 29199 |
| mkd-deu | flores200-devtest | 0.56267 | 26.8 | 1012 | 25094 |
| mkd-eng | flores200-devtest | 0.64902 | 38.8 | 1012 | 24721 |
| mkd-fra | flores200-devtest | 0.60051 | 33.9 | 1012 | 28343 |
| mkd-por | flores200-devtest | 0.59197 | 32.9 | 1012 | 26519 |
| mkd-spa | flores200-devtest | 0.50972 | 22.8 | 1012 | 29199 |
| nld-deu | flores200-devtest | 0.53072 | 21.8 | 1012 | 25094 |
| nld-eng | flores200-devtest | 0.58671 | 30.5 | 1012 | 24721 |
| nld-fra | flores200-devtest | 0.55677 | 27.5 | 1012 | 28343 |
| nld-por | flores200-devtest | 0.53989 | 25.6 | 1012 | 26519 |
| nld-spa | flores200-devtest | 0.48443 | 19.5 | 1012 | 29199 |
| nno-deu | flores200-devtest | 0.56707 | 27.3 | 1012 | 25094 |
| nno-eng | flores200-devtest | 0.67683 | 43.2 | 1012 | 24721 |
| nno-fra | flores200-devtest | 0.59829 | 34.3 | 1012 | 28343 |
| nno-por | flores200-devtest | 0.58723 | 32.5 | 1012 | 26519 |
| nno-spa | flores200-devtest | 0.50217 | 22.0 | 1012 | 29199 |
| nob-deu | flores200-devtest | 0.56197 | 26.5 | 1012 | 25094 |
| nob-eng | flores200-devtest | 0.66428 | 41.7 | 1012 | 24721 |
| nob-fra | flores200-devtest | 0.59531 | 33.1 | 1012 | 28343 |
| nob-por | flores200-devtest | 0.58521 | 31.7 | 1012 | 26519 |
| nob-spa | flores200-devtest | 0.50418 | 21.4 | 1012 | 29199 |
| npi-deu | flores200-devtest | 0.44364 | 14.6 | 1012 | 25094 |
| npi-eng | flores200-devtest | 0.54309 | 26.1 | 1012 | 24721 |
| npi-fra | flores200-devtest | 0.47458 | 19.7 | 1012 | 28343 |
| npi-por | flores200-devtest | 0.46702 | 18.9 | 1012 | 26519 |
| npi-spa | flores200-devtest | 0.41720 | 13.9 | 1012 | 29199 |
| oci-deu | flores200-devtest | 0.56668 | 26.9 | 1012 | 25094 |
| oci-eng | flores200-devtest | 0.70282 | 46.8 | 1012 | 24721 |
| oci-fra | flores200-devtest | 0.64408 | 39.1 | 1012 | 28343 |
| oci-por | flores200-devtest | 0.62256 | 35.7 | 1012 | 26519 |
| oci-spa | flores200-devtest | 0.51705 | 22.3 | 1012 | 29199 |
| pan-deu | flores200-devtest | 0.44428 | 15.1 | 1012 | 25094 |
| pan-eng | flores200-devtest | 0.52652 | 23.0 | 1012 | 24721 |
| pan-fra | flores200-devtest | 0.47743 | 19.9 | 1012 | 28343 |
| pan-por | flores200-devtest | 0.46585 | 18.8 | 1012 | 26519 |
| pan-spa | flores200-devtest | 0.41798 | 14.5 | 1012 | 29199 |
| pap-deu | flores200-devtest | 0.53397 | 23.5 | 1012 | 25094 |
| pap-eng | flores200-devtest | 0.67741 | 43.1 | 1012 | 24721 |
| pap-fra | flores200-devtest | 0.57787 | 31.1 | 1012 | 28343 |
| pap-por | flores200-devtest | 0.59003 | 32.9 | 1012 | 26519 |
| pap-spa | flores200-devtest | 0.49768 | 21.8 | 1012 | 29199 |
| pes-deu | flores200-devtest | 0.50787 | 20.9 | 1012 | 25094 |
| pes-eng | flores200-devtest | 0.58693 | 31.1 | 1012 | 24721 |
| pes-fra | flores200-devtest | 0.55060 | 27.9 | 1012 | 28343 |
| pes-por | flores200-devtest | 0.54139 | 26.6 | 1012 | 26519 |
| pes-spa | flores200-devtest | 0.47230 | 18.6 | 1012 | 29199 |
| pol-deu | flores200-devtest | 0.51514 | 20.8 | 1012 | 25094 |
| pol-eng | flores200-devtest | 0.56021 | 26.2 | 1012 | 24721 |
| pol-fra | flores200-devtest | 0.55176 | 27.0 | 1012 | 28343 |
| pol-por | flores200-devtest | 0.52998 | 24.3 | 1012 | 26519 |
| pol-spa | flores200-devtest | 0.48344 | 19.4 | 1012 | 29199 |
| por-deu | flores200-devtest | 0.58002 | 29.3 | 1012 | 25094 |
| por-eng | flores200-devtest | 0.69694 | 46.0 | 1012 | 24721 |
| por-fra | flores200-devtest | 0.64146 | 39.6 | 1012 | 28343 |
| por-spa | flores200-devtest | 0.53508 | 25.3 | 1012 | 29199 |
| prs-deu | flores200-devtest | 0.49849 | 20.4 | 1012 | 25094 |
| prs-eng | flores200-devtest | 0.58120 | 32.0 | 1012 | 24721 |
| prs-fra | flores200-devtest | 0.53939 | 27.0 | 1012 | 28343 |
| prs-por | flores200-devtest | 0.53479 | 26.7 | 1012 | 26519 |
| prs-spa | flores200-devtest | 0.46241 | 18.3 | 1012 | 29199 |
| ron-deu | flores200-devtest | 0.57214 | 27.4 | 1012 | 25094 |
| ron-eng | flores200-devtest | 0.66701 | 40.4 | 1012 | 24721 |
| ron-fra | flores200-devtest | 0.63234 | 37.2 | 1012 | 28343 |
| ron-por | flores200-devtest | 0.61838 | 35.4 | 1012 | 26519 |
| ron-spa | flores200-devtest | 0.52856 | 24.3 | 1012 | 29199 |
| rus-deu | flores200-devtest | 0.54446 | 23.9 | 1012 | 25094 |
| rus-eng | flores200-devtest | 0.60131 | 32.0 | 1012 | 24721 |
| rus-fra | flores200-devtest | 0.57986 | 30.4 | 1012 | 28343 |
| rus-por | flores200-devtest | 0.56600 | 28.7 | 1012 | 26519 |
| rus-spa | flores200-devtest | 0.49871 | 21.2 | 1012 | 29199 |
| scn-deu | flores200-devtest | 0.46523 | 17.0 | 1012 | 25094 |
| scn-eng | flores200-devtest | 0.53341 | 26.1 | 1012 | 24721 |
| scn-fra | flores200-devtest | 0.51481 | 25.0 | 1012 | 28343 |
| scn-por | flores200-devtest | 0.50343 | 23.8 | 1012 | 26519 |
| scn-spa | flores200-devtest | 0.44756 | 17.1 | 1012 | 29199 |
| slk-deu | flores200-devtest | 0.53932 | 23.6 | 1012 | 25094 |
| slk-eng | flores200-devtest | 0.63137 | 35.4 | 1012 | 24721 |
| slk-fra | flores200-devtest | 0.56587 | 29.9 | 1012 | 28343 |
| slk-por | flores200-devtest | 0.54523 | 27.3 | 1012 | 26519 |
| slk-spa | flores200-devtest | 0.48275 | 20.1 | 1012 | 29199 |
| slv-deu | flores200-devtest | 0.54583 | 24.5 | 1012 | 25094 |
| slv-eng | flores200-devtest | 0.59952 | 32.4 | 1012 | 24721 |
| slv-fra | flores200-devtest | 0.57418 | 30.3 | 1012 | 28343 |
| slv-por | flores200-devtest | 0.55838 | 28.4 | 1012 | 26519 |
| slv-spa | flores200-devtest | 0.49438 | 20.7 | 1012 | 29199 |
| spa-deu | flores200-devtest | 0.52303 | 20.0 | 1012 | 25094 |
| spa-eng | flores200-devtest | 0.57648 | 26.7 | 1012 | 24721 |
| srd-deu | flores200-devtest | 0.47651 | 18.6 | 1012 | 25094 |
| srd-eng | flores200-devtest | 0.56624 | 30.5 | 1012 | 24721 |
| srd-fra | flores200-devtest | 0.52746 | 26.8 | 1012 | 28343 |
| srd-por | flores200-devtest | 0.52301 | 26.4 | 1012 | 26519 |
| srd-spa | flores200-devtest | 0.45213 | 17.7 | 1012 | 29199 |
| srp_Cyrl-deu | flores200-devtest | 0.57563 | 27.7 | 1012 | 25094 |
| srp_Cyrl-eng | flores200-devtest | 0.66201 | 39.9 | 1012 | 24721 |
| srp_Cyrl-fra | flores200-devtest | 0.61570 | 35.0 | 1012 | 28343 |
| srp_Cyrl-por | flores200-devtest | 0.60561 | 33.6 | 1012 | 26519 |
| srp_Cyrl-spa | flores200-devtest | 0.51500 | 22.4 | 1012 | 29199 |
| swe-deu | flores200-devtest | 0.59607 | 31.6 | 1012 | 25094 |
| swe-eng | flores200-devtest | 0.69032 | 46.0 | 1012 | 24721 |
| swe-fra | flores200-devtest | 0.62610 | 37.8 | 1012 | 28343 |
| swe-por | flores200-devtest | 0.60692 | 35.0 | 1012 | 26519 |
| swe-spa | flores200-devtest | 0.51448 | 23.0 | 1012 | 29199 |
| szl-deu | flores200-devtest | 0.51005 | 22.0 | 1012 | 25094 |
| szl-eng | flores200-devtest | 0.57536 | 30.6 | 1012 | 24721 |
| szl-fra | flores200-devtest | 0.54029 | 28.2 | 1012 | 28343 |
| szl-por | flores200-devtest | 0.52911 | 26.5 | 1012 | 26519 |
| szl-spa | flores200-devtest | 0.46280 | 18.8 | 1012 | 29199 |
| tgk-deu | flores200-devtest | 0.45372 | 15.8 | 1012 | 25094 |
| tgk-eng | flores200-devtest | 0.51096 | 22.1 | 1012 | 24721 |
| tgk-fra | flores200-devtest | 0.48620 | 21.1 | 1012 | 28343 |
| tgk-por | flores200-devtest | 0.46870 | 19.4 | 1012 | 26519 |
| tgk-spa | flores200-devtest | 0.42689 | 15.1 | 1012 | 29199 |
| tpi-deu | flores200-devtest | 0.41078 | 11.1 | 1012 | 25094 |
| tpi-eng | flores200-devtest | 0.48619 | 20.1 | 1012 | 24721 |
| tpi-fra | flores200-devtest | 0.43850 | 16.3 | 1012 | 28343 |
| tpi-por | flores200-devtest | 0.43040 | 15.8 | 1012 | 26519 |
| ukr-deu | flores200-devtest | 0.55290 | 25.1 | 1012 | 25094 |
| ukr-eng | flores200-devtest | 0.62150 | 34.9 | 1012 | 24721 |
| ukr-fra | flores200-devtest | 0.59093 | 32.5 | 1012 | 28343 |
| ukr-por | flores200-devtest | 0.57706 | 30.7 | 1012 | 26519 |
| ukr-spa | flores200-devtest | 0.50128 | 21.8 | 1012 | 29199 |
| urd-deu | flores200-devtest | 0.45107 | 15.6 | 1012 | 25094 |
| urd-eng | flores200-devtest | 0.53130 | 25.0 | 1012 | 24721 |
| urd-fra | flores200-devtest | 0.48377 | 20.7 | 1012 | 28343 |
| urd-por | flores200-devtest | 0.45290 | 18.5 | 1012 | 26519 |
| urd-spa | flores200-devtest | 0.41342 | 13.8 | 1012 | 29199 |
| vec-deu | flores200-devtest | 0.48212 | 18.5 | 1012 | 25094 |
| vec-eng | flores200-devtest | 0.56243 | 29.3 | 1012 | 24721 |
| vec-fra | flores200-devtest | 0.53340 | 26.4 | 1012 | 28343 |
| vec-por | flores200-devtest | 0.52845 | 25.7 | 1012 | 26519 |
| vec-spa | flores200-devtest | 0.46136 | 17.9 | 1012 | 29199 |
| ces-eng | generaltest2022 | 0.64599 | 40.2 | 1448 | 30675 |
| deu-eng | generaltest2022 | 0.54993 | 29.8 | 1984 | 37634 |
| deu-fra | generaltest2022 | 0.59361 | 35.6 | 1984 | 38276 |
| eng-deu | generaltest2022 | 0.59885 | 31.9 | 2037 | 38914 |
| fra-deu | generaltest2022 | 0.64266 | 40.1 | 2006 | 37696 |
| rus-eng | generaltest2022 | 0.63746 | 37.8 | 2016 | 38529 |
| ukr-eng | generaltest2022 | 0.60704 | 35.9 | 2018 | 34242 |
| ces-deu | multi30k_test_2016_flickr | 0.56370 | 26.9 | 1000 | 12106 |
| ces-eng | multi30k_test_2016_flickr | 0.57217 | 32.7 | 1000 | 12955 |
| ces-fra | multi30k_test_2016_flickr | 0.57498 | 30.7 | 1000 | 13505 |
| deu-eng | multi30k_test_2016_flickr | 0.60234 | 39.1 | 1000 | 12955 |
| deu-fra | multi30k_test_2016_flickr | 0.60951 | 36.7 | 1000 | 13505 |
| eng-deu | multi30k_test_2016_flickr | 0.62191 | 32.5 | 1000 | 12106 |
| eng-fra | multi30k_test_2016_flickr | 0.69376 | 47.9 | 1000 | 13505 |
| fra-deu | multi30k_test_2016_flickr | 0.59597 | 29.3 | 1000 | 12106 |
| fra-eng | multi30k_test_2016_flickr | 0.64810 | 45.4 | 1000 | 12955 |
| deu-eng | multi30k_test_2017_flickr | 0.61895 | 38.9 | 1000 | 11374 |
| deu-fra | multi30k_test_2017_flickr | 0.60570 | 34.6 | 1000 | 12118 |
| eng-deu | multi30k_test_2017_flickr | 0.61458 | 32.1 | 1000 | 10755 |
| eng-fra | multi30k_test_2017_flickr | 0.69630 | 48.1 | 1000 | 12118 |
| fra-deu | multi30k_test_2017_flickr | 0.58207 | 27.7 | 1000 | 10755 |
| fra-eng | multi30k_test_2017_flickr | 0.67447 | 48.0 | 1000 | 11374 |
| deu-eng | multi30k_test_2017_mscoco | 0.54299 | 30.9 | 461 | 5231 |
| deu-fra | multi30k_test_2017_mscoco | 0.57789 | 32.3 | 461 | 5484 |
| eng-deu | multi30k_test_2017_mscoco | 0.56164 | 27.3 | 461 | 5158 |
| eng-fra | multi30k_test_2017_mscoco | 0.71453 | 51.9 | 461 | 5484 |
| fra-deu | multi30k_test_2017_mscoco | 0.53897 | 23.9 | 461 | 5158 |
| fra-eng | multi30k_test_2017_mscoco | 0.65274 | 46.5 | 461 | 5231 |
| ces-deu | multi30k_test_2018_flickr | 0.51543 | 22.4 | 1071 | 13703 |
| ces-eng | multi30k_test_2018_flickr | 0.57995 | 33.1 | 1071 | 14689 |
| ces-fra | multi30k_test_2018_flickr | 0.53232 | 26.0 | 1071 | 15867 |
| deu-eng | multi30k_test_2018_flickr | 0.58274 | 35.3 | 1071 | 14689 |
| deu-fra | multi30k_test_2018_flickr | 0.55809 | 29.3 | 1071 | 15867 |
| eng-deu | multi30k_test_2018_flickr | 0.58395 | 28.7 | 1071 | 13703 |
| eng-fra | multi30k_test_2018_flickr | 0.63770 | 39.3 | 1071 | 15867 |
| fra-deu | multi30k_test_2018_flickr | 0.53677 | 22.6 | 1071 | 13703 |
| fra-eng | multi30k_test_2018_flickr | 0.62909 | 41.0 | 1071 | 14689 |
| eng-fra | newsdiscusstest2015 | 0.62144 | 35.7 | 1500 | 27975 |
| fra-eng | newsdiscusstest2015 | 0.60513 | 37.5 | 1500 | 26982 |
| ces-deu | newssyscomb2009 | 0.52473 | 21.7 | 502 | 11271 |
| ces-eng | newssyscomb2009 | 0.55107 | 28.0 | 502 | 11818 |
| ces-fra | newssyscomb2009 | 0.56925 | 28.7 | 502 | 12331 |
| ces-spa | newssyscomb2009 | 0.56161 | 28.8 | 502 | 12503 |
| deu-eng | newssyscomb2009 | 0.55367 | 29.2 | 502 | 11818 |
| deu-fra | newssyscomb2009 | 0.55730 | 27.1 | 502 | 12331 |
| deu-spa | newssyscomb2009 | 0.54844 | 27.6 | 502 | 12503 |
| eng-deu | newssyscomb2009 | 0.53204 | 22.3 | 502 | 11271 |
| eng-fra | newssyscomb2009 | 0.57875 | 28.8 | 502 | 12331 |
| eng-spa | newssyscomb2009 | 0.57849 | 30.5 | 502 | 12503 |
| fra-deu | newssyscomb2009 | 0.52855 | 22.5 | 502 | 11271 |
| fra-eng | newssyscomb2009 | 0.57071 | 30.6 | 502 | 11818 |
| fra-spa | newssyscomb2009 | 0.60067 | 34.0 | 502 | 12503 |
| ita-deu | newssyscomb2009 | 0.53245 | 22.1 | 502 | 11271 |
| ita-eng | newssyscomb2009 | 0.59274 | 33.7 | 502 | 11818 |
| ita-fra | newssyscomb2009 | 0.61167 | 33.8 | 502 | 12331 |
| ita-spa | newssyscomb2009 | 0.60645 | 35.1 | 502 | 12503 |
| spa-deu | newssyscomb2009 | 0.52676 | 21.8 | 502 | 11271 |
| spa-fra | newssyscomb2009 | 0.61003 | 33.6 | 502 | 12331 |
| ces-deu | newstest2008 | 0.52450 | 21.6 | 2051 | 47447 |
| ces-eng | newstest2008 | 0.52805 | 24.9 | 2051 | 49380 |
| ces-fra | newstest2008 | 0.54135 | 25.4 | 2051 | 52685 |
| ces-spa | newstest2008 | 0.53925 | 26.2 | 2051 | 52586 |
| deu-eng | newstest2008 | 0.53756 | 26.2 | 2051 | 49380 |
| deu-fra | newstest2008 | 0.54147 | 25.5 | 2051 | 52685 |
| deu-spa | newstest2008 | 0.53296 | 24.8 | 2051 | 52586 |
| eng-deu | newstest2008 | 0.52399 | 22.4 | 2051 | 47447 |
| eng-fra | newstest2008 | 0.54809 | 26.1 | 2051 | 52685 |
| eng-spa | newstest2008 | 0.56027 | 29.1 | 2051 | 52586 |
| fra-deu | newstest2008 | 0.52211 | 21.8 | 2051 | 47447 |
| fra-eng | newstest2008 | 0.53878 | 26.1 | 2051 | 49380 |
| fra-spa | newstest2008 | 0.58122 | 32.5 | 2051 | 52586 |
| spa-deu | newstest2008 | 0.51468 | 20.9 | 2051 | 47447 |
| ces-deu | newstest2009 | 0.52537 | 22.4 | 2525 | 62816 |
| ces-eng | newstest2009 | 0.54467 | 27.1 | 2525 | 65399 |
| ces-fra | newstest2009 | 0.54545 | 26.1 | 2525 | 69263 |
| ces-spa | newstest2009 | 0.54339 | 26.3 | 2525 | 68111 |
| deu-eng | newstest2009 | 0.53323 | 25.9 | 2525 | 65399 |
| deu-fra | newstest2009 | 0.53408 | 25.0 | 2525 | 69263 |
| deu-spa | newstest2009 | 0.52999 | 24.4 | 2525 | 68111 |
| eng-deu | newstest2009 | 0.52387 | 21.5 | 2525 | 62816 |
| eng-fra | newstest2009 | 0.57057 | 28.7 | 2525 | 69263 |
| eng-spa | newstest2009 | 0.57376 | 29.6 | 2525 | 68111 |
| fra-deu | newstest2009 | 0.51980 | 21.6 | 2525 | 62816 |
| fra-eng | newstest2009 | 0.56151 | 29.5 | 2525 | 65399 |
| fra-spa | newstest2009 | 0.58173 | 31.4 | 2525 | 68111 |
| ita-deu | newstest2009 | 0.52409 | 22.1 | 2525 | 62816 |
| ita-eng | newstest2009 | 0.58598 | 32.9 | 2525 | 65399 |
| ita-fra | newstest2009 | 0.58722 | 31.5 | 2525 | 69263 |
| ita-spa | newstest2009 | 0.59235 | 33.1 | 2525 | 68111 |
| spa-deu | newstest2009 | 0.51708 | 20.7 | 2525 | 62816 |
| spa-eng | newstest2009 | 0.56094 | 29.2 | 2525 | 65399 |
| ces-deu | newstest2010 | 0.53608 | 23.5 | 2489 | 61503 |
| ces-eng | newstest2010 | 0.56348 | 28.8 | 2489 | 61711 |
| ces-fra | newstest2010 | 0.55510 | 27.2 | 2489 | 66022 |
| ces-spa | newstest2010 | 0.57375 | 30.6 | 2489 | 65480 |
| deu-eng | newstest2010 | 0.57666 | 29.8 | 2489 | 61711 |
| deu-fra | newstest2010 | 0.56822 | 28.2 | 2489 | 66022 |
| deu-spa | newstest2010 | 0.58446 | 31.5 | 2489 | 65480 |
| eng-deu | newstest2010 | 0.54037 | 24.8 | 2489 | 61503 |
| eng-fra | newstest2010 | 0.58935 | 31.2 | 2489 | 66022 |
| eng-spa | newstest2010 | 0.61230 | 35.6 | 2489 | 65480 |
| fra-deu | newstest2010 | 0.52993 | 23.2 | 2489 | 61503 |
| fra-eng | newstest2010 | 0.58580 | 31.7 | 2489 | 61711 |
| fra-spa | newstest2010 | 0.61883 | 36.8 | 2489 | 65480 |
| spa-deu | newstest2010 | 0.54232 | 24.8 | 2489 | 61503 |
| ces-deu | newstest2011 | 0.52042 | 22.2 | 3003 | 72981 |
| ces-eng | newstest2011 | 0.55380 | 27.8 | 3003 | 74681 |
| ces-fra | newstest2011 | 0.55651 | 28.0 | 3003 | 80626 |
| ces-spa | newstest2011 | 0.56004 | 29.9 | 3003 | 79476 |
| deu-eng | newstest2011 | 0.54263 | 25.8 | 3003 | 74681 |
| deu-fra | newstest2011 | 0.54883 | 26.4 | 3003 | 80626 |
| deu-spa | newstest2011 | 0.55738 | 29.1 | 3003 | 79476 |
| eng-deu | newstest2011 | 0.52251 | 22.4 | 3003 | 72981 |
| eng-fra | newstest2011 | 0.60292 | 33.3 | 3003 | 80626 |
| eng-spa | newstest2011 | 0.61355 | 37.6 | 3003 | 79476 |
| fra-deu | newstest2011 | 0.52082 | 22.1 | 3003 | 72981 |
| fra-eng | newstest2011 | 0.58971 | 32.3 | 3003 | 74681 |
| fra-spa | newstest2011 | 0.62318 | 38.7 | 3003 | 79476 |
| spa-fra | newstest2011 | 0.60467 | 34.0 | 3003 | 80626 |
| ces-deu | newstest2012 | 0.52126 | 22.9 | 3003 | 72886 |
| ces-eng | newstest2012 | 0.54980 | 27.0 | 3003 | 72812 |
| ces-fra | newstest2012 | 0.55088 | 26.8 | 3003 | 78011 |
| ces-spa | newstest2012 | 0.55950 | 29.9 | 3003 | 79006 |
| deu-eng | newstest2012 | 0.55507 | 27.5 | 3003 | 72812 |
| deu-fra | newstest2012 | 0.55160 | 26.6 | 3003 | 78011 |
| deu-spa | newstest2012 | 0.56307 | 30.1 | 3003 | 79006 |
| eng-deu | newstest2012 | 0.52121 | 22.9 | 3003 | 72886 |
| eng-fra | newstest2012 | 0.58675 | 30.8 | 3003 | 78011 |
| eng-spa | newstest2012 | 0.61689 | 37.9 | 3003 | 79006 |
| fra-deu | newstest2012 | 0.52009 | 23.2 | 3003 | 72886 |
| fra-eng | newstest2012 | 0.58405 | 32.3 | 3003 | 72812 |
| fra-spa | newstest2012 | 0.62038 | 38.5 | 3003 | 79006 |
| rus-deu | newstest2012 | 0.47965 | 18.3 | 3003 | 72886 |
| rus-eng | newstest2012 | 0.61258 | 36.1 | 3003 | 72812 |
| rus-fra | newstest2012 | 0.52674 | 24.2 | 3003 | 78011 |
| rus-spa | newstest2012 | 0.53760 | 27.4 | 3003 | 79006 |
| ces-deu | newstest2013 | 0.54483 | 25.3 | 3000 | 63737 |
| ces-eng | newstest2013 | 0.57212 | 30.7 | 3000 | 64505 |
| ces-fra | newstest2013 | 0.55258 | 28.4 | 3000 | 70037 |
| ces-spa | newstest2013 | 0.56179 | 30.6 | 3000 | 70528 |
| deu-eng | newstest2013 | 0.57382 | 31.0 | 3000 | 64505 |
| deu-fra | newstest2013 | 0.55576 | 28.8 | 3000 | 70037 |
| deu-spa | newstest2013 | 0.56220 | 30.9 | 3000 | 70528 |
| eng-deu | newstest2013 | 0.54830 | 26.6 | 3000 | 63737 |
| eng-fra | newstest2013 | 0.58195 | 32.6 | 3000 | 70037 |
| eng-spa | newstest2013 | 0.59254 | 34.6 | 3000 | 70528 |
| fra-deu | newstest2013 | 0.53465 | 24.6 | 3000 | 63737 |
| fra-eng | newstest2013 | 0.58395 | 32.9 | 3000 | 64505 |
| fra-spa | newstest2013 | 0.58748 | 34.1 | 3000 | 70528 |
| rus-deu | newstest2013 | 0.51980 | 22.4 | 3000 | 63737 |
| rus-eng | newstest2013 | 0.55557 | 28.9 | 3000 | 64505 |
| rus-fra | newstest2013 | 0.54627 | 27.6 | 3000 | 70037 |
| rus-spa | newstest2013 | 0.55540 | 30.5 | 3000 | 70528 |
| spa-deu | newstest2013 | 0.53925 | 24.8 | 3000 | 63737 |
| ces-eng | newstest2014 | 0.61449 | 33.9 | 3003 | 68065 |
| deu-eng | newstest2014 | 0.58733 | 32.1 | 3003 | 67337 |
| eng-deu | newstest2014 | 0.57701 | 26.5 | 3003 | 62688 |
| eng-fra | newstest2014 | 0.63976 | 38.1 | 3003 | 77306 |
| fra-eng | newstest2014 | 0.62627 | 36.8 | 3003 | 70708 |
| hin-eng | newstest2014 | 0.56343 | 26.4 | 2507 | 55571 |
| rus-eng | newstest2014 | 0.62633 | 36.6 | 3003 | 69210 |
| ces-eng | newstest2015 | 0.56562 | 30.7 | 2656 | 53569 |
| deu-eng | newstest2015 | 0.59036 | 33.3 | 2169 | 46443 |
| eng-deu | newstest2015 | 0.58604 | 30.1 | 2169 | 44260 |
| rus-eng | newstest2015 | 0.58794 | 32.5 | 2818 | 64428 |
| ces-eng | newstest2016 | 0.58896 | 32.6 | 2999 | 64670 |
| deu-eng | newstest2016 | 0.63945 | 39.4 | 2999 | 64119 |
| eng-deu | newstest2016 | 0.62731 | 35.9 | 2999 | 62669 |
| ron-eng | newstest2016 | 0.63051 | 38.1 | 1999 | 47562 |
| rus-eng | newstest2016 | 0.58858 | 32.5 | 2998 | 69278 |
| ces-eng | newstest2017 | 0.55759 | 29.0 | 3005 | 61721 |
| deu-eng | newstest2017 | 0.60252 | 34.8 | 3004 | 64399 |
| eng-deu | newstest2017 | 0.57779 | 28.7 | 3004 | 61287 |
| lav-eng | newstest2017 | 0.51103 | 20.2 | 2001 | 47511 |
| rus-eng | newstest2017 | 0.61663 | 36.1 | 3001 | 69025 |
| ces-eng | newstest2018 | 0.56663 | 29.6 | 2983 | 63495 |
| deu-eng | newstest2018 | 0.65768 | 41.8 | 2998 | 67012 |
| eng-deu | newstest2018 | 0.67590 | 43.5 | 2998 | 64276 |
| rus-eng | newstest2018 | 0.58427 | 31.5 | 3000 | 71291 |
| ces-deu | newstest2019 | 0.53405 | 23.8 | 1997 | 48746 |
| deu-eng | newstest2019 | 0.62158 | 37.7 | 2000 | 39227 |
| deu-fra | newstest2019 | 0.61819 | 34.4 | 1701 | 42509 |
| eng-deu | newstest2019 | 0.64640 | 39.8 | 1997 | 48746 |
| fra-deu | newstest2019 | 0.59291 | 27.6 | 1701 | 36446 |
| guj-eng | newstest2019 | 0.51165 | 22.5 | 1016 | 17757 |
| lit-eng | newstest2019 | 0.58019 | 29.1 | 1000 | 25878 |
| rus-eng | newstest2019 | 0.62499 | 37.8 | 2000 | 42642 |
| deu-eng | newstest2020 | 0.56495 | 30.9 | 785 | 38220 |
| deu-fra | newstest2020 | 0.59211 | 31.6 | 1619 | 36890 |
| eng-deu | newstest2020 | 0.58436 | 30.2 | 1418 | 52383 |
| fra-deu | newstest2020 | 0.59478 | 26.6 | 1619 | 30265 |
| pol-eng | newstest2020 | 0.56674 | 27.7 | 1001 | 21755 |
| rus-eng | newstest2020 | 0.62387 | 33.6 | 991 | 20217 |
| ces-eng | newstest2021 | 0.54943 | 25.6 | 1000 | 22056 |
| deu-eng | newstest2021 | 0.58675 | 30.5 | 1000 | 20180 |
| deu-fra | newstest2021 | 0.57690 | 30.0 | 1000 | 23757 |
| eng-deu | newstest2021 | 0.55381 | 24.9 | 1002 | 27970 |
| fra-deu | newstest2021 | 0.63942 | 37.2 | 1026 | 26077 |
| isl-eng | newstest2021 | 0.53701 | 29.2 | 1000 | 22529 |
| rus-eng | newstest2021 | 0.60760 | 33.7 | 1000 | 21228 |
| deu-eng | newstestALL2020 | 0.56898 | 30.8 | 785 | 38220 |
| eng-deu | newstestALL2020 | 0.58436 | 30.2 | 1418 | 52383 |
| rus-eng | newstestALL2020 | 0.62387 | 33.6 | 991 | 20217 |
| deu-eng | newstestB2020 | 0.56571 | 30.3 | 785 | 37696 |
| eng-deu | newstestB2020 | 0.57458 | 29.7 | 1418 | 53092 |
| rus-eng | newstestB2020 | 0.62934 | 35.5 | 991 | 20423 |
| afr-deu | ntrex128 | 0.54806 | 25.7 | 1997 | 48761 |
| afr-eng | ntrex128 | 0.71452 | 50.6 | 1997 | 47673 |
| afr-fra | ntrex128 | 0.55624 | 28.2 | 1997 | 53481 |
| afr-por | ntrex128 | 0.54364 | 26.9 | 1997 | 51631 |
| afr-spa | ntrex128 | 0.57498 | 32.3 | 1997 | 54107 |
| bel-deu | ntrex128 | 0.48215 | 17.8 | 1997 | 48761 |
| bel-eng | ntrex128 | 0.55146 | 26.7 | 1997 | 47673 |
| bel-fra | ntrex128 | 0.49288 | 20.4 | 1997 | 53481 |
| bel-por | ntrex128 | 0.48488 | 19.9 | 1997 | 51631 |
| bel-spa | ntrex128 | 0.50933 | 23.7 | 1997 | 54107 |
| ben-deu | ntrex128 | 0.43995 | 13.7 | 1997 | 48761 |
| ben-eng | ntrex128 | 0.53312 | 24.9 | 1997 | 47673 |
| ben-fra | ntrex128 | 0.45297 | 17.1 | 1997 | 53481 |
| ben-por | ntrex128 | 0.44323 | 15.5 | 1997 | 51631 |
| ben-spa | ntrex128 | 0.46993 | 19.5 | 1997 | 54107 |
| bul-deu | ntrex128 | 0.51786 | 20.9 | 1997 | 48761 |
| bul-eng | ntrex128 | 0.59510 | 31.3 | 1997 | 47673 |
| bul-fra | ntrex128 | 0.53787 | 25.4 | 1997 | 53481 |
| bul-por | ntrex128 | 0.52650 | 24.2 | 1997 | 51631 |
| bul-spa | ntrex128 | 0.54950 | 28.4 | 1997 | 54107 |
| cat-deu | ntrex128 | 0.52907 | 22.5 | 1997 | 48761 |
| cat-eng | ntrex128 | 0.62247 | 34.6 | 1997 | 47673 |
| cat-fra | ntrex128 | 0.55858 | 27.5 | 1997 | 53481 |
| cat-por | ntrex128 | 0.55916 | 28.3 | 1997 | 51631 |
| cat-spa | ntrex128 | 0.61209 | 35.6 | 1997 | 54107 |
| ces-deu | ntrex128 | 0.52704 | 22.5 | 1997 | 48761 |
| ces-eng | ntrex128 | 0.60742 | 33.1 | 1997 | 47673 |
| ces-fra | ntrex128 | 0.54283 | 26.3 | 1997 | 53481 |
| ces-por | ntrex128 | 0.52392 | 24.1 | 1997 | 51631 |
| ces-spa | ntrex128 | 0.55467 | 28.9 | 1997 | 54107 |
| cym-deu | ntrex128 | 0.48064 | 19.1 | 1997 | 48761 |
| cym-eng | ntrex128 | 0.60592 | 34.7 | 1997 | 47673 |
| cym-fra | ntrex128 | 0.50667 | 23.9 | 1997 | 53481 |
| cym-por | ntrex128 | 0.48189 | 20.5 | 1997 | 51631 |
| cym-spa | ntrex128 | 0.52160 | 26.7 | 1997 | 54107 |
| dan-deu | ntrex128 | 0.53284 | 24.4 | 1997 | 48761 |
| dan-eng | ntrex128 | 0.62092 | 37.5 | 1997 | 47673 |
| dan-fra | ntrex128 | 0.53068 | 25.4 | 1997 | 53481 |
| dan-por | ntrex128 | 0.52754 | 26.2 | 1997 | 51631 |
| dan-spa | ntrex128 | 0.55304 | 29.8 | 1997 | 54107 |
| deu-eng | ntrex128 | 0.61371 | 33.7 | 1997 | 47673 |
| deu-fra | ntrex128 | 0.54844 | 27.4 | 1997 | 53481 |
| deu-por | ntrex128 | 0.53694 | 25.3 | 1997 | 51631 |
| deu-spa | ntrex128 | 0.56148 | 29.8 | 1997 | 54107 |
| ell-deu | ntrex128 | 0.51567 | 21.1 | 1997 | 48761 |
| ell-eng | ntrex128 | 0.60389 | 34.0 | 1997 | 47673 |
| ell-fra | ntrex128 | 0.53343 | 25.1 | 1997 | 53481 |
| ell-por | ntrex128 | 0.53030 | 25.9 | 1997 | 51631 |
| ell-spa | ntrex128 | 0.55542 | 29.7 | 1997 | 54107 |
| eng-deu | ntrex128 | 0.57592 | 28.9 | 1997 | 48761 |
| eng-fra | ntrex128 | 0.60159 | 33.9 | 1997 | 53481 |
| eng-por | ntrex128 | 0.59020 | 32.6 | 1997 | 51631 |
| eng-spa | ntrex128 | 0.62826 | 38.6 | 1997 | 54107 |
| fao-deu | ntrex128 | 0.42717 | 16.1 | 1997 | 48761 |
| fao-eng | ntrex128 | 0.48210 | 24.5 | 1997 | 47673 |
| fao-fra | ntrex128 | 0.40770 | 16.9 | 1997 | 53481 |
| fao-por | ntrex128 | 0.40603 | 16.2 | 1997 | 51631 |
| fao-spa | ntrex128 | 0.42980 | 18.8 | 1997 | 54107 |
| fas-deu | ntrex128 | 0.47062 | 15.7 | 1997 | 48761 |
| fas-eng | ntrex128 | 0.53552 | 24.0 | 1997 | 47673 |
| fas-fra | ntrex128 | 0.48958 | 20.1 | 1997 | 53481 |
| fas-por | ntrex128 | 0.47091 | 18.3 | 1997 | 51631 |
| fas-spa | ntrex128 | 0.49946 | 22.5 | 1997 | 54107 |
| fra-deu | ntrex128 | 0.52037 | 22.1 | 1997 | 48761 |
| fra-eng | ntrex128 | 0.59918 | 32.7 | 1997 | 47673 |
| fra-por | ntrex128 | 0.53484 | 25.0 | 1997 | 51631 |
| fra-spa | ntrex128 | 0.56500 | 30.3 | 1997 | 54107 |
| gle-deu | ntrex128 | 0.45357 | 16.0 | 1997 | 48761 |
| gle-eng | ntrex128 | 0.54960 | 27.0 | 1997 | 47673 |
| gle-fra | ntrex128 | 0.47041 | 18.7 | 1997 | 53481 |
| gle-por | ntrex128 | 0.45725 | 17.5 | 1997 | 51631 |
| gle-spa | ntrex128 | 0.48897 | 22.4 | 1997 | 54107 |
| glg-deu | ntrex128 | 0.52710 | 22.4 | 1997 | 48761 |
| glg-eng | ntrex128 | 0.63076 | 37.0 | 1997 | 47673 |
| glg-fra | ntrex128 | 0.55231 | 27.2 | 1997 | 53481 |
| glg-por | ntrex128 | 0.56272 | 28.9 | 1997 | 51631 |
| glg-spa | ntrex128 | 0.61675 | 36.6 | 1997 | 54107 |
| guj-deu | ntrex128 | 0.40361 | 11.9 | 1997 | 48761 |
| guj-eng | ntrex128 | 0.52283 | 23.0 | 1997 | 47673 |
| guj-fra | ntrex128 | 0.41597 | 14.7 | 1997 | 53481 |
| guj-por | ntrex128 | 0.40085 | 13.0 | 1997 | 51631 |
| guj-spa | ntrex128 | 0.44800 | 18.3 | 1997 | 54107 |
| hin-deu | ntrex128 | 0.45618 | 14.4 | 1997 | 48761 |
| hin-eng | ntrex128 | 0.57183 | 27.9 | 1997 | 47673 |
| hin-fra | ntrex128 | 0.47504 | 18.5 | 1997 | 53481 |
| hin-por | ntrex128 | 0.45829 | 16.9 | 1997 | 51631 |
| hin-spa | ntrex128 | 0.48784 | 21.4 | 1997 | 54107 |
| hrv-deu | ntrex128 | 0.53567 | 23.2 | 1997 | 48761 |
| hrv-eng | ntrex128 | 0.61932 | 34.8 | 1997 | 47673 |
| hrv-fra | ntrex128 | 0.55306 | 27.6 | 1997 | 53481 |
| hrv-por | ntrex128 | 0.53968 | 26.3 | 1997 | 51631 |
| hrv-spa | ntrex128 | 0.56765 | 30.4 | 1997 | 54107 |
| hye-deu | ntrex128 | 0.42987 | 14.0 | 1997 | 48761 |
| hye-eng | ntrex128 | 0.49189 | 20.9 | 1997 | 47673 |
| hye-fra | ntrex128 | 0.44434 | 17.2 | 1997 | 53481 |
| hye-por | ntrex128 | 0.43069 | 16.0 | 1997 | 51631 |
| hye-spa | ntrex128 | 0.45889 | 19.5 | 1997 | 54107 |
| isl-deu | ntrex128 | 0.48392 | 19.5 | 1997 | 48761 |
| isl-eng | ntrex128 | 0.54720 | 27.5 | 1997 | 47673 |
| isl-fra | ntrex128 | 0.49971 | 22.5 | 1997 | 53481 |
| isl-por | ntrex128 | 0.47811 | 20.2 | 1997 | 51631 |
| isl-spa | ntrex128 | 0.51060 | 25.1 | 1997 | 54107 |
| ita-deu | ntrex128 | 0.53354 | 23.3 | 1997 | 48761 |
| ita-eng | ntrex128 | 0.63069 | 37.1 | 1997 | 47673 |
| ita-fra | ntrex128 | 0.56721 | 29.1 | 1997 | 53481 |
| ita-por | ntrex128 | 0.56298 | 28.9 | 1997 | 51631 |
| ita-spa | ntrex128 | 0.58483 | 32.6 | 1997 | 54107 |
| lav-deu | ntrex128 | 0.48637 | 17.5 | 1997 | 48761 |
| lav-eng | ntrex128 | 0.55909 | 25.5 | 1997 | 47673 |
| lav-fra | ntrex128 | 0.49579 | 20.4 | 1997 | 53481 |
| lav-por | ntrex128 | 0.47936 | 18.9 | 1997 | 51631 |
| lav-spa | ntrex128 | 0.51105 | 23.3 | 1997 | 54107 |
| lit-deu | ntrex128 | 0.49203 | 18.0 | 1997 | 48761 |
| lit-eng | ntrex128 | 0.55075 | 25.7 | 1997 | 47673 |
| lit-fra | ntrex128 | 0.50667 | 21.9 | 1997 | 53481 |
| lit-por | ntrex128 | 0.49771 | 20.8 | 1997 | 51631 |
| lit-spa | ntrex128 | 0.52333 | 24.8 | 1997 | 54107 |
| ltz-deu | ntrex128 | 0.51232 | 22.0 | 1997 | 48761 |
| ltz-eng | ntrex128 | 0.58218 | 32.4 | 1997 | 47673 |
| ltz-fra | ntrex128 | 0.49182 | 21.6 | 1997 | 53481 |
| ltz-por | ntrex128 | 0.46871 | 20.3 | 1997 | 51631 |
| ltz-spa | ntrex128 | 0.48975 | 23.6 | 1997 | 54107 |
| mar-deu | ntrex128 | 0.42225 | 12.5 | 1997 | 48761 |
| mar-eng | ntrex128 | 0.51583 | 22.2 | 1997 | 47673 |
| mar-fra | ntrex128 | 0.43088 | 15.1 | 1997 | 53481 |
| mar-por | ntrex128 | 0.42394 | 14.6 | 1997 | 51631 |
| mar-spa | ntrex128 | 0.44945 | 17.7 | 1997 | 54107 |
| mkd-deu | ntrex128 | 0.52537 | 21.8 | 1997 | 48761 |
| mkd-eng | ntrex128 | 0.62757 | 35.8 | 1997 | 47673 |
| mkd-fra | ntrex128 | 0.54428 | 26.4 | 1997 | 53481 |
| mkd-por | ntrex128 | 0.52919 | 24.5 | 1997 | 51631 |
| mkd-spa | ntrex128 | 0.56365 | 30.0 | 1997 | 54107 |
| nep-deu | ntrex128 | 0.40783 | 11.6 | 1997 | 48761 |
| nep-eng | ntrex128 | 0.51242 | 23.1 | 1997 | 47673 |
| nep-fra | ntrex128 | 0.41414 | 14.5 | 1997 | 53481 |
| nep-por | ntrex128 | 0.41356 | 13.8 | 1997 | 51631 |
| nep-spa | ntrex128 | 0.43667 | 17.0 | 1997 | 54107 |
| nld-deu | ntrex128 | 0.55633 | 25.3 | 1997 | 48761 |
| nld-eng | ntrex128 | 0.63172 | 36.0 | 1997 | 47673 |
| nld-fra | ntrex128 | 0.55161 | 27.1 | 1997 | 53481 |
| nld-por | ntrex128 | 0.54074 | 26.8 | 1997 | 51631 |
| nld-spa | ntrex128 | 0.57106 | 31.7 | 1997 | 54107 |
| nno-deu | ntrex128 | 0.52489 | 23.9 | 1997 | 48761 |
| nno-eng | ntrex128 | 0.64889 | 41.6 | 1997 | 47673 |
| nno-fra | ntrex128 | 0.53358 | 26.2 | 1997 | 53481 |
| nno-por | ntrex128 | 0.52089 | 24.7 | 1997 | 51631 |
| nno-spa | ntrex128 | 0.54863 | 29.4 | 1997 | 54107 |
| nob-deu | ntrex128 | 0.54650 | 25.5 | 1997 | 48761 |
| nob-eng | ntrex128 | 0.64444 | 39.3 | 1997 | 47673 |
| nob-fra | ntrex128 | 0.55024 | 28.0 | 1997 | 53481 |
| nob-por | ntrex128 | 0.53537 | 25.9 | 1997 | 51631 |
| nob-spa | ntrex128 | 0.56899 | 31.4 | 1997 | 54107 |
| pan-deu | ntrex128 | 0.40429 | 11.6 | 1997 | 48761 |
| pan-eng | ntrex128 | 0.49942 | 20.6 | 1997 | 47673 |
| pan-fra | ntrex128 | 0.41440 | 14.8 | 1997 | 53481 |
| pan-spa | ntrex128 | 0.42840 | 16.6 | 1997 | 54107 |
| pol-deu | ntrex128 | 0.50884 | 20.4 | 1997 | 48761 |
| pol-eng | ntrex128 | 0.55781 | 26.2 | 1997 | 47673 |
| pol-fra | ntrex128 | 0.52511 | 23.9 | 1997 | 53481 |
| pol-por | ntrex128 | 0.50796 | 21.8 | 1997 | 51631 |
| pol-spa | ntrex128 | 0.53122 | 25.6 | 1997 | 54107 |
| por-deu | ntrex128 | 0.54003 | 23.7 | 1997 | 48761 |
| por-eng | ntrex128 | 0.63798 | 37.6 | 1997 | 47673 |
| por-fra | ntrex128 | 0.56317 | 28.3 | 1997 | 53481 |
| por-spa | ntrex128 | 0.59244 | 33.9 | 1997 | 54107 |
| prs-deu | ntrex128 | 0.44878 | 14.3 | 1997 | 48761 |
| prs-eng | ntrex128 | 0.52855 | 24.2 | 1997 | 47673 |
| prs-fra | ntrex128 | 0.46323 | 17.6 | 1997 | 53481 |
| prs-por | ntrex128 | 0.45211 | 16.9 | 1997 | 51631 |
| prs-spa | ntrex128 | 0.47595 | 20.5 | 1997 | 54107 |
| pus-eng | ntrex128 | 0.40630 | 13.0 | 1997 | 47673 |
| ron-deu | ntrex128 | 0.52534 | 21.6 | 1997 | 48761 |
| ron-eng | ntrex128 | 0.60733 | 32.2 | 1997 | 47673 |
| ron-fra | ntrex128 | 0.55222 | 26.1 | 1997 | 53481 |
| ron-por | ntrex128 | 0.54549 | 26.4 | 1997 | 51631 |
| ron-spa | ntrex128 | 0.57503 | 31.6 | 1997 | 54107 |
| rus-deu | ntrex128 | 0.49519 | 18.5 | 1997 | 48761 |
| rus-eng | ntrex128 | 0.55126 | 25.6 | 1997 | 47673 |
| rus-fra | ntrex128 | 0.51684 | 22.8 | 1997 | 53481 |
| rus-por | ntrex128 | 0.49329 | 20.4 | 1997 | 51631 |
| rus-spa | ntrex128 | 0.52316 | 24.8 | 1997 | 54107 |
| slk-deu | ntrex128 | 0.52066 | 22.0 | 1997 | 48761 |
| slk-eng | ntrex128 | 0.60940 | 33.0 | 1997 | 47673 |
| slk-fra | ntrex128 | 0.53303 | 25.8 | 1997 | 53481 |
| slk-por | ntrex128 | 0.51245 | 23.0 | 1997 | 51631 |
| slk-spa | ntrex128 | 0.54489 | 28.3 | 1997 | 54107 |
| slv-deu | ntrex128 | 0.52189 | 22.0 | 1997 | 48761 |
| slv-eng | ntrex128 | 0.58552 | 30.4 | 1997 | 47673 |
| slv-fra | ntrex128 | 0.53247 | 25.3 | 1997 | 53481 |
| slv-por | ntrex128 | 0.51817 | 23.4 | 1997 | 51631 |
| slv-spa | ntrex128 | 0.54582 | 27.7 | 1997 | 54107 |
| spa-fra | ntrex128 | 0.56549 | 28.3 | 1997 | 53481 |
| spa-por | ntrex128 | 0.56372 | 28.5 | 1997 | 51631 |
| sqi-deu | ntrex128 | 0.52259 | 21.7 | 1997 | 48761 |
| sqi-eng | ntrex128 | 0.62439 | 36.2 | 1997 | 47673 |
| sqi-fra | ntrex128 | 0.54643 | 26.2 | 1997 | 53481 |
| sqi-por | ntrex128 | 0.53857 | 26.2 | 1997 | 51631 |
| sqi-spa | ntrex128 | 0.56804 | 30.8 | 1997 | 54107 |
| srp_Cyrl-deu | ntrex128 | 0.48837 | 18.6 | 1997 | 48761 |
| srp_Cyrl-eng | ntrex128 | 0.54292 | 24.5 | 1997 | 47673 |
| srp_Cyrl-fra | ntrex128 | 0.48977 | 21.5 | 1997 | 53481 |
| srp_Cyrl-por | ntrex128 | 0.48429 | 20.5 | 1997 | 51631 |
| srp_Cyrl-spa | ntrex128 | 0.51373 | 24.9 | 1997 | 54107 |
| swe-deu | ntrex128 | 0.54871 | 25.9 | 1997 | 48761 |
| swe-eng | ntrex128 | 0.65427 | 41.2 | 1997 | 47673 |
| swe-fra | ntrex128 | 0.55294 | 28.2 | 1997 | 53481 |
| swe-por | ntrex128 | 0.53911 | 26.7 | 1997 | 51631 |
| swe-spa | ntrex128 | 0.57293 | 31.9 | 1997 | 54107 |
| tgk_Cyrl-deu | ntrex128 | 0.40503 | 11.8 | 1997 | 48761 |
| tgk_Cyrl-eng | ntrex128 | 0.45221 | 16.4 | 1997 | 47673 |
| tgk_Cyrl-fra | ntrex128 | 0.41930 | 14.4 | 1997 | 53481 |
| tgk_Cyrl-por | ntrex128 | 0.40576 | 12.7 | 1997 | 51631 |
| tgk_Cyrl-spa | ntrex128 | 0.43095 | 16.4 | 1997 | 54107 |
| ukr-deu | ntrex128 | 0.49644 | 18.5 | 1997 | 48761 |
| ukr-eng | ntrex128 | 0.55193 | 25.7 | 1997 | 47673 |
| ukr-fra | ntrex128 | 0.50914 | 21.8 | 1997 | 53481 |
| ukr-por | ntrex128 | 0.49879 | 21.3 | 1997 | 51631 |
| ukr-spa | ntrex128 | 0.52640 | 25.6 | 1997 | 54107 |
| urd-deu | ntrex128 | 0.43742 | 14.1 | 1997 | 48761 |
| urd-eng | ntrex128 | 0.52486 | 23.8 | 1997 | 47673 |
| urd-fra | ntrex128 | 0.45409 | 17.4 | 1997 | 53481 |
| urd-por | ntrex128 | 0.42660 | 14.6 | 1997 | 51631 |
| urd-spa | ntrex128 | 0.46414 | 19.4 | 1997 | 54107 |
| ben-eng | tico19-test | 0.55418 | 27.3 | 2100 | 56824 |
| ben-fra | tico19-test | 0.45176 | 18.3 | 2100 | 64661 |
| ben-por | tico19-test | 0.49778 | 20.9 | 2100 | 62729 |
| ben-spa | tico19-test | 0.51344 | 25.8 | 2100 | 66563 |
| eng-fra | tico19-test | 0.62001 | 38.2 | 2100 | 64661 |
| eng-por | tico19-test | 0.71654 | 48.3 | 2100 | 62729 |
| eng-spa | tico19-test | 0.71947 | 50.2 | 2100 | 66563 |
| fas-eng | tico19-test | 0.58617 | 31.6 | 2100 | 56315 |
| fas-fra | tico19-test | 0.50453 | 23.9 | 2100 | 64661 |
| fas-por | tico19-test | 0.55031 | 28.1 | 2100 | 62729 |
| fas-spa | tico19-test | 0.56113 | 29.9 | 2100 | 66563 |
| fra-eng | tico19-test | 0.60512 | 35.8 | 2100 | 56323 |
| fra-por | tico19-test | 0.57530 | 33.0 | 2100 | 62729 |
| fra-spa | tico19-test | 0.58823 | 35.6 | 2100 | 66563 |
| hin-eng | tico19-test | 0.64146 | 39.6 | 2100 | 56323 |
| hin-fra | tico19-test | 0.51582 | 25.4 | 2100 | 64661 |
| hin-por | tico19-test | 0.57182 | 30.9 | 2100 | 62729 |
| hin-spa | tico19-test | 0.58341 | 33.7 | 2100 | 66563 |
| mar-eng | tico19-test | 0.51194 | 21.4 | 2100 | 56315 |
| mar-fra | tico19-test | 0.43359 | 16.8 | 2100 | 64661 |
| mar-por | tico19-test | 0.47089 | 20.3 | 2100 | 62729 |
| mar-spa | tico19-test | 0.48435 | 22.8 | 2100 | 66563 |
| nep-eng | tico19-test | 0.57060 | 30.1 | 2100 | 56824 |
| nep-fra | tico19-test | 0.46212 | 19.7 | 2100 | 64661 |
| nep-por | tico19-test | 0.51024 | 24.0 | 2100 | 62729 |
| nep-spa | tico19-test | 0.51651 | 25.9 | 2100 | 66563 |
| por-eng | tico19-test | 0.72228 | 47.4 | 2100 | 56315 |
| por-fra | tico19-test | 0.58934 | 33.4 | 2100 | 64661 |
| por-spa | tico19-test | 0.67509 | 44.1 | 2100 | 66563 |
| prs-eng | tico19-test | 0.54979 | 26.6 | 2100 | 56824 |
| prs-fra | tico19-test | 0.47627 | 21.0 | 2100 | 64661 |
| prs-por | tico19-test | 0.52000 | 25.6 | 2100 | 62729 |
| prs-spa | tico19-test | 0.54172 | 28.5 | 2100 | 66563 |
| pus-eng | tico19-test | 0.48655 | 23.1 | 2100 | 56315 |
| pus-fra | tico19-test | 0.40980 | 16.2 | 2100 | 64661 |
| pus-por | tico19-test | 0.44879 | 19.5 | 2100 | 62729 |
| pus-spa | tico19-test | 0.45280 | 20.4 | 2100 | 66563 |
| rus-eng | tico19-test | 0.59787 | 30.4 | 2100 | 56323 |
| rus-fra | tico19-test | 0.52211 | 24.1 | 2100 | 64661 |
| rus-por | tico19-test | 0.56473 | 26.9 | 2100 | 62729 |
| rus-spa | tico19-test | 0.58626 | 31.1 | 2100 | 66563 |
| spa-fra | tico19-test | 0.59078 | 33.1 | 2100 | 64661 |
| urd-eng | tico19-test | 0.51957 | 25.0 | 2100 | 56315 |
| urd-fra | tico19-test | 0.43707 | 17.2 | 2100 | 64661 |
| urd-por | tico19-test | 0.47484 | 20.1 | 2100 | 62729 |
| urd-spa | tico19-test | 0.48812 | 22.4 | 2100 | 66563 |
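
The chr-F and BLEU columns above are standard reference-based metrics. As a rough sketch of how scores in this format can be reproduced (an illustration, not necessarily the exact evaluation pipeline behind these tables), the `sacrebleu` Python package computes both; `hyps.txt` and `refs.txt` below are hypothetical file names holding one aligned segment per line:

```python
# Minimal sketch: corpus-level BLEU and chrF with sacrebleu (pip install sacrebleu).
# hyps.txt / refs.txt are placeholder file names, one sentence per line, aligned.
import sacrebleu

with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.rstrip("\n") for line in f]
with open("refs.txt", encoding="utf-8") as f:
    refs = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])  # corresponds to the BLEU column
chrf = sacrebleu.corpus_chrf(hyps, [refs])  # corresponds to the chr-F column
print(f"BLEU  = {bleu.score:.1f}")
print(f"chr-F = {chrf.score / 100:.5f}")  # the tables report chr-F on a 0-1 scale
```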
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these publications if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:49:33 EEST 2024
* port machine: LM0-400-22516.local
| glg-deu | flores200-devtest | 0.56174 | 25.8 | 1012 | 25094 |
| glg-eng | flores200-devtest | 0.65391 | 38.4 | 1012 | 24721 |
| glg-fra | flores200-devtest | 0.61762 | 35.7 | 1012 | 28343 |
| glg-por | flores200-devtest | 0.60170 | 32.9 | 1012 | 26519 |
| glg-spa | flores200-devtest | 0.53214 | 24.3 | 1012 | 29199 |
| guj-deu | flores200-devtest | 0.43101 | 14.2 | 1012 | 25094 |
| guj-eng | flores200-devtest | 0.55857 | 26.4 | 1012 | 24721 |
| guj-fra | flores200-devtest | 0.47047 | 19.8 | 1012 | 28343 |
| guj-por | flores200-devtest | 0.45641 | 18.5 | 1012 | 26519 |
| guj-spa | flores200-devtest | 0.42457 | 14.5 | 1012 | 29199 |
| hat-deu | flores200-devtest | 0.49247 | 19.2 | 1012 | 25094 |
| hat-eng | flores200-devtest | 0.58655 | 31.7 | 1012 | 24721 |
| hat-fra | flores200-devtest | 0.60736 | 34.2 | 1012 | 28343 |
| hat-por | flores200-devtest | 0.54733 | 27.3 | 1012 | 26519 |
| hat-spa | flores200-devtest | 0.46963 | 17.9 | 1012 | 29199 |
| hin-deu | flores200-devtest | 0.50305 | 20.3 | 1012 | 25094 |
| hin-eng | flores200-devtest | 0.60811 | 34.0 | 1012 | 24721 |
| hin-fra | flores200-devtest | 0.53919 | 25.9 | 1012 | 28343 |
| hin-por | flores200-devtest | 0.53151 | 25.6 | 1012 | 26519 |
| hin-spa | flores200-devtest | 0.46051 | 17.4 | 1012 | 29199 |
| hne-deu | flores200-devtest | 0.48386 | 18.4 | 1012 | 25094 |
| hne-eng | flores200-devtest | 0.59671 | 32.3 | 1012 | 24721 |
| hne-fra | flores200-devtest | 0.52013 | 24.5 | 1012 | 28343 |
| hne-por | flores200-devtest | 0.51345 | 23.8 | 1012 | 26519 |
| hne-spa | flores200-devtest | 0.44481 | 16.3 | 1012 | 29199 |
| hrv-deu | flores200-devtest | 0.55524 | 26.0 | 1012 | 25094 |
| hrv-eng | flores200-devtest | 0.61977 | 34.9 | 1012 | 24721 |
| hrv-fra | flores200-devtest | 0.59318 | 32.7 | 1012 | 28343 |
| hrv-por | flores200-devtest | 0.57603 | 30.2 | 1012 | 26519 |
| hrv-spa | flores200-devtest | 0.50242 | 21.5 | 1012 | 29199 |
| hye-deu | flores200-devtest | 0.48676 | 19.2 | 1012 | 25094 |
| hye-eng | flores200-devtest | 0.55729 | 27.0 | 1012 | 24721 |
| hye-fra | flores200-devtest | 0.52152 | 25.2 | 1012 | 28343 |
| hye-por | flores200-devtest | 0.51026 | 23.3 | 1012 | 26519 |
| hye-spa | flores200-devtest | 0.45459 | 17.8 | 1012 | 29199 |
| isl-deu | flores200-devtest | 0.48677 | 20.5 | 1012 | 25094 |
| isl-eng | flores200-devtest | 0.54804 | 29.1 | 1012 | 24721 |
| isl-fra | flores200-devtest | 0.51362 | 25.0 | 1012 | 28343 |
| isl-por | flores200-devtest | 0.50201 | 23.8 | 1012 | 26519 |
| isl-spa | flores200-devtest | 0.44801 | 17.5 | 1012 | 29199 |
| ita-deu | flores200-devtest | 0.54589 | 22.9 | 1012 | 25094 |
| ita-eng | flores200-devtest | 0.60660 | 30.9 | 1012 | 24721 |
| ita-fra | flores200-devtest | 0.59811 | 31.0 | 1012 | 28343 |
| ita-por | flores200-devtest | 0.57808 | 28.4 | 1012 | 26519 |
| ita-spa | flores200-devtest | 0.52244 | 23.3 | 1012 | 29199 |
| kea-deu | flores200-devtest | 0.48107 | 19.2 | 1012 | 25094 |
| kea-eng | flores200-devtest | 0.59570 | 34.5 | 1012 | 24721 |
| kea-fra | flores200-devtest | 0.53683 | 26.8 | 1012 | 28343 |
| kea-por | flores200-devtest | 0.57642 | 30.3 | 1012 | 26519 |
| kea-spa | flores200-devtest | 0.47048 | 18.6 | 1012 | 29199 |
| lij-deu | flores200-devtest | 0.49270 | 19.4 | 1012 | 25094 |
| lij-eng | flores200-devtest | 0.58369 | 30.8 | 1012 | 24721 |
| lij-fra | flores200-devtest | 0.55002 | 28.6 | 1012 | 28343 |
| lij-por | flores200-devtest | 0.54155 | 26.7 | 1012 | 26519 |
| lij-spa | flores200-devtest | 0.46656 | 18.7 | 1012 | 29199 |
| lim-deu | flores200-devtest | 0.44183 | 15.0 | 1012 | 25094 |
| lim-eng | flores200-devtest | 0.46674 | 20.3 | 1012 | 24721 |
| lim-fra | flores200-devtest | 0.43685 | 17.8 | 1012 | 28343 |
| lim-por | flores200-devtest | 0.42699 | 16.3 | 1012 | 26519 |
| lit-deu | flores200-devtest | 0.51669 | 21.9 | 1012 | 25094 |
| lit-eng | flores200-devtest | 0.57849 | 30.5 | 1012 | 24721 |
| lit-fra | flores200-devtest | 0.55896 | 29.0 | 1012 | 28343 |
| lit-por | flores200-devtest | 0.53960 | 26.3 | 1012 | 26519 |
| lit-spa | flores200-devtest | 0.48120 | 19.7 | 1012 | 29199 |
| lmo-deu | flores200-devtest | 0.44732 | 14.2 | 1012 | 25094 |
| lmo-eng | flores200-devtest | 0.51710 | 23.3 | 1012 | 24721 |
| lmo-fra | flores200-devtest | 0.49129 | 21.5 | 1012 | 28343 |
| lmo-por | flores200-devtest | 0.49153 | 21.4 | 1012 | 26519 |
| lmo-spa | flores200-devtest | 0.43363 | 15.4 | 1012 | 29199 |
| ltz-deu | flores200-devtest | 0.58897 | 29.8 | 1012 | 25094 |
| ltz-eng | flores200-devtest | 0.62250 | 36.2 | 1012 | 24721 |
| ltz-fra | flores200-devtest | 0.57460 | 31.6 | 1012 | 28343 |
| ltz-por | flores200-devtest | 0.53674 | 27.1 | 1012 | 26519 |
| ltz-spa | flores200-devtest | 0.46048 | 18.8 | 1012 | 29199 |
| mag-deu | flores200-devtest | 0.49176 | 18.9 | 1012 | 25094 |
| mag-eng | flores200-devtest | 0.59691 | 32.2 | 1012 | 24721 |
| mag-fra | flores200-devtest | 0.52068 | 24.1 | 1012 | 28343 |
| mag-por | flores200-devtest | 0.52006 | 23.8 | 1012 | 26519 |
| mag-spa | flores200-devtest | 0.44945 | 16.5 | 1012 | 29199 |
| mai-deu | flores200-devtest | 0.46893 | 16.5 | 1012 | 25094 |
| mai-eng | flores200-devtest | 0.56282 | 27.7 | 1012 | 24721 |
| mai-fra | flores200-devtest | 0.50286 | 22.2 | 1012 | 28343 |
| mai-por | flores200-devtest | 0.49523 | 21.6 | 1012 | 26519 |
| mai-spa | flores200-devtest | 0.44271 | 15.9 | 1012 | 29199 |
| mar-deu | flores200-devtest | 0.44712 | 14.8 | 1012 | 25094 |
| mar-eng | flores200-devtest | 0.54222 | 25.4 | 1012 | 24721 |
| mar-fra | flores200-devtest | 0.47383 | 19.6 | 1012 | 28343 |
| mar-por | flores200-devtest | 0.46593 | 18.7 | 1012 | 26519 |
| mar-spa | flores200-devtest | 0.41912 | 14.0 | 1012 | 29199 |
| mkd-deu | flores200-devtest | 0.56267 | 26.8 | 1012 | 25094 |
| mkd-eng | flores200-devtest | 0.64902 | 38.8 | 1012 | 24721 |
| mkd-fra | flores200-devtest | 0.60051 | 33.9 | 1012 | 28343 |
| mkd-por | flores200-devtest | 0.59197 | 32.9 | 1012 | 26519 |
| mkd-spa | flores200-devtest | 0.50972 | 22.8 | 1012 | 29199 |
| nld-deu | flores200-devtest | 0.53072 | 21.8 | 1012 | 25094 |
| nld-eng | flores200-devtest | 0.58671 | 30.5 | 1012 | 24721 |
| nld-fra | flores200-devtest | 0.55677 | 27.5 | 1012 | 28343 |
| nld-por | flores200-devtest | 0.53989 | 25.6 | 1012 | 26519 |
| nld-spa | flores200-devtest | 0.48443 | 19.5 | 1012 | 29199 |
| nno-deu | flores200-devtest | 0.56707 | 27.3 | 1012 | 25094 |
| nno-eng | flores200-devtest | 0.67683 | 43.2 | 1012 | 24721 |
| nno-fra | flores200-devtest | 0.59829 | 34.3 | 1012 | 28343 |
| nno-por | flores200-devtest | 0.58723 | 32.5 | 1012 | 26519 |
| nno-spa | flores200-devtest | 0.50217 | 22.0 | 1012 | 29199 |
| nob-deu | flores200-devtest | 0.56197 | 26.5 | 1012 | 25094 |
| nob-eng | flores200-devtest | 0.66428 | 41.7 | 1012 | 24721 |
| nob-fra | flores200-devtest | 0.59531 | 33.1 | 1012 | 28343 |
| nob-por | flores200-devtest | 0.58521 | 31.7 | 1012 | 26519 |
| nob-spa | flores200-devtest | 0.50418 | 21.4 | 1012 | 29199 |
| npi-deu | flores200-devtest | 0.44364 | 14.6 | 1012 | 25094 |
| npi-eng | flores200-devtest | 0.54309 | 26.1 | 1012 | 24721 |
| npi-fra | flores200-devtest | 0.47458 | 19.7 | 1012 | 28343 |
| npi-por | flores200-devtest | 0.46702 | 18.9 | 1012 | 26519 |
| npi-spa | flores200-devtest | 0.41720 | 13.9 | 1012 | 29199 |
| oci-deu | flores200-devtest | 0.56668 | 26.9 | 1012 | 25094 |
| oci-eng | flores200-devtest | 0.70282 | 46.8 | 1012 | 24721 |
| oci-fra | flores200-devtest | 0.64408 | 39.1 | 1012 | 28343 |
| oci-por | flores200-devtest | 0.62256 | 35.7 | 1012 | 26519 |
| oci-spa | flores200-devtest | 0.51705 | 22.3 | 1012 | 29199 |
| pan-deu | flores200-devtest | 0.44428 | 15.1 | 1012 | 25094 |
| pan-eng | flores200-devtest | 0.52652 | 23.0 | 1012 | 24721 |
| pan-fra | flores200-devtest | 0.47743 | 19.9 | 1012 | 28343 |
| pan-por | flores200-devtest | 0.46585 | 18.8 | 1012 | 26519 |
| pan-spa | flores200-devtest | 0.41798 | 14.5 | 1012 | 29199 |
| pap-deu | flores200-devtest | 0.53397 | 23.5 | 1012 | 25094 |
| pap-eng | flores200-devtest | 0.67741 | 43.1 | 1012 | 24721 |
| pap-fra | flores200-devtest | 0.57787 | 31.1 | 1012 | 28343 |
| pap-por | flores200-devtest | 0.59003 | 32.9 | 1012 | 26519 |
| pap-spa | flores200-devtest | 0.49768 | 21.8 | 1012 | 29199 |
| pes-deu | flores200-devtest | 0.50787 | 20.9 | 1012 | 25094 |
| pes-eng | flores200-devtest | 0.58693 | 31.1 | 1012 | 24721 |
| pes-fra | flores200-devtest | 0.55060 | 27.9 | 1012 | 28343 |
| pes-por | flores200-devtest | 0.54139 | 26.6 | 1012 | 26519 |
| pes-spa | flores200-devtest | 0.47230 | 18.6 | 1012 | 29199 |
| pol-deu | flores200-devtest | 0.51514 | 20.8 | 1012 | 25094 |
| pol-eng | flores200-devtest | 0.56021 | 26.2 | 1012 | 24721 |
| pol-fra | flores200-devtest | 0.55176 | 27.0 | 1012 | 28343 |
| pol-por | flores200-devtest | 0.52998 | 24.3 | 1012 | 26519 |
| pol-spa | flores200-devtest | 0.48344 | 19.4 | 1012 | 29199 |
| por-deu | flores200-devtest | 0.58002 | 29.3 | 1012 | 25094 |
| por-eng | flores200-devtest | 0.69694 | 46.0 | 1012 | 24721 |
| por-fra | flores200-devtest | 0.64146 | 39.6 | 1012 | 28343 |
| por-spa | flores200-devtest | 0.53508 | 25.3 | 1012 | 29199 |
| prs-deu | flores200-devtest | 0.49849 | 20.4 | 1012 | 25094 |
| prs-eng | flores200-devtest | 0.58120 | 32.0 | 1012 | 24721 |
| prs-fra | flores200-devtest | 0.53939 | 27.0 | 1012 | 28343 |
| prs-por | flores200-devtest | 0.53479 | 26.7 | 1012 | 26519 |
| prs-spa | flores200-devtest | 0.46241 | 18.3 | 1012 | 29199 |
| ron-deu | flores200-devtest | 0.57214 | 27.4 | 1012 | 25094 |
| ron-eng | flores200-devtest | 0.66701 | 40.4 | 1012 | 24721 |
| ron-fra | flores200-devtest | 0.63234 | 37.2 | 1012 | 28343 |
| ron-por | flores200-devtest | 0.61838 | 35.4 | 1012 | 26519 |
| ron-spa | flores200-devtest | 0.52856 | 24.3 | 1012 | 29199 |
| rus-deu | flores200-devtest | 0.54446 | 23.9 | 1012 | 25094 |
| rus-eng | flores200-devtest | 0.60131 | 32.0 | 1012 | 24721 |
| rus-fra | flores200-devtest | 0.57986 | 30.4 | 1012 | 28343 |
| rus-por | flores200-devtest | 0.56600 | 28.7 | 1012 | 26519 |
| rus-spa | flores200-devtest | 0.49871 | 21.2 | 1012 | 29199 |
| scn-deu | flores200-devtest | 0.46523 | 17.0 | 1012 | 25094 |
| scn-eng | flores200-devtest | 0.53341 | 26.1 | 1012 | 24721 |
| scn-fra | flores200-devtest | 0.51481 | 25.0 | 1012 | 28343 |
| scn-por | flores200-devtest | 0.50343 | 23.8 | 1012 | 26519 |
| scn-spa | flores200-devtest | 0.44756 | 17.1 | 1012 | 29199 |
| slk-deu | flores200-devtest | 0.53932 | 23.6 | 1012 | 25094 |
| slk-eng | flores200-devtest | 0.63137 | 35.4 | 1012 | 24721 |
| slk-fra | flores200-devtest | 0.56587 | 29.9 | 1012 | 28343 |
| slk-por | flores200-devtest | 0.54523 | 27.3 | 1012 | 26519 |
| slk-spa | flores200-devtest | 0.48275 | 20.1 | 1012 | 29199 |
| slv-deu | flores200-devtest | 0.54583 | 24.5 | 1012 | 25094 |
| slv-eng | flores200-devtest | 0.59952 | 32.4 | 1012 | 24721 |
| slv-fra | flores200-devtest | 0.57418 | 30.3 | 1012 | 28343 |
| slv-por | flores200-devtest | 0.55838 | 28.4 | 1012 | 26519 |
| slv-spa | flores200-devtest | 0.49438 | 20.7 | 1012 | 29199 |
| spa-deu | flores200-devtest | 0.52303 | 20.0 | 1012 | 25094 |
| spa-eng | flores200-devtest | 0.57648 | 26.7 | 1012 | 24721 |
| srd-deu | flores200-devtest | 0.47651 | 18.6 | 1012 | 25094 |
| srd-eng | flores200-devtest | 0.56624 | 30.5 | 1012 | 24721 |
| srd-fra | flores200-devtest | 0.52746 | 26.8 | 1012 | 28343 |
| srd-por | flores200-devtest | 0.52301 | 26.4 | 1012 | 26519 |
| srd-spa | flores200-devtest | 0.45213 | 17.7 | 1012 | 29199 |
| srp_Cyrl-deu | flores200-devtest | 0.57563 | 27.7 | 1012 | 25094 |
| srp_Cyrl-eng | flores200-devtest | 0.66201 | 39.9 | 1012 | 24721 |
| srp_Cyrl-fra | flores200-devtest | 0.61570 | 35.0 | 1012 | 28343 |
| srp_Cyrl-por | flores200-devtest | 0.60561 | 33.6 | 1012 | 26519 |
| srp_Cyrl-spa | flores200-devtest | 0.51500 | 22.4 | 1012 | 29199 |
| swe-deu | flores200-devtest | 0.59607 | 31.6 | 1012 | 25094 |
| swe-eng | flores200-devtest | 0.69032 | 46.0 | 1012 | 24721 |
| swe-fra | flores200-devtest | 0.62610 | 37.8 | 1012 | 28343 |
| swe-por | flores200-devtest | 0.60692 | 35.0 | 1012 | 26519 |
| swe-spa | flores200-devtest | 0.51448 | 23.0 | 1012 | 29199 |
| szl-deu | flores200-devtest | 0.51005 | 22.0 | 1012 | 25094 |
| szl-eng | flores200-devtest | 0.57536 | 30.6 | 1012 | 24721 |
| szl-fra | flores200-devtest | 0.54029 | 28.2 | 1012 | 28343 |
| szl-por | flores200-devtest | 0.52911 | 26.5 | 1012 | 26519 |
| szl-spa | flores200-devtest | 0.46280 | 18.8 | 1012 | 29199 |
| tgk-deu | flores200-devtest | 0.45372 | 15.8 | 1012 | 25094 |
| tgk-eng | flores200-devtest | 0.51096 | 22.1 | 1012 | 24721 |
| tgk-fra | flores200-devtest | 0.48620 | 21.1 | 1012 | 28343 |
| tgk-por | flores200-devtest | 0.46870 | 19.4 | 1012 | 26519 |
| tgk-spa | flores200-devtest | 0.42689 | 15.1 | 1012 | 29199 |
| tpi-deu | flores200-devtest | 0.41078 | 11.1 | 1012 | 25094 |
| tpi-eng | flores200-devtest | 0.48619 | 20.1 | 1012 | 24721 |
| tpi-fra | flores200-devtest | 0.43850 | 16.3 | 1012 | 28343 |
| tpi-por | flores200-devtest | 0.43040 | 15.8 | 1012 | 26519 |
| ukr-deu | flores200-devtest | 0.55290 | 25.1 | 1012 | 25094 |
| ukr-eng | flores200-devtest | 0.62150 | 34.9 | 1012 | 24721 |
| ukr-fra | flores200-devtest | 0.59093 | 32.5 | 1012 | 28343 |
| ukr-por | flores200-devtest | 0.57706 | 30.7 | 1012 | 26519 |
| ukr-spa | flores200-devtest | 0.50128 | 21.8 | 1012 | 29199 |
| urd-deu | flores200-devtest | 0.45107 | 15.6 | 1012 | 25094 |
| urd-eng | flores200-devtest | 0.53130 | 25.0 | 1012 | 24721 |
| urd-fra | flores200-devtest | 0.48377 | 20.7 | 1012 | 28343 |
| urd-por | flores200-devtest | 0.45290 | 18.5 | 1012 | 26519 |
| urd-spa | flores200-devtest | 0.41342 | 13.8 | 1012 | 29199 |
| vec-deu | flores200-devtest | 0.48212 | 18.5 | 1012 | 25094 |
| vec-eng | flores200-devtest | 0.56243 | 29.3 | 1012 | 24721 |
| vec-fra | flores200-devtest | 0.53340 | 26.4 | 1012 | 28343 |
| vec-por | flores200-devtest | 0.52845 | 25.7 | 1012 | 26519 |
| vec-spa | flores200-devtest | 0.46136 | 17.9 | 1012 | 29199 |
| ces-eng | generaltest2022 | 0.64599 | 40.2 | 1448 | 30675 |
| deu-eng | generaltest2022 | 0.54993 | 29.8 | 1984 | 37634 |
| deu-fra | generaltest2022 | 0.59361 | 35.6 | 1984 | 38276 |
| eng-deu | generaltest2022 | 0.59885 | 31.9 | 2037 | 38914 |
| fra-deu | generaltest2022 | 0.64266 | 40.1 | 2006 | 37696 |
| rus-eng | generaltest2022 | 0.63746 | 37.8 | 2016 | 38529 |
| ukr-eng | generaltest2022 | 0.60704 | 35.9 | 2018 | 34242 |
| ces-deu | multi30k_test_2016_flickr | 0.56370 | 26.9 | 1000 | 12106 |
| ces-eng | multi30k_test_2016_flickr | 0.57217 | 32.7 | 1000 | 12955 |
| ces-fra | multi30k_test_2016_flickr | 0.57498 | 30.7 | 1000 | 13505 |
| deu-eng | multi30k_test_2016_flickr | 0.60234 | 39.1 | 1000 | 12955 |
| deu-fra | multi30k_test_2016_flickr | 0.60951 | 36.7 | 1000 | 13505 |
| eng-deu | multi30k_test_2016_flickr | 0.62191 | 32.5 | 1000 | 12106 |
| eng-fra | multi30k_test_2016_flickr | 0.69376 | 47.9 | 1000 | 13505 |
| fra-deu | multi30k_test_2016_flickr | 0.59597 | 29.3 | 1000 | 12106 |
| fra-eng | multi30k_test_2016_flickr | 0.64810 | 45.4 | 1000 | 12955 |
| deu-eng | multi30k_test_2017_flickr | 0.61895 | 38.9 | 1000 | 11374 |
| deu-fra | multi30k_test_2017_flickr | 0.60570 | 34.6 | 1000 | 12118 |
| eng-deu | multi30k_test_2017_flickr | 0.61458 | 32.1 | 1000 | 10755 |
| eng-fra | multi30k_test_2017_flickr | 0.69630 | 48.1 | 1000 | 12118 |
| fra-deu | multi30k_test_2017_flickr | 0.58207 | 27.7 | 1000 | 10755 |
| fra-eng | multi30k_test_2017_flickr | 0.67447 | 48.0 | 1000 | 11374 |
| deu-eng | multi30k_test_2017_mscoco | 0.54299 | 30.9 | 461 | 5231 |
| deu-fra | multi30k_test_2017_mscoco | 0.57789 | 32.3 | 461 | 5484 |
| eng-deu | multi30k_test_2017_mscoco | 0.56164 | 27.3 | 461 | 5158 |
| eng-fra | multi30k_test_2017_mscoco | 0.71453 | 51.9 | 461 | 5484 |
| fra-deu | multi30k_test_2017_mscoco | 0.53897 | 23.9 | 461 | 5158 |
| fra-eng | multi30k_test_2017_mscoco | 0.65274 | 46.5 | 461 | 5231 |
| ces-deu | multi30k_test_2018_flickr | 0.51543 | 22.4 | 1071 | 13703 |
| ces-eng | multi30k_test_2018_flickr | 0.57995 | 33.1 | 1071 | 14689 |
| ces-fra | multi30k_test_2018_flickr | 0.53232 | 26.0 | 1071 | 15867 |
| deu-eng | multi30k_test_2018_flickr | 0.58274 | 35.3 | 1071 | 14689 |
| deu-fra | multi30k_test_2018_flickr | 0.55809 | 29.3 | 1071 | 15867 |
| eng-deu | multi30k_test_2018_flickr | 0.58395 | 28.7 | 1071 | 13703 |
| eng-fra | multi30k_test_2018_flickr | 0.63770 | 39.3 | 1071 | 15867 |
| fra-deu | multi30k_test_2018_flickr | 0.53677 | 22.6 | 1071 | 13703 |
| fra-eng | multi30k_test_2018_flickr | 0.62909 | 41.0 | 1071 | 14689 |
| eng-fra | newsdiscusstest2015 | 0.62144 | 35.7 | 1500 | 27975 |
| fra-eng | newsdiscusstest2015 | 0.60513 | 37.5 | 1500 | 26982 |
| ces-deu | newssyscomb2009 | 0.52473 | 21.7 | 502 | 11271 |
| ces-eng | newssyscomb2009 | 0.55107 | 28.0 | 502 | 11818 |
| ces-fra | newssyscomb2009 | 0.56925 | 28.7 | 502 | 12331 |
| ces-spa | newssyscomb2009 | 0.56161 | 28.8 | 502 | 12503 |
| deu-eng | newssyscomb2009 | 0.55367 | 29.2 | 502 | 11818 |
| deu-fra | newssyscomb2009 | 0.55730 | 27.1 | 502 | 12331 |
| deu-spa | newssyscomb2009 | 0.54844 | 27.6 | 502 | 12503 |
| eng-deu | newssyscomb2009 | 0.53204 | 22.3 | 502 | 11271 |
| eng-fra | newssyscomb2009 | 0.57875 | 28.8 | 502 | 12331 |
| eng-spa | newssyscomb2009 | 0.57849 | 30.5 | 502 | 12503 |
| fra-deu | newssyscomb2009 | 0.52855 | 22.5 | 502 | 11271 |
| fra-eng | newssyscomb2009 | 0.57071 | 30.6 | 502 | 11818 |
| fra-spa | newssyscomb2009 | 0.60067 | 34.0 | 502 | 12503 |
| ita-deu | newssyscomb2009 | 0.53245 | 22.1 | 502 | 11271 |
| ita-eng | newssyscomb2009 | 0.59274 | 33.7 | 502 | 11818 |
| ita-fra | newssyscomb2009 | 0.61167 | 33.8 | 502 | 12331 |
| ita-spa | newssyscomb2009 | 0.60645 | 35.1 | 502 | 12503 |
| spa-deu | newssyscomb2009 | 0.52676 | 21.8 | 502 | 11271 |
| spa-fra | newssyscomb2009 | 0.61003 | 33.6 | 502 | 12331 |
| ces-deu | newstest2008 | 0.52450 | 21.6 | 2051 | 47447 |
| ces-eng | newstest2008 | 0.52805 | 24.9 | 2051 | 49380 |
| ces-fra | newstest2008 | 0.54135 | 25.4 | 2051 | 52685 |
| ces-spa | newstest2008 | 0.53925 | 26.2 | 2051 | 52586 |
| deu-eng | newstest2008 | 0.53756 | 26.2 | 2051 | 49380 |
| deu-fra | newstest2008 | 0.54147 | 25.5 | 2051 | 52685 |
| deu-spa | newstest2008 | 0.53296 | 24.8 | 2051 | 52586 |
| eng-deu | newstest2008 | 0.52399 | 22.4 | 2051 | 47447 |
| eng-fra | newstest2008 | 0.54809 | 26.1 | 2051 | 52685 |
| eng-spa | newstest2008 | 0.56027 | 29.1 | 2051 | 52586 |
| fra-deu | newstest2008 | 0.52211 | 21.8 | 2051 | 47447 |
| fra-eng | newstest2008 | 0.53878 | 26.1 | 2051 | 49380 |
| fra-spa | newstest2008 | 0.58122 | 32.5 | 2051 | 52586 |
| spa-deu | newstest2008 | 0.51468 | 20.9 | 2051 | 47447 |
| ces-deu | newstest2009 | 0.52537 | 22.4 | 2525 | 62816 |
| ces-eng | newstest2009 | 0.54467 | 27.1 | 2525 | 65399 |
| ces-fra | newstest2009 | 0.54545 | 26.1 | 2525 | 69263 |
| ces-spa | newstest2009 | 0.54339 | 26.3 | 2525 | 68111 |
| deu-eng | newstest2009 | 0.53323 | 25.9 | 2525 | 65399 |
| deu-fra | newstest2009 | 0.53408 | 25.0 | 2525 | 69263 |
| deu-spa | newstest2009 | 0.52999 | 24.4 | 2525 | 68111 |
| eng-deu | newstest2009 | 0.52387 | 21.5 | 2525 | 62816 |
| eng-fra | newstest2009 | 0.57057 | 28.7 | 2525 | 69263 |
| eng-spa | newstest2009 | 0.57376 | 29.6 | 2525 | 68111 |
| fra-deu | newstest2009 | 0.51980 | 21.6 | 2525 | 62816 |
| fra-eng | newstest2009 | 0.56151 | 29.5 | 2525 | 65399 |
| fra-spa | newstest2009 | 0.58173 | 31.4 | 2525 | 68111 |
| ita-deu | newstest2009 | 0.52409 | 22.1 | 2525 | 62816 |
| ita-eng | newstest2009 | 0.58598 | 32.9 | 2525 | 65399 |
| ita-fra | newstest2009 | 0.58722 | 31.5 | 2525 | 69263 |
| ita-spa | newstest2009 | 0.59235 | 33.1 | 2525 | 68111 |
| spa-deu | newstest2009 | 0.51708 | 20.7 | 2525 | 62816 |
| spa-eng | newstest2009 | 0.56094 | 29.2 | 2525 | 65399 |
| ces-deu | newstest2010 | 0.53608 | 23.5 | 2489 | 61503 |
| ces-eng | newstest2010 | 0.56348 | 28.8 | 2489 | 61711 |
| ces-fra | newstest2010 | 0.55510 | 27.2 | 2489 | 66022 |
| ces-spa | newstest2010 | 0.57375 | 30.6 | 2489 | 65480 |
| deu-eng | newstest2010 | 0.57666 | 29.8 | 2489 | 61711 |
| deu-fra | newstest2010 | 0.56822 | 28.2 | 2489 | 66022 |
| deu-spa | newstest2010 | 0.58446 | 31.5 | 2489 | 65480 |
| eng-deu | newstest2010 | 0.54037 | 24.8 | 2489 | 61503 |
| eng-fra | newstest2010 | 0.58935 | 31.2 | 2489 | 66022 |
| eng-spa | newstest2010 | 0.61230 | 35.6 | 2489 | 65480 |
| fra-deu | newstest2010 | 0.52993 | 23.2 | 2489 | 61503 |
| fra-eng | newstest2010 | 0.58580 | 31.7 | 2489 | 61711 |
| fra-spa | newstest2010 | 0.61883 | 36.8 | 2489 | 65480 |
| spa-deu | newstest2010 | 0.54232 | 24.8 | 2489 | 61503 |
| ces-deu | newstest2011 | 0.52042 | 22.2 | 3003 | 72981 |
| ces-eng | newstest2011 | 0.55380 | 27.8 | 3003 | 74681 |
| ces-fra | newstest2011 | 0.55651 | 28.0 | 3003 | 80626 |
| ces-spa | newstest2011 | 0.56004 | 29.9 | 3003 | 79476 |
| deu-eng | newstest2011 | 0.54263 | 25.8 | 3003 | 74681 |
| deu-fra | newstest2011 | 0.54883 | 26.4 | 3003 | 80626 |
| deu-spa | newstest2011 | 0.55738 | 29.1 | 3003 | 79476 |
| eng-deu | newstest2011 | 0.52251 | 22.4 | 3003 | 72981 |
| eng-fra | newstest2011 | 0.60292 | 33.3 | 3003 | 80626 |
| eng-spa | newstest2011 | 0.61355 | 37.6 | 3003 | 79476 |
| fra-deu | newstest2011 | 0.52082 | 22.1 | 3003 | 72981 |
| fra-eng | newstest2011 | 0.58971 | 32.3 | 3003 | 74681 |
| fra-spa | newstest2011 | 0.62318 | 38.7 | 3003 | 79476 |
| spa-fra | newstest2011 | 0.60467 | 34.0 | 3003 | 80626 |
| ces-deu | newstest2012 | 0.52126 | 22.9 | 3003 | 72886 |
| ces-eng | newstest2012 | 0.54980 | 27.0 | 3003 | 72812 |
| ces-fra | newstest2012 | 0.55088 | 26.8 | 3003 | 78011 |
| ces-spa | newstest2012 | 0.55950 | 29.9 | 3003 | 79006 |
| deu-eng | newstest2012 | 0.55507 | 27.5 | 3003 | 72812 |
| deu-fra | newstest2012 | 0.55160 | 26.6 | 3003 | 78011 |
| deu-spa | newstest2012 | 0.56307 | 30.1 | 3003 | 79006 |
| eng-deu | newstest2012 | 0.52121 | 22.9 | 3003 | 72886 |
| eng-fra | newstest2012 | 0.58675 | 30.8 | 3003 | 78011 |
| eng-spa | newstest2012 | 0.61689 | 37.9 | 3003 | 79006 |
| fra-deu | newstest2012 | 0.52009 | 23.2 | 3003 | 72886 |
| fra-eng | newstest2012 | 0.58405 | 32.3 | 3003 | 72812 |
| fra-spa | newstest2012 | 0.62038 | 38.5 | 3003 | 79006 |
| rus-deu | newstest2012 | 0.47965 | 18.3 | 3003 | 72886 |
| rus-eng | newstest2012 | 0.61258 | 36.1 | 3003 | 72812 |
| rus-fra | newstest2012 | 0.52674 | 24.2 | 3003 | 78011 |
| rus-spa | newstest2012 | 0.53760 | 27.4 | 3003 | 79006 |
| ces-deu | newstest2013 | 0.54483 | 25.3 | 3000 | 63737 |
| ces-eng | newstest2013 | 0.57212 | 30.7 | 3000 | 64505 |
| ces-fra | newstest2013 | 0.55258 | 28.4 | 3000 | 70037 |
| ces-spa | newstest2013 | 0.56179 | 30.6 | 3000 | 70528 |
| deu-eng | newstest2013 | 0.57382 | 31.0 | 3000 | 64505 |
| deu-fra | newstest2013 | 0.55576 | 28.8 | 3000 | 70037 |
| deu-spa | newstest2013 | 0.56220 | 30.9 | 3000 | 70528 |
| eng-deu | newstest2013 | 0.54830 | 26.6 | 3000 | 63737 |
| eng-fra | newstest2013 | 0.58195 | 32.6 | 3000 | 70037 |
| eng-spa | newstest2013 | 0.59254 | 34.6 | 3000 | 70528 |
| fra-deu | newstest2013 | 0.53465 | 24.6 | 3000 | 63737 |
| fra-eng | newstest2013 | 0.58395 | 32.9 | 3000 | 64505 |
| fra-spa | newstest2013 | 0.58748 | 34.1 | 3000 | 70528 |
| rus-deu | newstest2013 | 0.51980 | 22.4 | 3000 | 63737 |
| rus-eng | newstest2013 | 0.55557 | 28.9 | 3000 | 64505 |
| rus-fra | newstest2013 | 0.54627 | 27.6 | 3000 | 70037 |
| rus-spa | newstest2013 | 0.55540 | 30.5 | 3000 | 70528 |
| spa-deu | newstest2013 | 0.53925 | 24.8 | 3000 | 63737 |
| ces-eng | newstest2014 | 0.61449 | 33.9 | 3003 | 68065 |
| deu-eng | newstest2014 | 0.58733 | 32.1 | 3003 | 67337 |
| eng-deu | newstest2014 | 0.57701 | 26.5 | 3003 | 62688 |
| eng-fra | newstest2014 | 0.63976 | 38.1 | 3003 | 77306 |
| fra-eng | newstest2014 | 0.62627 | 36.8 | 3003 | 70708 |
| hin-eng | newstest2014 | 0.56343 | 26.4 | 2507 | 55571 |
| rus-eng | newstest2014 | 0.62633 | 36.6 | 3003 | 69210 |
| ces-eng | newstest2015 | 0.56562 | 30.7 | 2656 | 53569 |
| deu-eng | newstest2015 | 0.59036 | 33.3 | 2169 | 46443 |
| eng-deu | newstest2015 | 0.58604 | 30.1 | 2169 | 44260 |
| rus-eng | newstest2015 | 0.58794 | 32.5 | 2818 | 64428 |
| ces-eng | newstest2016 | 0.58896 | 32.6 | 2999 | 64670 |
| deu-eng | newstest2016 | 0.63945 | 39.4 | 2999 | 64119 |
| eng-deu | newstest2016 | 0.62731 | 35.9 | 2999 | 62669 |
| ron-eng | newstest2016 | 0.63051 | 38.1 | 1999 | 47562 |
| rus-eng | newstest2016 | 0.58858 | 32.5 | 2998 | 69278 |
| ces-eng | newstest2017 | 0.55759 | 29.0 | 3005 | 61721 |
| deu-eng | newstest2017 | 0.60252 | 34.8 | 3004 | 64399 |
| eng-deu | newstest2017 | 0.57779 | 28.7 | 3004 | 61287 |
| lav-eng | newstest2017 | 0.51103 | 20.2 | 2001 | 47511 |
| rus-eng | newstest2017 | 0.61663 | 36.1 | 3001 | 69025 |
| ces-eng | newstest2018 | 0.56663 | 29.6 | 2983 | 63495 |
| deu-eng | newstest2018 | 0.65768 | 41.8 | 2998 | 67012 |
| eng-deu | newstest2018 | 0.67590 | 43.5 | 2998 | 64276 |
| rus-eng | newstest2018 | 0.58427 | 31.5 | 3000 | 71291 |
| ces-deu | newstest2019 | 0.53405 | 23.8 | 1997 | 48746 |
| deu-eng | newstest2019 | 0.62158 | 37.7 | 2000 | 39227 |
| deu-fra | newstest2019 | 0.61819 | 34.4 | 1701 | 42509 |
| eng-deu | newstest2019 | 0.64640 | 39.8 | 1997 | 48746 |
| fra-deu | newstest2019 | 0.59291 | 27.6 | 1701 | 36446 |
| guj-eng | newstest2019 | 0.51165 | 22.5 | 1016 | 17757 |
| lit-eng | newstest2019 | 0.58019 | 29.1 | 1000 | 25878 |
| rus-eng | newstest2019 | 0.62499 | 37.8 | 2000 | 42642 |
| deu-eng | newstest2020 | 0.56495 | 30.9 | 785 | 38220 |
| deu-fra | newstest2020 | 0.59211 | 31.6 | 1619 | 36890 |
| eng-deu | newstest2020 | 0.58436 | 30.2 | 1418 | 52383 |
| fra-deu | newstest2020 | 0.59478 | 26.6 | 1619 | 30265 |
| pol-eng | newstest2020 | 0.56674 | 27.7 | 1001 | 21755 |
| rus-eng | newstest2020 | 0.62387 | 33.6 | 991 | 20217 |
| ces-eng | newstest2021 | 0.54943 | 25.6 | 1000 | 22056 |
| deu-eng | newstest2021 | 0.58675 | 30.5 | 1000 | 20180 |
| deu-fra | newstest2021 | 0.57690 | 30.0 | 1000 | 23757 |
| eng-deu | newstest2021 | 0.55381 | 24.9 | 1002 | 27970 |
| fra-deu | newstest2021 | 0.63942 | 37.2 | 1026 | 26077 |
| isl-eng | newstest2021 | 0.53701 | 29.2 | 1000 | 22529 |
| rus-eng | newstest2021 | 0.60760 | 33.7 | 1000 | 21228 |
| deu-eng | newstestALL2020 | 0.56898 | 30.8 | 785 | 38220 |
| eng-deu | newstestALL2020 | 0.58436 | 30.2 | 1418 | 52383 |
| rus-eng | newstestALL2020 | 0.62387 | 33.6 | 991 | 20217 |
| deu-eng | newstestB2020 | 0.56571 | 30.3 | 785 | 37696 |
| eng-deu | newstestB2020 | 0.57458 | 29.7 | 1418 | 53092 |
| rus-eng | newstestB2020 | 0.62934 | 35.5 | 991 | 20423 |
| afr-deu | ntrex128 | 0.54806 | 25.7 | 1997 | 48761 |
| afr-eng | ntrex128 | 0.71452 | 50.6 | 1997 | 47673 |
| afr-fra | ntrex128 | 0.55624 | 28.2 | 1997 | 53481 |
| afr-por | ntrex128 | 0.54364 | 26.9 | 1997 | 51631 |
| afr-spa | ntrex128 | 0.57498 | 32.3 | 1997 | 54107 |
| bel-deu | ntrex128 | 0.48215 | 17.8 | 1997 | 48761 |
| bel-eng | ntrex128 | 0.55146 | 26.7 | 1997 | 47673 |
| bel-fra | ntrex128 | 0.49288 | 20.4 | 1997 | 53481 |
| bel-por | ntrex128 | 0.48488 | 19.9 | 1997 | 51631 |
| bel-spa | ntrex128 | 0.50933 | 23.7 | 1997 | 54107 |
| ben-deu | ntrex128 | 0.43995 | 13.7 | 1997 | 48761 |
| ben-eng | ntrex128 | 0.53312 | 24.9 | 1997 | 47673 |
| ben-fra | ntrex128 | 0.45297 | 17.1 | 1997 | 53481 |
| ben-por | ntrex128 | 0.44323 | 15.5 | 1997 | 51631 |
| ben-spa | ntrex128 | 0.46993 | 19.5 | 1997 | 54107 |
| bul-deu | ntrex128 | 0.51786 | 20.9 | 1997 | 48761 |
| bul-eng | ntrex128 | 0.59510 | 31.3 | 1997 | 47673 |
| bul-fra | ntrex128 | 0.53787 | 25.4 | 1997 | 53481 |
| bul-por | ntrex128 | 0.52650 | 24.2 | 1997 | 51631 |
| bul-spa | ntrex128 | 0.54950 | 28.4 | 1997 | 54107 |
| cat-deu | ntrex128 | 0.52907 | 22.5 | 1997 | 48761 |
| cat-eng | ntrex128 | 0.62247 | 34.6 | 1997 | 47673 |
| cat-fra | ntrex128 | 0.55858 | 27.5 | 1997 | 53481 |
| cat-por | ntrex128 | 0.55916 | 28.3 | 1997 | 51631 |
| cat-spa | ntrex128 | 0.61209 | 35.6 | 1997 | 54107 |
| ces-deu | ntrex128 | 0.52704 | 22.5 | 1997 | 48761 |
| ces-eng | ntrex128 | 0.60742 | 33.1 | 1997 | 47673 |
| ces-fra | ntrex128 | 0.54283 | 26.3 | 1997 | 53481 |
| ces-por | ntrex128 | 0.52392 | 24.1 | 1997 | 51631 |
| ces-spa | ntrex128 | 0.55467 | 28.9 | 1997 | 54107 |
| cym-deu | ntrex128 | 0.48064 | 19.1 | 1997 | 48761 |
| cym-eng | ntrex128 | 0.60592 | 34.7 | 1997 | 47673 |
| cym-fra | ntrex128 | 0.50667 | 23.9 | 1997 | 53481 |
| cym-por | ntrex128 | 0.48189 | 20.5 | 1997 | 51631 |
| cym-spa | ntrex128 | 0.52160 | 26.7 | 1997 | 54107 |
| dan-deu | ntrex128 | 0.53284 | 24.4 | 1997 | 48761 |
| dan-eng | ntrex128 | 0.62092 | 37.5 | 1997 | 47673 |
| dan-fra | ntrex128 | 0.53068 | 25.4 | 1997 | 53481 |
| dan-por | ntrex128 | 0.52754 | 26.2 | 1997 | 51631 |
| dan-spa | ntrex128 | 0.55304 | 29.8 | 1997 | 54107 |
| deu-eng | ntrex128 | 0.61371 | 33.7 | 1997 | 47673 |
| deu-fra | ntrex128 | 0.54844 | 27.4 | 1997 | 53481 |
| deu-por | ntrex128 | 0.53694 | 25.3 | 1997 | 51631 |
| deu-spa | ntrex128 | 0.56148 | 29.8 | 1997 | 54107 |
| ell-deu | ntrex128 | 0.51567 | 21.1 | 1997 | 48761 |
| ell-eng | ntrex128 | 0.60389 | 34.0 | 1997 | 47673 |
| ell-fra | ntrex128 | 0.53343 | 25.1 | 1997 | 53481 |
| ell-por | ntrex128 | 0.53030 | 25.9 | 1997 | 51631 |
| ell-spa | ntrex128 | 0.55542 | 29.7 | 1997 | 54107 |
| eng-deu | ntrex128 | 0.57592 | 28.9 | 1997 | 48761 |
| eng-fra | ntrex128 | 0.60159 | 33.9 | 1997 | 53481 |
| eng-por | ntrex128 | 0.59020 | 32.6 | 1997 | 51631 |
| eng-spa | ntrex128 | 0.62826 | 38.6 | 1997 | 54107 |
| fao-deu | ntrex128 | 0.42717 | 16.1 | 1997 | 48761 |
| fao-eng | ntrex128 | 0.48210 | 24.5 | 1997 | 47673 |
| fao-fra | ntrex128 | 0.40770 | 16.9 | 1997 | 53481 |
| fao-por | ntrex128 | 0.40603 | 16.2 | 1997 | 51631 |
| fao-spa | ntrex128 | 0.42980 | 18.8 | 1997 | 54107 |
| fas-deu | ntrex128 | 0.47062 | 15.7 | 1997 | 48761 |
| fas-eng | ntrex128 | 0.53552 | 24.0 | 1997 | 47673 |
| fas-fra | ntrex128 | 0.48958 | 20.1 | 1997 | 53481 |
| fas-por | ntrex128 | 0.47091 | 18.3 | 1997 | 51631 |
| fas-spa | ntrex128 | 0.49946 | 22.5 | 1997 | 54107 |
| fra-deu | ntrex128 | 0.52037 | 22.1 | 1997 | 48761 |
| fra-eng | ntrex128 | 0.59918 | 32.7 | 1997 | 47673 |
| fra-por | ntrex128 | 0.53484 | 25.0 | 1997 | 51631 |
| fra-spa | ntrex128 | 0.56500 | 30.3 | 1997 | 54107 |
| gle-deu | ntrex128 | 0.45357 | 16.0 | 1997 | 48761 |
| gle-eng | ntrex128 | 0.54960 | 27.0 | 1997 | 47673 |
| gle-fra | ntrex128 | 0.47041 | 18.7 | 1997 | 53481 |
| gle-por | ntrex128 | 0.45725 | 17.5 | 1997 | 51631 |
| gle-spa | ntrex128 | 0.48897 | 22.4 | 1997 | 54107 |
| glg-deu | ntrex128 | 0.52710 | 22.4 | 1997 | 48761 |
| glg-eng | ntrex128 | 0.63076 | 37.0 | 1997 | 47673 |
| glg-fra | ntrex128 | 0.55231 | 27.2 | 1997 | 53481 |
| glg-por | ntrex128 | 0.56272 | 28.9 | 1997 | 51631 |
| glg-spa | ntrex128 | 0.61675 | 36.6 | 1997 | 54107 |
| guj-deu | ntrex128 | 0.40361 | 11.9 | 1997 | 48761 |
| guj-eng | ntrex128 | 0.52283 | 23.0 | 1997 | 47673 |
| guj-fra | ntrex128 | 0.41597 | 14.7 | 1997 | 53481 |
| guj-por | ntrex128 | 0.40085 | 13.0 | 1997 | 51631 |
| guj-spa | ntrex128 | 0.44800 | 18.3 | 1997 | 54107 |
| hin-deu | ntrex128 | 0.45618 | 14.4 | 1997 | 48761 |
| hin-eng | ntrex128 | 0.57183 | 27.9 | 1997 | 47673 |
| hin-fra | ntrex128 | 0.47504 | 18.5 | 1997 | 53481 |
| hin-por | ntrex128 | 0.45829 | 16.9 | 1997 | 51631 |
| hin-spa | ntrex128 | 0.48784 | 21.4 | 1997 | 54107 |
| hrv-deu | ntrex128 | 0.53567 | 23.2 | 1997 | 48761 |
| hrv-eng | ntrex128 | 0.61932 | 34.8 | 1997 | 47673 |
| hrv-fra | ntrex128 | 0.55306 | 27.6 | 1997 | 53481 |
| hrv-por | ntrex128 | 0.53968 | 26.3 | 1997 | 51631 |
| hrv-spa | ntrex128 | 0.56765 | 30.4 | 1997 | 54107 |
| hye-deu | ntrex128 | 0.42987 | 14.0 | 1997 | 48761 |
| hye-eng | ntrex128 | 0.49189 | 20.9 | 1997 | 47673 |
| hye-fra | ntrex128 | 0.44434 | 17.2 | 1997 | 53481 |
| hye-por | ntrex128 | 0.43069 | 16.0 | 1997 | 51631 |
| hye-spa | ntrex128 | 0.45889 | 19.5 | 1997 | 54107 |
| isl-deu | ntrex128 | 0.48392 | 19.5 | 1997 | 48761 |
| isl-eng | ntrex128 | 0.54720 | 27.5 | 1997 | 47673 |
| isl-fra | ntrex128 | 0.49971 | 22.5 | 1997 | 53481 |
| isl-por | ntrex128 | 0.47811 | 20.2 | 1997 | 51631 |
| isl-spa | ntrex128 | 0.51060 | 25.1 | 1997 | 54107 |
| ita-deu | ntrex128 | 0.53354 | 23.3 | 1997 | 48761 |
| ita-eng | ntrex128 | 0.63069 | 37.1 | 1997 | 47673 |
| ita-fra | ntrex128 | 0.56721 | 29.1 | 1997 | 53481 |
| ita-por | ntrex128 | 0.56298 | 28.9 | 1997 | 51631 |
| ita-spa | ntrex128 | 0.58483 | 32.6 | 1997 | 54107 |
| lav-deu | ntrex128 | 0.48637 | 17.5 | 1997 | 48761 |
| lav-eng | ntrex128 | 0.55909 | 25.5 | 1997 | 47673 |
| lav-fra | ntrex128 | 0.49579 | 20.4 | 1997 | 53481 |
| lav-por | ntrex128 | 0.47936 | 18.9 | 1997 | 51631 |
| lav-spa | ntrex128 | 0.51105 | 23.3 | 1997 | 54107 |
| lit-deu | ntrex128 | 0.49203 | 18.0 | 1997 | 48761 |
| lit-eng | ntrex128 | 0.55075 | 25.7 | 1997 | 47673 |
| lit-fra | ntrex128 | 0.50667 | 21.9 | 1997 | 53481 |
| lit-por | ntrex128 | 0.49771 | 20.8 | 1997 | 51631 |
| lit-spa | ntrex128 | 0.52333 | 24.8 | 1997 | 54107 |
| ltz-deu | ntrex128 | 0.51232 | 22.0 | 1997 | 48761 |
| ltz-eng | ntrex128 | 0.58218 | 32.4 | 1997 | 47673 |
| ltz-fra | ntrex128 | 0.49182 | 21.6 | 1997 | 53481 |
| ltz-por | ntrex128 | 0.46871 | 20.3 | 1997 | 51631 |
| ltz-spa | ntrex128 | 0.48975 | 23.6 | 1997 | 54107 |
| mar-deu | ntrex128 | 0.42225 | 12.5 | 1997 | 48761 |
| mar-eng | ntrex128 | 0.51583 | 22.2 | 1997 | 47673 |
| mar-fra | ntrex128 | 0.43088 | 15.1 | 1997 | 53481 |
| mar-por | ntrex128 | 0.42394 | 14.6 | 1997 | 51631 |
| mar-spa | ntrex128 | 0.44945 | 17.7 | 1997 | 54107 |
| mkd-deu | ntrex128 | 0.52537 | 21.8 | 1997 | 48761 |
| mkd-eng | ntrex128 | 0.62757 | 35.8 | 1997 | 47673 |
| mkd-fra | ntrex128 | 0.54428 | 26.4 | 1997 | 53481 |
| mkd-por | ntrex128 | 0.52919 | 24.5 | 1997 | 51631 |
| mkd-spa | ntrex128 | 0.56365 | 30.0 | 1997 | 54107 |
| nep-deu | ntrex128 | 0.40783 | 11.6 | 1997 | 48761 |
| nep-eng | ntrex128 | 0.51242 | 23.1 | 1997 | 47673 |
| nep-fra | ntrex128 | 0.41414 | 14.5 | 1997 | 53481 |
| nep-por | ntrex128 | 0.41356 | 13.8 | 1997 | 51631 |
| nep-spa | ntrex128 | 0.43667 | 17.0 | 1997 | 54107 |
| nld-deu | ntrex128 | 0.55633 | 25.3 | 1997 | 48761 |
| nld-eng | ntrex128 | 0.63172 | 36.0 | 1997 | 47673 |
| nld-fra | ntrex128 | 0.55161 | 27.1 | 1997 | 53481 |
| nld-por | ntrex128 | 0.54074 | 26.8 | 1997 | 51631 |
| nld-spa | ntrex128 | 0.57106 | 31.7 | 1997 | 54107 |
| nno-deu | ntrex128 | 0.52489 | 23.9 | 1997 | 48761 |
| nno-eng | ntrex128 | 0.64889 | 41.6 | 1997 | 47673 |
| nno-fra | ntrex128 | 0.53358 | 26.2 | 1997 | 53481 |
| nno-por | ntrex128 | 0.52089 | 24.7 | 1997 | 51631 |
| nno-spa | ntrex128 | 0.54863 | 29.4 | 1997 | 54107 |
| nob-deu | ntrex128 | 0.54650 | 25.5 | 1997 | 48761 |
| nob-eng | ntrex128 | 0.64444 | 39.3 | 1997 | 47673 |
| nob-fra | ntrex128 | 0.55024 | 28.0 | 1997 | 53481 |
| nob-por | ntrex128 | 0.53537 | 25.9 | 1997 | 51631 |
| nob-spa | ntrex128 | 0.56899 | 31.4 | 1997 | 54107 |
| pan-deu | ntrex128 | 0.40429 | 11.6 | 1997 | 48761 |
| pan-eng | ntrex128 | 0.49942 | 20.6 | 1997 | 47673 |
| pan-fra | ntrex128 | 0.41440 | 14.8 | 1997 | 53481 |
| pan-spa | ntrex128 | 0.42840 | 16.6 | 1997 | 54107 |
| pol-deu | ntrex128 | 0.50884 | 20.4 | 1997 | 48761 |
| pol-eng | ntrex128 | 0.55781 | 26.2 | 1997 | 47673 |
| pol-fra | ntrex128 | 0.52511 | 23.9 | 1997 | 53481 |
| pol-por | ntrex128 | 0.50796 | 21.8 | 1997 | 51631 |
| pol-spa | ntrex128 | 0.53122 | 25.6 | 1997 | 54107 |
| por-deu | ntrex128 | 0.54003 | 23.7 | 1997 | 48761 |
| por-eng | ntrex128 | 0.63798 | 37.6 | 1997 | 47673 |
| por-fra | ntrex128 | 0.56317 | 28.3 | 1997 | 53481 |
| por-spa | ntrex128 | 0.59244 | 33.9 | 1997 | 54107 |
| prs-deu | ntrex128 | 0.44878 | 14.3 | 1997 | 48761 |
| prs-eng | ntrex128 | 0.52855 | 24.2 | 1997 | 47673 |
| prs-fra | ntrex128 | 0.46323 | 17.6 | 1997 | 53481 |
| prs-por | ntrex128 | 0.45211 | 16.9 | 1997 | 51631 |
| prs-spa | ntrex128 | 0.47595 | 20.5 | 1997 | 54107 |
| pus-eng | ntrex128 | 0.40630 | 13.0 | 1997 | 47673 |
| ron-deu | ntrex128 | 0.52534 | 21.6 | 1997 | 48761 |
| ron-eng | ntrex128 | 0.60733 | 32.2 | 1997 | 47673 |
| ron-fra | ntrex128 | 0.55222 | 26.1 | 1997 | 53481 |
| ron-por | ntrex128 | 0.54549 | 26.4 | 1997 | 51631 |
| ron-spa | ntrex128 | 0.57503 | 31.6 | 1997 | 54107 |
| rus-deu | ntrex128 | 0.49519 | 18.5 | 1997 | 48761 |
| rus-eng | ntrex128 | 0.55126 | 25.6 | 1997 | 47673 |
| rus-fra | ntrex128 | 0.51684 | 22.8 | 1997 | 53481 |
| rus-por | ntrex128 | 0.49329 | 20.4 | 1997 | 51631 |
| rus-spa | ntrex128 | 0.52316 | 24.8 | 1997 | 54107 |
| slk-deu | ntrex128 | 0.52066 | 22.0 | 1997 | 48761 |
| slk-eng | ntrex128 | 0.60940 | 33.0 | 1997 | 47673 |
| slk-fra | ntrex128 | 0.53303 | 25.8 | 1997 | 53481 |
| slk-por | ntrex128 | 0.51245 | 23.0 | 1997 | 51631 |
| slk-spa | ntrex128 | 0.54489 | 28.3 | 1997 | 54107 |
| slv-deu | ntrex128 | 0.52189 | 22.0 | 1997 | 48761 |
| slv-eng | ntrex128 | 0.58552 | 30.4 | 1997 | 47673 |
| slv-fra | ntrex128 | 0.53247 | 25.3 | 1997 | 53481 |
| slv-por | ntrex128 | 0.51817 | 23.4 | 1997 | 51631 |
| slv-spa | ntrex128 | 0.54582 | 27.7 | 1997 | 54107 |
| spa-fra | ntrex128 | 0.56549 | 28.3 | 1997 | 53481 |
| spa-por | ntrex128 | 0.56372 | 28.5 | 1997 | 51631 |
| sqi-deu | ntrex128 | 0.52259 | 21.7 | 1997 | 48761 |
| sqi-eng | ntrex128 | 0.62439 | 36.2 | 1997 | 47673 |
| sqi-fra | ntrex128 | 0.54643 | 26.2 | 1997 | 53481 |
| sqi-por | ntrex128 | 0.53857 | 26.2 | 1997 | 51631 |
| sqi-spa | ntrex128 | 0.56804 | 30.8 | 1997 | 54107 |
| srp_Cyrl-deu | ntrex128 | 0.48837 | 18.6 | 1997 | 48761 |
| srp_Cyrl-eng | ntrex128 | 0.54292 | 24.5 | 1997 | 47673 |
| srp_Cyrl-fra | ntrex128 | 0.48977 | 21.5 | 1997 | 53481 |
| srp_Cyrl-por | ntrex128 | 0.48429 | 20.5 | 1997 | 51631 |
| srp_Cyrl-spa | ntrex128 | 0.51373 | 24.9 | 1997 | 54107 |
| swe-deu | ntrex128 | 0.54871 | 25.9 | 1997 | 48761 |
| swe-eng | ntrex128 | 0.65427 | 41.2 | 1997 | 47673 |
| swe-fra | ntrex128 | 0.55294 | 28.2 | 1997 | 53481 |
| swe-por | ntrex128 | 0.53911 | 26.7 | 1997 | 51631 |
| swe-spa | ntrex128 | 0.57293 | 31.9 | 1997 | 54107 |
| tgk_Cyrl-deu | ntrex128 | 0.40503 | 11.8 | 1997 | 48761 |
| tgk_Cyrl-eng | ntrex128 | 0.45221 | 16.4 | 1997 | 47673 |
| tgk_Cyrl-fra | ntrex128 | 0.41930 | 14.4 | 1997 | 53481 |
| tgk_Cyrl-por | ntrex128 | 0.40576 | 12.7 | 1997 | 51631 |
| tgk_Cyrl-spa | ntrex128 | 0.43095 | 16.4 | 1997 | 54107 |
| ukr-deu | ntrex128 | 0.49644 | 18.5 | 1997 | 48761 |
| ukr-eng | ntrex128 | 0.55193 | 25.7 | 1997 | 47673 |
| ukr-fra | ntrex128 | 0.50914 | 21.8 | 1997 | 53481 |
| ukr-por | ntrex128 | 0.49879 | 21.3 | 1997 | 51631 |
| ukr-spa | ntrex128 | 0.52640 | 25.6 | 1997 | 54107 |
| urd-deu | ntrex128 | 0.43742 | 14.1 | 1997 | 48761 |
| urd-eng | ntrex128 | 0.52486 | 23.8 | 1997 | 47673 |
| urd-fra | ntrex128 | 0.45409 | 17.4 | 1997 | 53481 |
| urd-por | ntrex128 | 0.42660 | 14.6 | 1997 | 51631 |
| urd-spa | ntrex128 | 0.46414 | 19.4 | 1997 | 54107 |
| ben-eng | tico19-test | 0.55418 | 27.3 | 2100 | 56824 |
| ben-fra | tico19-test | 0.45176 | 18.3 | 2100 | 64661 |
| ben-por | tico19-test | 0.49778 | 20.9 | 2100 | 62729 |
| ben-spa | tico19-test | 0.51344 | 25.8 | 2100 | 66563 |
| eng-fra | tico19-test | 0.62001 | 38.2 | 2100 | 64661 |
| eng-por | tico19-test | 0.71654 | 48.3 | 2100 | 62729 |
| eng-spa | tico19-test | 0.71947 | 50.2 | 2100 | 66563 |
| fas-eng | tico19-test | 0.58617 | 31.6 | 2100 | 56315 |
| fas-fra | tico19-test | 0.50453 | 23.9 | 2100 | 64661 |
| fas-por | tico19-test | 0.55031 | 28.1 | 2100 | 62729 |
| fas-spa | tico19-test | 0.56113 | 29.9 | 2100 | 66563 |
| fra-eng | tico19-test | 0.60512 | 35.8 | 2100 | 56323 |
| fra-por | tico19-test | 0.57530 | 33.0 | 2100 | 62729 |
| fra-spa | tico19-test | 0.58823 | 35.6 | 2100 | 66563 |
| hin-eng | tico19-test | 0.64146 | 39.6 | 2100 | 56323 |
| hin-fra | tico19-test | 0.51582 | 25.4 | 2100 | 64661 |
| hin-por | tico19-test | 0.57182 | 30.9 | 2100 | 62729 |
| hin-spa | tico19-test | 0.58341 | 33.7 | 2100 | 66563 |
| mar-eng | tico19-test | 0.51194 | 21.4 | 2100 | 56315 |
| mar-fra | tico19-test | 0.43359 | 16.8 | 2100 | 64661 |
| mar-por | tico19-test | 0.47089 | 20.3 | 2100 | 62729 |
| mar-spa | tico19-test | 0.48435 | 22.8 | 2100 | 66563 |
| nep-eng | tico19-test | 0.57060 | 30.1 | 2100 | 56824 |
| nep-fra | tico19-test | 0.46212 | 19.7 | 2100 | 64661 |
| nep-por | tico19-test | 0.51024 | 24.0 | 2100 | 62729 |
| nep-spa | tico19-test | 0.51651 | 25.9 | 2100 | 66563 |
| por-eng | tico19-test | 0.72228 | 47.4 | 2100 | 56315 |
| por-fra | tico19-test | 0.58934 | 33.4 | 2100 | 64661 |
| por-spa | tico19-test | 0.67509 | 44.1 | 2100 | 66563 |
| prs-eng | tico19-test | 0.54979 | 26.6 | 2100 | 56824 |
| prs-fra | tico19-test | 0.47627 | 21.0 | 2100 | 64661 |
| prs-por | tico19-test | 0.52000 | 25.6 | 2100 | 62729 |
| prs-spa | tico19-test | 0.54172 | 28.5 | 2100 | 66563 |
| pus-eng | tico19-test | 0.48655 | 23.1 | 2100 | 56315 |
| pus-fra | tico19-test | 0.40980 | 16.2 | 2100 | 64661 |
| pus-por | tico19-test | 0.44879 | 19.5 | 2100 | 62729 |
| pus-spa | tico19-test | 0.45280 | 20.4 | 2100 | 66563 |
| rus-eng | tico19-test | 0.59787 | 30.4 | 2100 | 56323 |
| rus-fra | tico19-test | 0.52211 | 24.1 | 2100 | 64661 |
| rus-por | tico19-test | 0.56473 | 26.9 | 2100 | 62729 |
| rus-spa | tico19-test | 0.58626 | 31.1 | 2100 | 66563 |
| spa-fra | tico19-test | 0.59078 | 33.1 | 2100 | 64661 |
| urd-eng | tico19-test | 0.51957 | 25.0 | 2100 | 56315 |
| urd-fra | tico19-test | 0.43707 | 17.2 | 2100 | 64661 |
| urd-por | tico19-test | 0.47484 | 20.1 | 2100 | 62729 |
| urd-spa | tico19-test | 0.48812 | 22.4 | 2100 | 66563 |
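The chr-F and BLEU columns above follow the standard sacrebleu definitions, with chr-F reported on a 0–1 scale. Below is a minimal sketch of how both metrics can be recomputed for one language pair, assuming system hypotheses and references for a given test set are available as plain-text files (the file names here are hypothetical):

```python
# Minimal sketch: recompute BLEU and chr-F for one language pair with sacrebleu.
# Assumes one segment per line, hypotheses and references aligned line-by-line.
import sacrebleu

with open("hyp.afr-deu.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("ref.afr-deu.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs]) # BLEU column (0-100)
chrf = sacrebleu.corpus_chrf(hyps, [refs]) # chr-F; sacrebleu reports 0-100
print(f"BLEU = {bleu.score:.1f}  chr-F = {chrf.score / 100:.5f}")
```

Note that sacrebleu reports chr-F on a 0–100 scale, so it is divided by 100 to match the convention used in the tables above.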
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 11:49:33 EEST 2024
* port machine: LM0-400-22516.local
"value": 0.46674, "name": "chr-F"}, {"type": "bleu", "value": 17.8, "name": "BLEU"}, {"type": "chrf", "value": 0.43685, "name": "chr-F"}, {"type": "bleu", "value": 16.3, "name": "BLEU"}, {"type": "chrf", "value": 0.42699, "name": "chr-F"}, {"type": "bleu", "value": 13.7, "name": "BLEU"}, {"type": "chrf", "value": 0.39587, "name": "chr-F"}, {"type": "bleu", "value": 21.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51669, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.57849, "name": "chr-F"}, {"type": "bleu", "value": 29.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55896, "name": "chr-F"}, {"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5396, "name": "chr-F"}, {"type": "bleu", "value": 19.7, "name": "BLEU"}, {"type": "chrf", "value": 0.4812, "name": "chr-F"}, {"type": "bleu", "value": 14.2, "name": "BLEU"}, {"type": "chrf", "value": 0.44732, "name": "chr-F"}, {"type": "bleu", "value": 23.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5171, "name": "chr-F"}, {"type": "bleu", "value": 21.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49129, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49153, "name": "chr-F"}, {"type": "bleu", "value": 15.4, "name": "BLEU"}, {"type": "chrf", "value": 0.43363, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.58897, "name": "chr-F"}, {"type": "bleu", "value": 36.2, "name": "BLEU"}, {"type": "chrf", "value": 0.6225, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5746, "name": "chr-F"}, {"type": "bleu", "value": 27.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53674, "name": "chr-F"}, {"type": "bleu", "value": 18.8, "name": "BLEU"}, {"type": "chrf", "value": 0.46048, "name": "chr-F"}, {"type": "bleu", "value": 18.9, "name": "BLEU"}, {"type": "chrf", "value": 0.49176, "name": "chr-F"}, {"type": "bleu", "value": 32.2, "name": "BLEU"}, {"type": "chrf", "value": 0.59691, "name": "chr-F"}, {"type": "bleu", "value": 24.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52068, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52006, "name": "chr-F"}, {"type": "bleu", "value": 16.5, "name": "BLEU"}, {"type": "chrf", "value": 0.44945, "name": "chr-F"}, {"type": "bleu", "value": 16.5, "name": "BLEU"}, {"type": "chrf", "value": 0.46893, "name": "chr-F"}, {"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "chrf", "value": 0.56282, "name": "chr-F"}, {"type": "bleu", "value": 22.2, "name": "BLEU"}, {"type": "chrf", "value": 0.50286, "name": "chr-F"}, {"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "chrf", "value": 0.49523, "name": "chr-F"}, {"type": "bleu", "value": 15.9, "name": "BLEU"}, {"type": "chrf", "value": 0.44271, "name": "chr-F"}, {"type": "bleu", "value": 14.8, "name": "BLEU"}, {"type": "chrf", "value": 0.44712, "name": "chr-F"}, {"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54222, "name": "chr-F"}, {"type": "bleu", "value": 19.6, "name": "BLEU"}, {"type": "chrf", "value": 0.47383, "name": "chr-F"}, {"type": "bleu", "value": 18.7, "name": "BLEU"}, {"type": "chrf", "value": 0.46593, "name": "chr-F"}, {"type": "bleu", "value": 14.0, "name": "BLEU"}, {"type": "chrf", "value": 0.41912, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56267, "name": "chr-F"}, {"type": "bleu", "value": 38.8, "name": "BLEU"}, 
{"type": "chrf", "value": 0.64902, "name": "chr-F"}, {"type": "bleu", "value": 33.9, "name": "BLEU"}, {"type": "chrf", "value": 0.60051, "name": "chr-F"}, {"type": "bleu", "value": 32.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59197, "name": "chr-F"}, {"type": "bleu", "value": 22.8, "name": "BLEU"}, {"type": "chrf", "value": 0.50972, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53072, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58671, "name": "chr-F"}, {"type": "bleu", "value": 27.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55677, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53989, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48443, "name": "chr-F"}, {"type": "bleu", "value": 27.3, "name": "BLEU"}, {"type": "chrf", "value": 0.56707, "name": "chr-F"}, {"type": "bleu", "value": 43.2, "name": "BLEU"}, {"type": "chrf", "value": 0.67683, "name": "chr-F"}, {"type": "bleu", "value": 34.3, "name": "BLEU"}, {"type": "chrf", "value": 0.59829, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58723, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.50217, "name": "chr-F"}, {"type": "bleu", "value": 26.5, "name": "BLEU"}, {"type": "chrf", "value": 0.56197, "name": "chr-F"}, {"type": "bleu", "value": 41.7, "name": "BLEU"}, {"type": "chrf", "value": 0.66428, "name": "chr-F"}, {"type": "bleu", "value": 33.1, "name": "BLEU"}, {"type": "chrf", "value": 0.59531, "name": "chr-F"}, {"type": "bleu", "value": 31.7, "name": "BLEU"}, {"type": "chrf", "value": 0.58521, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.50418, "name": "chr-F"}, {"type": "bleu", "value": 14.6, "name": "BLEU"}, {"type": "chrf", "value": 0.44364, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", "value": 0.54309, "name": "chr-F"}, {"type": "bleu", "value": 19.7, "name": "BLEU"}, {"type": "chrf", "value": 0.47458, "name": "chr-F"}, {"type": "bleu", "value": 18.9, "name": "BLEU"}, {"type": "chrf", "value": 0.46702, "name": "chr-F"}, {"type": "bleu", "value": 13.9, "name": "BLEU"}, {"type": "chrf", "value": 0.4172, "name": "chr-F"}, {"type": "bleu", "value": 26.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56668, "name": "chr-F"}, {"type": "bleu", "value": 46.8, "name": "BLEU"}, {"type": "chrf", "value": 0.70282, "name": "chr-F"}, {"type": "bleu", "value": 39.1, "name": "BLEU"}, {"type": "chrf", "value": 0.64408, "name": "chr-F"}, {"type": "bleu", "value": 35.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62256, "name": "chr-F"}, {"type": "bleu", "value": 22.3, "name": "BLEU"}, {"type": "chrf", "value": 0.51705, "name": "chr-F"}, {"type": "bleu", "value": 15.1, "name": "BLEU"}, {"type": "chrf", "value": 0.44428, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52652, "name": "chr-F"}, {"type": "bleu", "value": 19.9, "name": "BLEU"}, {"type": "chrf", "value": 0.47743, "name": "chr-F"}, {"type": "bleu", "value": 18.8, "name": "BLEU"}, {"type": "chrf", "value": 0.46585, "name": "chr-F"}, {"type": "bleu", "value": 14.5, "name": "BLEU"}, {"type": "chrf", "value": 0.41798, "name": "chr-F"}, {"type": "bleu", "value": 23.5, "name": "BLEU"}, {"type": "chrf", "value": 0.53397, "name": "chr-F"}, {"type": "bleu", "value": 43.1, 
"name": "BLEU"}, {"type": "chrf", "value": 0.67741, "name": "chr-F"}, {"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "chrf", "value": 0.57787, "name": "chr-F"}, {"type": "bleu", "value": 32.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59003, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.49768, "name": "chr-F"}, {"type": "bleu", "value": 20.9, "name": "BLEU"}, {"type": "chrf", "value": 0.50787, "name": "chr-F"}, {"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58693, "name": "chr-F"}, {"type": "bleu", "value": 27.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5506, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54139, "name": "chr-F"}, {"type": "bleu", "value": 18.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4723, "name": "chr-F"}, {"type": "bleu", "value": 20.8, "name": "BLEU"}, {"type": "chrf", "value": 0.51514, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.56021, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55176, "name": "chr-F"}, {"type": "bleu", "value": 24.3, "name": "BLEU"}, {"type": "chrf", "value": 0.52998, "name": "chr-F"}, {"type": "bleu", "value": 19.4, "name": "BLEU"}, {"type": "chrf", "value": 0.48344, "name": "chr-F"}, {"type": "bleu", "value": 29.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58002, "name": "chr-F"}, {"type": "bleu", "value": 46.0, "name": "BLEU"}, {"type": "chrf", "value": 0.69694, "name": "chr-F"}, {"type": "bleu", "value": 39.6, "name": "BLEU"}, {"type": "chrf", "value": 0.64146, "name": "chr-F"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53508, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49849, "name": "chr-F"}, {"type": "bleu", "value": 32.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5812, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53939, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.53479, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.46241, "name": "chr-F"}, {"type": "bleu", "value": 27.4, "name": "BLEU"}, {"type": "chrf", "value": 0.57214, "name": "chr-F"}, {"type": "bleu", "value": 40.4, "name": "BLEU"}, {"type": "chrf", "value": 0.66701, "name": "chr-F"}, {"type": "bleu", "value": 37.2, "name": "BLEU"}, {"type": "chrf", "value": 0.63234, "name": "chr-F"}, {"type": "bleu", "value": 35.4, "name": "BLEU"}, {"type": "chrf", "value": 0.61838, "name": "chr-F"}, {"type": "bleu", "value": 24.3, "name": "BLEU"}, {"type": "chrf", "value": 0.52856, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.54446, "name": "chr-F"}, {"type": "bleu", "value": 32.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60131, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.57986, "name": "chr-F"}, {"type": "bleu", "value": 28.7, "name": "BLEU"}, {"type": "chrf", "value": 0.566, "name": "chr-F"}, {"type": "bleu", "value": 21.2, "name": "BLEU"}, {"type": "chrf", "value": 0.49871, "name": "chr-F"}, {"type": "bleu", "value": 17.0, "name": "BLEU"}, {"type": "chrf", "value": 0.46523, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53341, "name": "chr-F"}, {"type": "bleu", 
"value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51481, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.50343, "name": "chr-F"}, {"type": "bleu", "value": 17.1, "name": "BLEU"}, {"type": "chrf", "value": 0.44756, "name": "chr-F"}, {"type": "bleu", "value": 10.2, "name": "BLEU"}, {"type": "chrf", "value": 0.38685, "name": "chr-F"}, {"type": "bleu", "value": 23.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53932, "name": "chr-F"}, {"type": "bleu", "value": 35.4, "name": "BLEU"}, {"type": "chrf", "value": 0.63137, "name": "chr-F"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56587, "name": "chr-F"}, {"type": "bleu", "value": 27.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54523, "name": "chr-F"}, {"type": "bleu", "value": 20.1, "name": "BLEU"}, {"type": "chrf", "value": 0.48275, "name": "chr-F"}, {"type": "bleu", "value": 24.5, "name": "BLEU"}, {"type": "chrf", "value": 0.54583, "name": "chr-F"}, {"type": "bleu", "value": 32.4, "name": "BLEU"}, {"type": "chrf", "value": 0.59952, "name": "chr-F"}, {"type": "bleu", "value": 30.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57418, "name": "chr-F"}, {"type": "bleu", "value": 28.4, "name": "BLEU"}, {"type": "chrf", "value": 0.55838, "name": "chr-F"}, {"type": "bleu", "value": 20.7, "name": "BLEU"}, {"type": "chrf", "value": 0.49438, "name": "chr-F"}, {"type": "bleu", "value": 20.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52303, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57648, "name": "chr-F"}, {"type": "bleu", "value": 18.6, "name": "BLEU"}, {"type": "chrf", "value": 0.47651, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.56624, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52746, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52301, "name": "chr-F"}, {"type": "bleu", "value": 17.7, "name": "BLEU"}, {"type": "chrf", "value": 0.45213, "name": "chr-F"}, {"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57563, "name": "chr-F"}, {"type": "bleu", "value": 39.9, "name": "BLEU"}, {"type": "chrf", "value": 0.66201, "name": "chr-F"}, {"type": "bleu", "value": 35.0, "name": "BLEU"}, {"type": "chrf", "value": 0.6157, "name": "chr-F"}, {"type": "bleu", "value": 33.6, "name": "BLEU"}, {"type": "chrf", "value": 0.60561, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.515, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59607, "name": "chr-F"}, {"type": "bleu", "value": 46.0, "name": "BLEU"}, {"type": "chrf", "value": 0.69032, "name": "chr-F"}, {"type": "bleu", "value": 37.8, "name": "BLEU"}, {"type": "chrf", "value": 0.6261, "name": "chr-F"}, {"type": "bleu", "value": 35.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60692, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51448, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51005, "name": "chr-F"}, {"type": "bleu", "value": 30.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57536, "name": "chr-F"}, {"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.54029, "name": "chr-F"}, {"type": "bleu", "value": 26.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52911, "name": "chr-F"}, 
{"type": "bleu", "value": 18.8, "name": "BLEU"}, {"type": "chrf", "value": 0.4628, "name": "chr-F"}, {"type": "bleu", "value": 15.8, "name": "BLEU"}, {"type": "chrf", "value": 0.45372, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.51096, "name": "chr-F"}, {"type": "bleu", "value": 21.1, "name": "BLEU"}, {"type": "chrf", "value": 0.4862, "name": "chr-F"}, {"type": "bleu", "value": 19.4, "name": "BLEU"}, {"type": "chrf", "value": 0.4687, "name": "chr-F"}, {"type": "bleu", "value": 15.1, "name": "BLEU"}, {"type": "chrf", "value": 0.42689, "name": "chr-F"}, {"type": "bleu", "value": 11.1, "name": "BLEU"}, {"type": "chrf", "value": 0.41078, "name": "chr-F"}, {"type": "bleu", "value": 20.1, "name": "BLEU"}, {"type": "chrf", "value": 0.48619, "name": "chr-F"}, {"type": "bleu", "value": 16.3, "name": "BLEU"}, {"type": "chrf", "value": 0.4385, "name": "chr-F"}, {"type": "bleu", "value": 15.8, "name": "BLEU"}, {"type": "chrf", "value": 0.4304, "name": "chr-F"}, {"type": "bleu", "value": 13.4, "name": "BLEU"}, {"type": "chrf", "value": 0.39849, "name": "chr-F"}, {"type": "bleu", "value": 25.1, "name": "BLEU"}, {"type": "chrf", "value": 0.5529, "name": "chr-F"}, {"type": "bleu", "value": 34.9, "name": "BLEU"}, {"type": "chrf", "value": 0.6215, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.59093, "name": "chr-F"}, {"type": "bleu", "value": 30.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57706, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.50128, "name": "chr-F"}, {"type": "bleu", "value": 15.6, "name": "BLEU"}, {"type": "chrf", "value": 0.45107, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5313, "name": "chr-F"}, {"type": "bleu", "value": 20.7, "name": "BLEU"}, {"type": "chrf", "value": 0.48377, "name": "chr-F"}, {"type": "bleu", "value": 18.5, "name": "BLEU"}, {"type": "chrf", "value": 0.4529, "name": "chr-F"}, {"type": "bleu", "value": 13.8, "name": "BLEU"}, {"type": "chrf", "value": 0.41342, "name": "chr-F"}, {"type": "bleu", "value": 18.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48212, "name": "chr-F"}, {"type": "bleu", "value": 29.3, "name": "BLEU"}, {"type": "chrf", "value": 0.56243, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5334, "name": "chr-F"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.52845, "name": "chr-F"}, {"type": "bleu", "value": 17.9, "name": "BLEU"}, {"type": "chrf", "value": 0.46136, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation afr-deu"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "afr deu devtest"}, "metrics": [{"type": "bleu", "value": 27.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5709, "name": "chr-F"}, {"type": "bleu", "value": 52.4, "name": "BLEU"}, {"type": "chrf", "value": 0.73127, "name": "chr-F"}, {"type": "bleu", "value": 34.8, "name": "BLEU"}, {"type": "chrf", "value": 0.60726, "name": "chr-F"}, {"type": "bleu", "value": 34.4, "name": "BLEU"}, {"type": "chrf", "value": 0.60399, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.50655, "name": "chr-F"}, {"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56575, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56438, "name": "chr-F"}, {"type": "bleu", "value": 
21.1, "name": "BLEU"}, {"type": "chrf", "value": 0.49455, "name": "chr-F"}, {"type": "bleu", "value": 11.8, "name": "BLEU"}, {"type": "chrf", "value": 0.46177, "name": "chr-F"}, {"type": "bleu", "value": 15.6, "name": "BLEU"}, {"type": "chrf", "value": 0.49344, "name": "chr-F"}, {"type": "bleu", "value": 16.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49372, "name": "chr-F"}, {"type": "bleu", "value": 13.8, "name": "BLEU"}, {"type": "chrf", "value": 0.44802, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.53648, "name": "chr-F"}, {"type": "bleu", "value": 19.9, "name": "BLEU"}, {"type": "chrf", "value": 0.48236, "name": "chr-F"}, {"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.58471, "name": "chr-F"}, {"type": "bleu", "value": 27.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56499, "name": "chr-F"}, {"type": "bleu", "value": 42.3, "name": "BLEU"}, {"type": "chrf", "value": 0.67443, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5314, "name": "chr-F"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57503, "name": "chr-F"}, {"type": "bleu", "value": 21.1, "name": "BLEU"}, {"type": "chrf", "value": 0.4986, "name": "chr-F"}, {"type": "bleu", "value": 10.2, "name": "BLEU"}, {"type": "chrf", "value": 0.36979, "name": "chr-F"}, {"type": "bleu", "value": 15.8, "name": "BLEU"}, {"type": "chrf", "value": 0.4131, "name": "chr-F"}, {"type": "bleu", "value": 28.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5461, "name": "chr-F"}, {"type": "bleu", "value": 34.7, "name": "BLEU"}, {"type": "chrf", "value": 0.60877, "name": "chr-F"}, {"type": "bleu", "value": 39.8, "name": "BLEU"}, {"type": "chrf", "value": 0.65706, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.54336, "name": "chr-F"}, {"type": "bleu", "value": 41.0, "name": "BLEU"}, {"type": "chrf", "value": 0.66301, "name": "chr-F"}, {"type": "bleu", "value": 35.7, "name": "BLEU"}, {"type": "chrf", "value": 0.61592, "name": "chr-F"}, {"type": "bleu", "value": 17.0, "name": "BLEU"}, {"type": "chrf", "value": 0.47354, "name": "chr-F"}, {"type": "bleu", "value": 21.7, "name": "BLEU"}, {"type": "chrf", "value": 0.50115, "name": "chr-F"}, {"type": "bleu", "value": 13.5, "name": "BLEU"}, {"type": "chrf", "value": 0.42069, "name": "chr-F"}, {"type": "bleu", "value": 19.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4948, "name": "chr-F"}, {"type": "bleu", "value": 32.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59392, "name": "chr-F"}, {"type": "bleu", "value": 29.5, "name": "BLEU"}, {"type": "chrf", "value": 0.57004, "name": "chr-F"}, {"type": "bleu", "value": 17.5, "name": "BLEU"}, {"type": "chrf", "value": 0.47323, "name": "chr-F"}, {"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5445, "name": "chr-F"}, {"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53875, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.54033, "name": "chr-F"}, {"type": "bleu", "value": 30.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59488, "name": "chr-F"}, {"type": "bleu", "value": 22.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51946, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.46784, "name": "chr-F"}, {"type": "bleu", "value": 24.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54017, "name": "chr-F"}, {"type": "bleu", 
"value": 19.3, "name": "BLEU"}, {"type": "chrf", "value": 0.48185, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.51261, "name": "chr-F"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53223, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58286, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53241, "name": "chr-F"}, {"type": "bleu", "value": 14.1, "name": "BLEU"}, {"type": "chrf", "value": 0.44237, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52755, "name": "chr-F"}, {"type": "bleu", "value": 18.1, "name": "BLEU"}, {"type": "chrf", "value": 0.45667, "name": "chr-F"}, {"type": "bleu", "value": 32.8, "name": "BLEU"}, {"type": "chrf", "value": 0.59219, "name": "chr-F"}, {"type": "bleu", "value": 21.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52899, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.5823, "name": "chr-F"}, {"type": "bleu", "value": 21.2, "name": "BLEU"}, {"type": "chrf", "value": 0.50054, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53179, "name": "chr-F"}, {"type": "bleu", "value": 13.6, "name": "BLEU"}, {"type": "chrf", "value": 0.41165, "name": "chr-F"}, {"type": "bleu", "value": 13.6, "name": "BLEU"}, {"type": "chrf", "value": 0.42831, "name": "chr-F"}, {"type": "bleu", "value": 22.2, "name": "BLEU"}, {"type": "chrf", "value": 0.51203, "name": "chr-F"}, {"type": "bleu", "value": 19.2, "name": "BLEU"}, {"type": "chrf", "value": 0.46357, "name": "chr-F"}, {"type": "bleu", "value": 17.4, "name": "BLEU"}, {"type": "chrf", "value": 0.44885, "name": "chr-F"}, {"type": "bleu", "value": 20.1, "name": "BLEU"}, {"type": "chrf", "value": 0.50973, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55772, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.5459, "name": "chr-F"}, {"type": "bleu", "value": 18.9, "name": "BLEU"}, {"type": "chrf", "value": 0.47816, "name": "chr-F"}, {"type": "bleu", "value": 45.5, "name": "BLEU"}, {"type": "chrf", "value": 0.69438, "name": "chr-F"}, {"type": "bleu", "value": 38.9, "name": "BLEU"}, {"type": "chrf", "value": 0.63701, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53216, "name": "chr-F"}, {"type": "bleu", "value": 36.2, "name": "BLEU"}, {"type": "chrf", "value": 0.62744, "name": "chr-F"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53823, "name": "chr-F"}, {"type": "bleu", "value": 31.7, "name": "BLEU"}, {"type": "chrf", "value": 0.59829, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.57384, "name": "chr-F"}, {"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.56082, "name": "chr-F"}, {"type": "bleu", "value": 34.4, "name": "BLEU"}, {"type": "chrf", "value": 0.62376, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54486, "name": "chr-F"}, {"type": "bleu", "value": 20.0, "name": "BLEU"}, {"type": "chrf", "value": 0.48253, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.5413, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.56838, "name": "chr-F"}, 
{"type": "bleu", "value": 28.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55554, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.51807, "name": "chr-F"}, {"type": "bleu", "value": 22.8, "name": "BLEU"}, {"type": "chrf", "value": 0.51211, "name": "chr-F"}, {"type": "bleu", "value": 19.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4729, "name": "chr-F"}, {"type": "bleu", "value": 14.3, "name": "BLEU"}, {"type": "chrf", "value": 0.41393, "name": "chr-F"}, {"type": "bleu", "value": 34.3, "name": "BLEU"}, {"type": "chrf", "value": 0.61588, "name": "chr-F"}, {"type": "bleu", "value": 31.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58296, "name": "chr-F"}, {"type": "bleu", "value": 21.1, "name": "BLEU"}, {"type": "chrf", "value": 0.49535, "name": "chr-F"}, {"type": "bleu", "value": 15.2, "name": "BLEU"}, {"type": "chrf", "value": 0.44211, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "generaltest2022", "type": "generaltest2022", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 40.2, "name": "BLEU"}, {"type": "chrf", "value": 0.64599, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.54993, "name": "chr-F"}, {"type": "bleu", "value": 35.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59361, "name": "chr-F"}, {"type": "bleu", "value": 31.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59885, "name": "chr-F"}, {"type": "bleu", "value": 40.1, "name": "BLEU"}, {"type": "chrf", "value": 0.64266, "name": "chr-F"}, {"type": "bleu", "value": 37.8, "name": "BLEU"}, {"type": "chrf", "value": 0.63746, "name": "chr-F"}, {"type": "bleu", "value": 35.9, "name": "BLEU"}, {"type": "chrf", "value": 0.60704, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "multi30k_test_2016_flickr", "type": "multi30k-2016_flickr", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 26.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5637, "name": "chr-F"}, {"type": "bleu", "value": 32.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57217, "name": "chr-F"}, {"type": "bleu", "value": 30.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57498, "name": "chr-F"}, {"type": "bleu", "value": 39.1, "name": "BLEU"}, {"type": "chrf", "value": 0.60234, "name": "chr-F"}, {"type": "bleu", "value": 36.7, "name": "BLEU"}, {"type": "chrf", "value": 0.60951, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.62191, "name": "chr-F"}, {"type": "bleu", "value": 47.9, "name": "BLEU"}, {"type": "chrf", "value": 0.69376, "name": "chr-F"}, {"type": "bleu", "value": 29.3, "name": "BLEU"}, {"type": "chrf", "value": 0.59597, "name": "chr-F"}, {"type": "bleu", "value": 45.4, "name": "BLEU"}, {"type": "chrf", "value": 0.6481, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2017_flickr", "type": "multi30k-2017_flickr", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 38.9, "name": "BLEU"}, {"type": "chrf", "value": 0.61895, "name": "chr-F"}, {"type": "bleu", "value": 34.6, "name": "BLEU"}, {"type": "chrf", "value": 0.6057, "name": "chr-F"}, {"type": "bleu", "value": 32.1, "name": "BLEU"}, {"type": "chrf", "value": 0.61458, "name": "chr-F"}, {"type": "bleu", "value": 48.1, "name": "BLEU"}, {"type": "chrf", "value": 0.6963, "name": "chr-F"}, {"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "chrf", "value": 
0.58207, "name": "chr-F"}, {"type": "bleu", "value": 48.0, "name": "BLEU"}, {"type": "chrf", "value": 0.67447, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "multi30k_test_2017_mscoco", "type": "multi30k-2017_mscoco", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 30.9, "name": "BLEU"}, {"type": "chrf", "value": 0.54299, "name": "chr-F"}, {"type": "bleu", "value": 32.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57789, "name": "chr-F"}, {"type": "bleu", "value": 27.3, "name": "BLEU"}, {"type": "chrf", "value": 0.56164, "name": "chr-F"}, {"type": "bleu", "value": 51.9, "name": "BLEU"}, {"type": "chrf", "value": 0.71453, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.53897, "name": "chr-F"}, {"type": "bleu", "value": 46.5, "name": "BLEU"}, {"type": "chrf", "value": 0.65274, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "multi30k_test_2018_flickr", "type": "multi30k-2018_flickr", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.51543, "name": "chr-F"}, {"type": "bleu", "value": 33.1, "name": "BLEU"}, {"type": "chrf", "value": 0.57995, "name": "chr-F"}, {"type": "bleu", "value": 26.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53232, "name": "chr-F"}, {"type": "bleu", "value": 35.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58274, "name": "chr-F"}, {"type": "bleu", "value": 29.3, "name": "BLEU"}, {"type": "chrf", "value": 0.55809, "name": "chr-F"}, {"type": "bleu", "value": 28.7, "name": "BLEU"}, {"type": "chrf", "value": 0.58395, "name": "chr-F"}, {"type": "bleu", "value": 39.3, "name": "BLEU"}, {"type": "chrf", "value": 0.6377, "name": "chr-F"}, {"type": "bleu", "value": 22.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53677, "name": "chr-F"}, {"type": "bleu", "value": 41.0, "name": "BLEU"}, {"type": "chrf", "value": 0.62909, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation eng-fra"}, "dataset": {"name": "newsdiscusstest2015", "type": "newsdiscusstest2015", "args": "eng-fra"}, "metrics": [{"type": "bleu", "value": 35.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62144, "name": "chr-F"}, {"type": "bleu", "value": 37.5, "name": "BLEU"}, {"type": "chrf", "value": 0.60513, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstestALL2020", "type": "newstestALL2020", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56898, "name": "chr-F"}, {"type": "bleu", "value": 30.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58436, "name": "chr-F"}, {"type": "bleu", "value": 33.6, "name": "BLEU"}, {"type": "chrf", "value": 0.62387, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation afr-deu"}, "dataset": {"name": "ntrex128", "type": "ntrex128", "args": "afr-deu"}, "metrics": [{"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.54806, "name": "chr-F"}, {"type": "bleu", "value": 50.6, "name": "BLEU"}, {"type": "chrf", "value": 0.71452, "name": "chr-F"}, {"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.55624, "name": "chr-F"}, {"type": "bleu", "value": 26.9, "name": "BLEU"}, {"type": "chrf", "value": 0.54364, "name": "chr-F"}, {"type": "bleu", "value": 32.3, "name": "BLEU"}, {"type": "chrf", "value": 0.57498, "name": "chr-F"}, {"type": "bleu", "value": 
17.8, "name": "BLEU"}, {"type": "chrf", "value": 0.48215, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.55146, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49288, "name": "chr-F"}, {"type": "bleu", "value": 19.9, "name": "BLEU"}, {"type": "chrf", "value": 0.48488, "name": "chr-F"}, {"type": "bleu", "value": 23.7, "name": "BLEU"}, {"type": "chrf", "value": 0.50933, "name": "chr-F"}, {"type": "bleu", "value": 13.7, "name": "BLEU"}, {"type": "chrf", "value": 0.43995, "name": "chr-F"}, {"type": "bleu", "value": 24.9, "name": "BLEU"}, {"type": "chrf", "value": 0.53312, "name": "chr-F"}, {"type": "bleu", "value": 17.1, "name": "BLEU"}, {"type": "chrf", "value": 0.45297, "name": "chr-F"}, {"type": "bleu", "value": 15.5, "name": "BLEU"}, {"type": "chrf", "value": 0.44323, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.46993, "name": "chr-F"}, {"type": "bleu", "value": 20.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51786, "name": "chr-F"}, {"type": "bleu", "value": 31.3, "name": "BLEU"}, {"type": "chrf", "value": 0.5951, "name": "chr-F"}, {"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53787, "name": "chr-F"}, {"type": "bleu", "value": 24.2, "name": "BLEU"}, {"type": "chrf", "value": 0.5265, "name": "chr-F"}, {"type": "bleu", "value": 28.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5495, "name": "chr-F"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52907, "name": "chr-F"}, {"type": "bleu", "value": 34.6, "name": "BLEU"}, {"type": "chrf", "value": 0.62247, "name": "chr-F"}, {"type": "bleu", "value": 27.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55858, "name": "chr-F"}, {"type": "bleu", "value": 28.3, "name": "BLEU"}, {"type": "chrf", "value": 0.55916, "name": "chr-F"}, {"type": "bleu", "value": 35.6, "name": "BLEU"}, {"type": "chrf", "value": 0.61209, "name": "chr-F"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52704, "name": "chr-F"}, {"type": "bleu", "value": 33.1, "name": "BLEU"}, {"type": "chrf", "value": 0.60742, "name": "chr-F"}, {"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54283, "name": "chr-F"}, {"type": "bleu", "value": 24.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52392, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55467, "name": "chr-F"}, {"type": "bleu", "value": 19.1, "name": "BLEU"}, {"type": "chrf", "value": 0.48064, "name": "chr-F"}, {"type": "bleu", "value": 34.7, "name": "BLEU"}, {"type": "chrf", "value": 0.60592, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.50667, "name": "chr-F"}, {"type": "bleu", "value": 20.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48189, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.5216, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53284, "name": "chr-F"}, {"type": "bleu", "value": 37.5, "name": "BLEU"}, {"type": "chrf", "value": 0.62092, "name": "chr-F"}, {"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.53068, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52754, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55304, "name": "chr-F"}, {"type": 
"bleu", "value": 33.7, "name": "BLEU"}, {"type": "chrf", "value": 0.61371, "name": "chr-F"}, {"type": "bleu", "value": 27.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54844, "name": "chr-F"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53694, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56148, "name": "chr-F"}, {"type": "bleu", "value": 21.1, "name": "BLEU"}, {"type": "chrf", "value": 0.51567, "name": "chr-F"}, {"type": "bleu", "value": 34.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60389, "name": "chr-F"}, {"type": "bleu", "value": 25.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53343, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5303, "name": "chr-F"}, {"type": "bleu", "value": 29.7, "name": "BLEU"}, {"type": "chrf", "value": 0.55542, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57592, "name": "chr-F"}, {"type": "bleu", "value": 33.9, "name": "BLEU"}, {"type": "chrf", "value": 0.60159, "name": "chr-F"}, {"type": "bleu", "value": 32.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5902, "name": "chr-F"}, {"type": "bleu", "value": 38.6, "name": "BLEU"}, {"type": "chrf", "value": 0.62826, "name": "chr-F"}, {"type": "bleu", "value": 16.1, "name": "BLEU"}, {"type": "chrf", "value": 0.42717, "name": "chr-F"}, {"type": "bleu", "value": 24.5, "name": "BLEU"}, {"type": "chrf", "value": 0.4821, "name": "chr-F"}, {"type": "bleu", "value": 16.9, "name": "BLEU"}, {"type": "chrf", "value": 0.4077, "name": "chr-F"}, {"type": "bleu", "value": 16.2, "name": "BLEU"}, {"type": "chrf", "value": 0.40603, "name": "chr-F"}, {"type": "bleu", "value": 18.8, "name": "BLEU"}, {"type": "chrf", "value": 0.4298, "name": "chr-F"}, {"type": "bleu", "value": 15.7, "name": "BLEU"}, {"type": "chrf", "value": 0.47062, "name": "chr-F"}, {"type": "bleu", "value": 24.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53552, "name": "chr-F"}, {"type": "bleu", "value": 20.1, "name": "BLEU"}, {"type": "chrf", "value": 0.48958, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.47091, "name": "chr-F"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49946, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52037, "name": "chr-F"}, {"type": "bleu", "value": 32.7, "name": "BLEU"}, {"type": "chrf", "value": 0.59918, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53484, "name": "chr-F"}, {"type": "bleu", "value": 30.3, "name": "BLEU"}, {"type": "chrf", "value": 0.565, "name": "chr-F"}, {"type": "bleu", "value": 16.0, "name": "BLEU"}, {"type": "chrf", "value": 0.45357, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5496, "name": "chr-F"}, {"type": "bleu", "value": 18.7, "name": "BLEU"}, {"type": "chrf", "value": 0.47041, "name": "chr-F"}, {"type": "bleu", "value": 17.5, "name": "BLEU"}, {"type": "chrf", "value": 0.45725, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.48897, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5271, "name": "chr-F"}, {"type": "bleu", "value": 37.0, "name": "BLEU"}, {"type": "chrf", "value": 0.63076, "name": "chr-F"}, {"type": "bleu", "value": 27.2, "name": "BLEU"}, {"type": "chrf", "value": 0.55231, "name": "chr-F"}, 
{"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56272, "name": "chr-F"}, {"type": "bleu", "value": 36.6, "name": "BLEU"}, {"type": "chrf", "value": 0.61675, "name": "chr-F"}, {"type": "bleu", "value": 11.9, "name": "BLEU"}, {"type": "chrf", "value": 0.40361, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52283, "name": "chr-F"}, {"type": "bleu", "value": 14.7, "name": "BLEU"}, {"type": "chrf", "value": 0.41597, "name": "chr-F"}, {"type": "bleu", "value": 13.0, "name": "BLEU"}, {"type": "chrf", "value": 0.40085, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.448, "name": "chr-F"}, {"type": "bleu", "value": 14.4, "name": "BLEU"}, {"type": "chrf", "value": 0.45618, "name": "chr-F"}, {"type": "bleu", "value": 27.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57183, "name": "chr-F"}, {"type": "bleu", "value": 18.5, "name": "BLEU"}, {"type": "chrf", "value": 0.47504, "name": "chr-F"}, {"type": "bleu", "value": 16.9, "name": "BLEU"}, {"type": "chrf", "value": 0.45829, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.48784, "name": "chr-F"}, {"type": "bleu", "value": 23.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53567, "name": "chr-F"}, {"type": "bleu", "value": 34.8, "name": "BLEU"}, {"type": "chrf", "value": 0.61932, "name": "chr-F"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.55306, "name": "chr-F"}, {"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53968, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56765, "name": "chr-F"}, {"type": "bleu", "value": 14.0, "name": "BLEU"}, {"type": "chrf", "value": 0.42987, "name": "chr-F"}, {"type": "bleu", "value": 20.9, "name": "BLEU"}, {"type": "chrf", "value": 0.49189, "name": "chr-F"}, {"type": "bleu", "value": 17.2, "name": "BLEU"}, {"type": "chrf", "value": 0.44434, "name": "chr-F"}, {"type": "bleu", "value": 16.0, "name": "BLEU"}, {"type": "chrf", "value": 0.43069, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.45889, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48392, "name": "chr-F"}, {"type": "bleu", "value": 27.5, "name": "BLEU"}, {"type": "chrf", "value": 0.5472, "name": "chr-F"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49971, "name": "chr-F"}, {"type": "bleu", "value": 20.2, "name": "BLEU"}, {"type": "chrf", "value": 0.47811, "name": "chr-F"}, {"type": "bleu", "value": 25.1, "name": "BLEU"}, {"type": "chrf", "value": 0.5106, "name": "chr-F"}, {"type": "bleu", "value": 23.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53354, "name": "chr-F"}, {"type": "bleu", "value": 37.1, "name": "BLEU"}, {"type": "chrf", "value": 0.63069, "name": "chr-F"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "chrf", "value": 0.56721, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56298, "name": "chr-F"}, {"type": "bleu", "value": 32.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58483, "name": "chr-F"}, {"type": "bleu", "value": 11.6, "name": "BLEU"}, {"type": "chrf", "value": 0.3662, "name": "chr-F"}, {"type": "bleu", "value": 10.3, "name": "BLEU"}, {"type": "chrf", "value": 0.33936, "name": "chr-F"}, {"type": "bleu", "value": 10.8, "name": "BLEU"}, {"type": "chrf", "value": 0.34636, "name": 
"chr-F"}, {"type": "bleu", "value": 17.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48637, "name": "chr-F"}, {"type": "bleu", "value": 25.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55909, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49579, "name": "chr-F"}, {"type": "bleu", "value": 18.9, "name": "BLEU"}, {"type": "chrf", "value": 0.47936, "name": "chr-F"}, {"type": "bleu", "value": 23.3, "name": "BLEU"}, {"type": "chrf", "value": 0.51105, "name": "chr-F"}, {"type": "bleu", "value": 18.0, "name": "BLEU"}, {"type": "chrf", "value": 0.49203, "name": "chr-F"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.55075, "name": "chr-F"}, {"type": "bleu", "value": 21.9, "name": "BLEU"}, {"type": "chrf", "value": 0.50667, "name": "chr-F"}, {"type": "bleu", "value": 20.8, "name": "BLEU"}, {"type": "chrf", "value": 0.49771, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52333, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51232, "name": "chr-F"}, {"type": "bleu", "value": 32.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58218, "name": "chr-F"}, {"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "chrf", "value": 0.49182, "name": "chr-F"}, {"type": "bleu", "value": 20.3, "name": "BLEU"}, {"type": "chrf", "value": 0.46871, "name": "chr-F"}, {"type": "bleu", "value": 23.6, "name": "BLEU"}, {"type": "chrf", "value": 0.48975, "name": "chr-F"}, {"type": "bleu", "value": 12.5, "name": "BLEU"}, {"type": "chrf", "value": 0.42225, "name": "chr-F"}, {"type": "bleu", "value": 22.2, "name": "BLEU"}, {"type": "chrf", "value": 0.51583, "name": "chr-F"}, {"type": "bleu", "value": 15.1, "name": "BLEU"}, {"type": "chrf", "value": 0.43088, "name": "chr-F"}, {"type": "bleu", "value": 14.6, "name": "BLEU"}, {"type": "chrf", "value": 0.42394, "name": "chr-F"}, {"type": "bleu", "value": 17.7, "name": "BLEU"}, {"type": "chrf", "value": 0.44945, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52537, "name": "chr-F"}, {"type": "bleu", "value": 35.8, "name": "BLEU"}, {"type": "chrf", "value": 0.62757, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54428, "name": "chr-F"}, {"type": "bleu", "value": 24.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52919, "name": "chr-F"}, {"type": "bleu", "value": 30.0, "name": "BLEU"}, {"type": "chrf", "value": 0.56365, "name": "chr-F"}, {"type": "bleu", "value": 11.6, "name": "BLEU"}, {"type": "chrf", "value": 0.40783, "name": "chr-F"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}, {"type": "chrf", "value": 0.51242, "name": "chr-F"}, {"type": "bleu", "value": 14.5, "name": "BLEU"}, {"type": "chrf", "value": 0.41414, "name": "chr-F"}, {"type": "bleu", "value": 13.8, "name": "BLEU"}, {"type": "chrf", "value": 0.41356, "name": "chr-F"}, {"type": "bleu", "value": 17.0, "name": "BLEU"}, {"type": "chrf", "value": 0.43667, "name": "chr-F"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.55633, "name": "chr-F"}, {"type": "bleu", "value": 36.0, "name": "BLEU"}, {"type": "chrf", "value": 0.63172, "name": "chr-F"}, {"type": "bleu", "value": 27.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55161, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.54074, "name": "chr-F"}, {"type": "bleu", "value": 31.7, "name": "BLEU"}, {"type": "chrf", "value": 
0.57106, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.52489, "name": "chr-F"}, {"type": "bleu", "value": 41.6, "name": "BLEU"}, {"type": "chrf", "value": 0.64889, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53358, "name": "chr-F"}, {"type": "bleu", "value": 24.7, "name": "BLEU"}, {"type": "chrf", "value": 0.52089, "name": "chr-F"}, {"type": "bleu", "value": 29.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54863, "name": "chr-F"}, {"type": "bleu", "value": 25.5, "name": "BLEU"}, {"type": "chrf", "value": 0.5465, "name": "chr-F"}, {"type": "bleu", "value": 39.3, "name": "BLEU"}, {"type": "chrf", "value": 0.64444, "name": "chr-F"}, {"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55024, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.53537, "name": "chr-F"}, {"type": "bleu", "value": 31.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56899, "name": "chr-F"}, {"type": "bleu", "value": 11.6, "name": "BLEU"}, {"type": "chrf", "value": 0.40429, "name": "chr-F"}, {"type": "bleu", "value": 20.6, "name": "BLEU"}, {"type": "chrf", "value": 0.49942, "name": "chr-F"}, {"type": "bleu", "value": 14.8, "name": "BLEU"}, {"type": "chrf", "value": 0.4144, "name": "chr-F"}, {"type": "bleu", "value": 13.1, "name": "BLEU"}, {"type": "chrf", "value": 0.39925, "name": "chr-F"}, {"type": "bleu", "value": 16.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4284, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.50884, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.55781, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.52511, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.50796, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53122, "name": "chr-F"}, {"type": "bleu", "value": 23.7, "name": "BLEU"}, {"type": "chrf", "value": 0.54003, "name": "chr-F"}, {"type": "bleu", "value": 37.6, "name": "BLEU"}, {"type": "chrf", "value": 0.63798, "name": "chr-F"}, {"type": "bleu", "value": 28.3, "name": "BLEU"}, {"type": "chrf", "value": 0.56317, "name": "chr-F"}, {"type": "bleu", "value": 33.9, "name": "BLEU"}, {"type": "chrf", "value": 0.59244, "name": "chr-F"}, {"type": "bleu", "value": 14.3, "name": "BLEU"}, {"type": "chrf", "value": 0.44878, "name": "chr-F"}, {"type": "bleu", "value": 24.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52855, "name": "chr-F"}, {"type": "bleu", "value": 17.6, "name": "BLEU"}, {"type": "chrf", "value": 0.46323, "name": "chr-F"}, {"type": "bleu", "value": 16.9, "name": "BLEU"}, {"type": "chrf", "value": 0.45211, "name": "chr-F"}, {"type": "bleu", "value": 20.5, "name": "BLEU"}, {"type": "chrf", "value": 0.47595, "name": "chr-F"}, {"type": "bleu", "value": 13.0, "name": "BLEU"}, {"type": "chrf", "value": 0.4063, "name": "chr-F"}, {"type": "bleu", "value": 10.6, "name": "BLEU"}, {"type": "chrf", "value": 0.37292, "name": "chr-F"}, {"type": "bleu", "value": 10.0, "name": "BLEU"}, {"type": "chrf", "value": 0.36366, "name": "chr-F"}, {"type": "bleu", "value": 12.4, "name": "BLEU"}, {"type": "chrf", "value": 0.38558, "name": "chr-F"}, {"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "chrf", "value": 0.52534, "name": "chr-F"}, {"type": "bleu", "value": 32.2, "name": "BLEU"}, {"type": 
"chrf", "value": 0.60733, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55222, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54549, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57503, "name": "chr-F"}, {"type": "bleu", "value": 18.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49519, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.55126, "name": "chr-F"}, {"type": "bleu", "value": 22.8, "name": "BLEU"}, {"type": "chrf", "value": 0.51684, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.49329, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52316, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52066, "name": "chr-F"}, {"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.6094, "name": "chr-F"}, {"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53303, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51245, "name": "chr-F"}, {"type": "bleu", "value": 28.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54489, "name": "chr-F"}, {"type": "bleu", "value": 22.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52189, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58552, "name": "chr-F"}, {"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.53247, "name": "chr-F"}, {"type": "bleu", "value": 23.4, "name": "BLEU"}, {"type": "chrf", "value": 0.51817, "name": "chr-F"}, {"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "chrf", "value": 0.54582, "name": "chr-F"}, {"type": "bleu", "value": 28.3, "name": "BLEU"}, {"type": "chrf", "value": 0.56549, "name": "chr-F"}, {"type": "bleu", "value": 28.5, "name": "BLEU"}, {"type": "chrf", "value": 0.56372, "name": "chr-F"}, {"type": "bleu", "value": 21.7, "name": "BLEU"}, {"type": "chrf", "value": 0.52259, "name": "chr-F"}, {"type": "bleu", "value": 36.2, "name": "BLEU"}, {"type": "chrf", "value": 0.62439, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.54643, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53857, "name": "chr-F"}, {"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56804, "name": "chr-F"}, {"type": "bleu", "value": 18.6, "name": "BLEU"}, {"type": "chrf", "value": 0.48837, "name": "chr-F"}, {"type": "bleu", "value": 24.5, "name": "BLEU"}, {"type": "chrf", "value": 0.54292, "name": "chr-F"}, {"type": "bleu", "value": 21.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48977, "name": "chr-F"}, {"type": "bleu", "value": 20.5, "name": "BLEU"}, {"type": "chrf", "value": 0.48429, "name": "chr-F"}, {"type": "bleu", "value": 24.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51373, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.54871, "name": "chr-F"}, {"type": "bleu", "value": 41.2, "name": "BLEU"}, {"type": "chrf", "value": 0.65427, "name": "chr-F"}, {"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.55294, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.53911, "name": "chr-F"}, {"type": "bleu", "value": 31.9, "name": 
"BLEU"}, {"type": "chrf", "value": 0.57293, "name": "chr-F"}, {"type": "bleu", "value": 11.8, "name": "BLEU"}, {"type": "chrf", "value": 0.40503, "name": "chr-F"}, {"type": "bleu", "value": 16.4, "name": "BLEU"}, {"type": "chrf", "value": 0.45221, "name": "chr-F"}, {"type": "bleu", "value": 14.4, "name": "BLEU"}, {"type": "chrf", "value": 0.4193, "name": "chr-F"}, {"type": "bleu", "value": 12.7, "name": "BLEU"}, {"type": "chrf", "value": 0.40576, "name": "chr-F"}, {"type": "bleu", "value": 16.4, "name": "BLEU"}, {"type": "chrf", "value": 0.43095, "name": "chr-F"}, {"type": "bleu", "value": 18.5, "name": "BLEU"}, {"type": "chrf", "value": 0.49644, "name": "chr-F"}, {"type": "bleu", "value": 25.7, "name": "BLEU"}, {"type": "chrf", "value": 0.55193, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.50914, "name": "chr-F"}, {"type": "bleu", "value": 21.3, "name": "BLEU"}, {"type": "chrf", "value": 0.49879, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5264, "name": "chr-F"}, {"type": "bleu", "value": 14.1, "name": "BLEU"}, {"type": "chrf", "value": 0.43742, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52486, "name": "chr-F"}, {"type": "bleu", "value": 17.4, "name": "BLEU"}, {"type": "chrf", "value": 0.45409, "name": "chr-F"}, {"type": "bleu", "value": 14.6, "name": "BLEU"}, {"type": "chrf", "value": 0.4266, "name": "chr-F"}, {"type": "bleu", "value": 19.4, "name": "BLEU"}, {"type": "chrf", "value": 0.46414, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation afr-deu"}, "dataset": {"name": "tatoeba-test-v2021-08-07", "type": "tatoeba_mt", "args": "afr-deu"}, "metrics": [{"type": "bleu", "value": 48.8, "name": "BLEU"}, {"type": "chrf", "value": 0.68516, "name": "chr-F"}, {"type": "bleu", "value": 60.8, "name": "BLEU"}, {"type": "chrf", "value": 0.73535, "name": "chr-F"}, {"type": "bleu", "value": 57.6, "name": "BLEU"}, {"type": "chrf", "value": 0.72814, "name": "chr-F"}, {"type": "bleu", "value": 42.4, "name": "BLEU"}, {"type": "chrf", "value": 0.62154, "name": "chr-F"}, {"type": "bleu", "value": 44.1, "name": "BLEU"}, {"type": "chrf", "value": 0.65145, "name": "chr-F"}, {"type": "bleu", "value": 44.8, "name": "BLEU"}, {"type": "chrf", "value": 0.62648, "name": "chr-F"}, {"type": "bleu", "value": 47.4, "name": "BLEU"}, {"type": "chrf", "value": 0.66291, "name": "chr-F"}, {"type": "bleu", "value": 46.5, "name": "BLEU"}, {"type": "chrf", "value": 0.66644, "name": "chr-F"}, {"type": "bleu", "value": 46.1, "name": "BLEU"}, {"type": "chrf", "value": 0.62742, "name": "chr-F"}, {"type": "bleu", "value": 62.5, "name": "BLEU"}, {"type": "chrf", "value": 0.76603, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.47135, "name": "chr-F"}, {"type": "bleu", "value": 49.1, "name": "BLEU"}, {"type": "chrf", "value": 0.68593, "name": "chr-F"}, {"type": "bleu", "value": 55.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6998, "name": "chr-F"}, {"type": "bleu", "value": 52.4, "name": "BLEU"}, {"type": "chrf", "value": 0.69233, "name": "chr-F"}, {"type": "bleu", "value": 49.2, "name": "BLEU"}, {"type": "chrf", "value": 0.66731, "name": "chr-F"}, {"type": "bleu", "value": 45.7, "name": "BLEU"}, {"type": "chrf", "value": 0.65296, "name": "chr-F"}, {"type": "bleu", "value": 55.6, "name": "BLEU"}, {"type": "chrf", "value": 0.70714, "name": "chr-F"}, {"type": "bleu", "value": 53.7, "name": "BLEU"}, {"type": 
"chrf", "value": 0.71112, "name": "chr-F"}, {"type": "bleu", "value": 56.3, "name": "BLEU"}, {"type": "chrf", "value": 0.74022, "name": "chr-F"}, {"type": "bleu", "value": 74.0, "name": "BLEU"}, {"type": "chrf", "value": 0.85238, "name": "chr-F"}, {"type": "bleu", "value": 50.1, "name": "BLEU"}, {"type": "chrf", "value": 0.68073, "name": "chr-F"}, {"type": "bleu", "value": 53.6, "name": "BLEU"}, {"type": "chrf", "value": 0.68902, "name": "chr-F"}, {"type": "bleu", "value": 53.5, "name": "BLEU"}, {"type": "chrf", "value": 0.70071, "name": "chr-F"}, {"type": "bleu", "value": 52.5, "name": "BLEU"}, {"type": "chrf", "value": 0.69957, "name": "chr-F"}, {"type": "bleu", "value": 47.5, "name": "BLEU"}, {"type": "chrf", "value": 0.65153, "name": "chr-F"}, {"type": "bleu", "value": 53.7, "name": "BLEU"}, {"type": "chrf", "value": 0.7232, "name": "chr-F"}, {"type": "bleu", "value": 62.3, "name": "BLEU"}, {"type": "chrf", "value": 0.75679, "name": "chr-F"}, {"type": "bleu", "value": 61.8, "name": "BLEU"}, {"type": "chrf", "value": 0.76077, "name": "chr-F"}, {"type": "bleu", "value": 58.8, "name": "BLEU"}, {"type": "chrf", "value": 0.7646, "name": "chr-F"}, {"type": "bleu", "value": 53.8, "name": "BLEU"}, {"type": "chrf", "value": 0.71685, "name": "chr-F"}, {"type": "bleu", "value": 37.6, "name": "BLEU"}, {"type": "chrf", "value": 0.60029, "name": "chr-F"}, {"type": "bleu", "value": 48.4, "name": "BLEU"}, {"type": "chrf", "value": 0.65647, "name": "chr-F"}, {"type": "bleu", "value": 48.7, "name": "BLEU"}, {"type": "chrf", "value": 0.66811, "name": "chr-F"}, {"type": "bleu", "value": 42.2, "name": "BLEU"}, {"type": "chrf", "value": 0.62766, "name": "chr-F"}, {"type": "bleu", "value": 48.2, "name": "BLEU"}, {"type": "chrf", "value": 0.67276, "name": "chr-F"}, {"type": "bleu", "value": 34.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55993, "name": "chr-F"}, {"type": "bleu", "value": 51.9, "name": "BLEU"}, {"type": "chrf", "value": 0.68199, "name": "chr-F"}, {"type": "bleu", "value": 63.6, "name": "BLEU"}, {"type": "chrf", "value": 0.76316, "name": "chr-F"}, {"type": "bleu", "value": 59.1, "name": "BLEU"}, {"type": "chrf", "value": 0.74291, "name": "chr-F"}, {"type": "bleu", "value": 50.0, "name": "BLEU"}, {"type": "chrf", "value": 0.69593, "name": "chr-F"}, {"type": "bleu", "value": 47.9, "name": "BLEU"}, {"type": "chrf", "value": 0.64482, "name": "chr-F"}, {"type": "bleu", "value": 39.7, "name": "BLEU"}, {"type": "chrf", "value": 0.61606, "name": "chr-F"}, {"type": "bleu", "value": 65.4, "name": "BLEU"}, {"type": "chrf", "value": 0.82285, "name": "chr-F"}, {"type": "bleu", "value": 49.4, "name": "BLEU"}, {"type": "chrf", "value": 0.67435, "name": "chr-F"}, {"type": "bleu", "value": 51.9, "name": "BLEU"}, {"type": "chrf", "value": 0.70975, "name": "chr-F"}, {"type": "bleu", "value": 53.9, "name": "BLEU"}, {"type": "chrf", "value": 0.71497, "name": "chr-F"}, {"type": "bleu", "value": 40.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55253, "name": "chr-F"}, {"type": "bleu", "value": 34.2, "name": "BLEU"}, {"type": "chrf", "value": 0.57907, "name": "chr-F"}, {"type": "bleu", "value": 39.2, "name": "BLEU"}, {"type": "chrf", "value": 0.5828, "name": "chr-F"}, {"type": "bleu", "value": 35.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57554, "name": "chr-F"}, {"type": "bleu", "value": 47.6, "name": "BLEU"}, {"type": "chrf", "value": 0.67258, "name": "chr-F"}, {"type": "bleu", "value": 56.3, "name": "BLEU"}, {"type": "chrf", "value": 0.71355, "name": "chr-F"}, {"type": "bleu", "value": 43.9, "name": 
"BLEU"}, {"type": "chrf", "value": 0.63538, "name": "chr-F"}, {"type": "bleu", "value": 50.7, "name": "BLEU"}, {"type": "chrf", "value": 0.69703, "name": "chr-F"}, {"type": "bleu", "value": 53.3, "name": "BLEU"}, {"type": "chrf", "value": 0.71014, "name": "chr-F"}, {"type": "bleu", "value": 37.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55802, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.44054, "name": "chr-F"}, {"type": "bleu", "value": 23.0, "name": "BLEU"}, {"type": "chrf", "value": 0.44549, "name": "chr-F"}, {"type": "bleu", "value": 48.1, "name": "BLEU"}, {"type": "chrf", "value": 0.63566, "name": "chr-F"}, {"type": "bleu", "value": 54.1, "name": "BLEU"}, {"type": "chrf", "value": 0.69249, "name": "chr-F"}, {"type": "bleu", "value": 61.5, "name": "BLEU"}, {"type": "chrf", "value": 0.76777, "name": "chr-F"}, {"type": "bleu", "value": 68.8, "name": "BLEU"}, {"type": "chrf", "value": 0.80359, "name": "chr-F"}, {"type": "bleu", "value": 20.8, "name": "BLEU"}, {"type": "chrf", "value": 0.37952, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.4836, "name": "chr-F"}, {"type": "bleu", "value": 50.7, "name": "BLEU"}, {"type": "chrf", "value": 0.68769, "name": "chr-F"}, {"type": "bleu", "value": 54.9, "name": "BLEU"}, {"type": "chrf", "value": 0.68956, "name": "chr-F"}, {"type": "bleu", "value": 47.0, "name": "BLEU"}, {"type": "chrf", "value": 0.66551, "name": "chr-F"}, {"type": "bleu", "value": 53.4, "name": "BLEU"}, {"type": "chrf", "value": 0.70241, "name": "chr-F"}, {"type": "bleu", "value": 47.1, "name": "BLEU"}, {"type": "chrf", "value": 0.64048, "name": "chr-F"}, {"type": "bleu", "value": 48.9, "name": "BLEU"}, {"type": "chrf", "value": 0.66676, "name": "chr-F"}, {"type": "bleu", "value": 56.8, "name": "BLEU"}, {"type": "chrf", "value": 0.71884, "name": "chr-F"}, {"type": "bleu", "value": 42.3, "name": "BLEU"}, {"type": "chrf", "value": 0.62438, "name": "chr-F"}, {"type": "bleu", "value": 52.9, "name": "BLEU"}, {"type": "chrf", "value": 0.68433, "name": "chr-F"}, {"type": "bleu", "value": 40.9, "name": "BLEU"}, {"type": "chrf", "value": 0.61176, "name": "chr-F"}, {"type": "bleu", "value": 29.0, "name": "BLEU"}, {"type": "chrf", "value": 0.50806, "name": "chr-F"}, {"type": "bleu", "value": 47.4, "name": "BLEU"}, {"type": "chrf", "value": 0.66238, "name": "chr-F"}, {"type": "bleu", "value": 48.1, "name": "BLEU"}, {"type": "chrf", "value": 0.64466, "name": "chr-F"}, {"type": "bleu", "value": 42.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6198, "name": "chr-F"}, {"type": "bleu", "value": 47.8, "name": "BLEU"}, {"type": "chrf", "value": 0.67198, "name": "chr-F"}, {"type": "bleu", "value": 68.3, "name": "BLEU"}, {"type": "chrf", "value": 0.79538, "name": "chr-F"}, {"type": "bleu", "value": 62.7, "name": "BLEU"}, {"type": "chrf", "value": 0.7654, "name": "chr-F"}, {"type": "bleu", "value": 54.1, "name": "BLEU"}, {"type": "chrf", "value": 0.73006, "name": "chr-F"}, {"type": "bleu", "value": 61.0, "name": "BLEU"}, {"type": "chrf", "value": 0.76476, "name": "chr-F"}, {"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.38732, "name": "chr-F"}, {"type": "bleu", "value": 22.8, "name": "BLEU"}, {"type": "chrf", "value": 0.39058, "name": "chr-F"}, {"type": "bleu", "value": 26.5, "name": "BLEU"}, {"type": "chrf", "value": 0.47244, "name": "chr-F"}, {"type": "bleu", "value": 26.7, "name": "BLEU"}, {"type": "chrf", "value": 0.51096, "name": "chr-F"}, {"type": "bleu", "value": 
37.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53303, "name": "chr-F"}, {"type": "bleu", "value": 42.3, "name": "BLEU"}, {"type": "chrf", "value": 0.59686, "name": "chr-F"}, {"type": "bleu", "value": 25.2, "name": "BLEU"}, {"type": "chrf", "value": 0.42426, "name": "chr-F"}, {"type": "bleu", "value": 23.5, "name": "BLEU"}, {"type": "chrf", "value": 0.41822, "name": "chr-F"}, {"type": "bleu", "value": 23.3, "name": "BLEU"}, {"type": "chrf", "value": 0.44259, "name": "chr-F"}, {"type": "bleu", "value": 55.0, "name": "BLEU"}, {"type": "chrf", "value": 0.70077, "name": "chr-F"}, {"type": "bleu", "value": 46.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6572, "name": "chr-F"}, {"type": "bleu", "value": 57.3, "name": "BLEU"}, {"type": "chrf", "value": 0.7163, "name": "chr-F"}, {"type": "bleu", "value": 50.9, "name": "BLEU"}, {"type": "chrf", "value": 0.67909, "name": "chr-F"}, {"type": "bleu", "value": 47.0, "name": "BLEU"}, {"type": "chrf", "value": 0.6342, "name": "chr-F"}, {"type": "bleu", "value": 53.6, "name": "BLEU"}, {"type": "chrf", "value": 0.64228, "name": "chr-F"}, {"type": "bleu", "value": 47.0, "name": "BLEU"}, {"type": "chrf", "value": 0.64526, "name": "chr-F"}, {"type": "bleu", "value": 52.4, "name": "BLEU"}, {"type": "chrf", "value": 0.66313, "name": "chr-F"}, {"type": "bleu", "value": 55.7, "name": "BLEU"}, {"type": "chrf", "value": 0.71066, "name": "chr-F"}, {"type": "bleu", "value": 49.9, "name": "BLEU"}, {"type": "chrf", "value": 0.67499, "name": "chr-F"}, {"type": "bleu", "value": 47.6, "name": "BLEU"}, {"type": "chrf", "value": 0.66221, "name": "chr-F"}, {"type": "bleu", "value": 44.4, "name": "BLEU"}, {"type": "chrf", "value": 0.6148, "name": "chr-F"}, {"type": "bleu", "value": 45.9, "name": "BLEU"}, {"type": "chrf", "value": 0.61459, "name": "chr-F"}, {"type": "bleu", "value": 41.8, "name": "BLEU"}, {"type": "chrf", "value": 0.60646, "name": "chr-F"}, {"type": "bleu", "value": 44.6, "name": "BLEU"}, {"type": "chrf", "value": 0.63982, "name": "chr-F"}, {"type": "bleu", "value": 54.8, "name": "BLEU"}, {"type": "chrf", "value": 0.72111, "name": "chr-F"}, {"type": "bleu", "value": 59.3, "name": "BLEU"}, {"type": "chrf", "value": 0.73199, "name": "chr-F"}, {"type": "bleu", "value": 46.7, "name": "BLEU"}, {"type": "chrf", "value": 0.67269, "name": "chr-F"}, {"type": "bleu", "value": 48.9, "name": "BLEU"}, {"type": "chrf", "value": 0.68204, "name": "chr-F"}, {"type": "bleu", "value": 51.0, "name": "BLEU"}, {"type": "chrf", "value": 0.69314, "name": "chr-F"}, {"type": "bleu", "value": 55.8, "name": "BLEU"}, {"type": "chrf", "value": 0.6923, "name": "chr-F"}, {"type": "bleu", "value": 48.8, "name": "BLEU"}, {"type": "chrf", "value": 0.68483, "name": "chr-F"}, {"type": "bleu", "value": 57.4, "name": "BLEU"}, {"type": "chrf", "value": 0.71685, "name": "chr-F"}, {"type": "bleu", "value": 52.6, "name": "BLEU"}, {"type": "chrf", "value": 0.70312, "name": "chr-F"}, {"type": "bleu", "value": 56.2, "name": "BLEU"}, {"type": "chrf", "value": 0.7388, "name": "chr-F"}, {"type": "bleu", "value": 48.9, "name": "BLEU"}, {"type": "chrf", "value": 0.68518, "name": "chr-F"}, {"type": "bleu", "value": 57.3, "name": "BLEU"}, {"type": "chrf", "value": 0.71465, "name": "chr-F"}, {"type": "bleu", "value": 55.2, "name": "BLEU"}, {"type": "chrf", "value": 0.71415, "name": "chr-F"}, {"type": "bleu", "value": 45.8, "name": "BLEU"}, {"type": "chrf", "value": 0.67705, "name": "chr-F"}, {"type": "bleu", "value": 56.0, "name": "BLEU"}, {"type": "chrf", "value": 0.73721, "name": "chr-F"}, {"type": "bleu", 
"value": 22.9, "name": "BLEU"}, {"type": "chrf", "value": 0.41564, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.47832, "name": "chr-F"}, {"type": "bleu", "value": 39.7, "name": "BLEU"}, {"type": "chrf", "value": 0.58486, "name": "chr-F"}, {"type": "bleu", "value": 20.2, "name": "BLEU"}, {"type": "chrf", "value": 0.39772, "name": "chr-F"}, {"type": "bleu", "value": 47.9, "name": "BLEU"}, {"type": "chrf", "value": 0.66592, "name": "chr-F"}, {"type": "bleu", "value": 51.8, "name": "BLEU"}, {"type": "chrf", "value": 0.6768, "name": "chr-F"}, {"type": "bleu", "value": 47.7, "name": "BLEU"}, {"type": "chrf", "value": 0.65788, "name": "chr-F"}, {"type": "bleu", "value": 43.1, "name": "BLEU"}, {"type": "chrf", "value": 0.64124, "name": "chr-F"}, {"type": "bleu", "value": 46.9, "name": "BLEU"}, {"type": "chrf", "value": 0.65488, "name": "chr-F"}, {"type": "bleu", "value": 46.8, "name": "BLEU"}, {"type": "chrf", "value": 0.66941, "name": "chr-F"}, {"type": "bleu", "value": 62.4, "name": "BLEU"}, {"type": "chrf", "value": 0.75755, "name": "chr-F"}, {"type": "bleu", "value": 58.6, "name": "BLEU"}, {"type": "chrf", "value": 0.74773, "name": "chr-F"}, {"type": "bleu", "value": 51.8, "name": "BLEU"}, {"type": "chrf", "value": 0.72256, "name": "chr-F"}, {"type": "bleu", "value": 63.6, "name": "BLEU"}, {"type": "chrf", "value": 0.78598, "name": "chr-F"}, {"type": "bleu", "value": 49.1, "name": "BLEU"}, {"type": "chrf", "value": 0.67249, "name": "chr-F"}, {"type": "bleu", "value": 57.3, "name": "BLEU"}, {"type": "chrf", "value": 0.7174, "name": "chr-F"}, {"type": "bleu", "value": 53.0, "name": "BLEU"}, {"type": "chrf", "value": 0.69777, "name": "chr-F"}, {"type": "bleu", "value": 53.5, "name": "BLEU"}, {"type": "chrf", "value": 0.72413, "name": "chr-F"}, {"type": "bleu", "value": 56.3, "name": "BLEU"}, {"type": "chrf", "value": 0.7296, "name": "chr-F"}, {"type": "bleu", "value": 48.2, "name": "BLEU"}, {"type": "chrf", "value": 0.67364, "name": "chr-F"}, {"type": "bleu", "value": 53.7, "name": "BLEU"}, {"type": "chrf", "value": 0.68851, "name": "chr-F"}, {"type": "bleu", "value": 49.1, "name": "BLEU"}, {"type": "chrf", "value": 0.66299, "name": "chr-F"}, {"type": "bleu", "value": 43.4, "name": "BLEU"}, {"type": "chrf", "value": 0.64106, "name": "chr-F"}, {"type": "bleu", "value": 49.1, "name": "BLEU"}, {"type": "chrf", "value": 0.6761, "name": "chr-F"}, {"type": "bleu", "value": 55.2, "name": "BLEU"}, {"type": "chrf", "value": 0.72746, "name": "chr-F"}, {"type": "bleu", "value": 55.4, "name": "BLEU"}, {"type": "chrf", "value": 0.7058, "name": "chr-F"}, {"type": "bleu", "value": 43.0, "name": "BLEU"}, {"type": "chrf", "value": 0.61642, "name": "chr-F"}, {"type": "bleu", "value": 47.0, "name": "BLEU"}, {"type": "chrf", "value": 0.66185, "name": "chr-F"}, {"type": "bleu", "value": 56.5, "name": "BLEU"}, {"type": "chrf", "value": 0.71252, "name": "chr-F"}, {"type": "bleu", "value": 52.0, "name": "BLEU"}, {"type": "chrf", "value": 0.65934, "name": "chr-F"}, {"type": "bleu", "value": 53.5, "name": "BLEU"}, {"type": "chrf", "value": 0.70356, "name": "chr-F"}, {"type": "bleu", "value": 62.7, "name": "BLEU"}, {"type": "chrf", "value": 0.74751, "name": "chr-F"}, {"type": "bleu", "value": 56.7, "name": "BLEU"}, {"type": "chrf", "value": 0.71714, "name": "chr-F"}, {"type": "bleu", "value": 48.7, "name": "BLEU"}, {"type": "chrf", "value": 0.68849, "name": "chr-F"}, {"type": "bleu", "value": 53.3, "name": "BLEU"}, {"type": "chrf", "value": 0.7016, "name": "chr-F"}, 
{"type": "bleu", "value": 50.8, "name": "BLEU"}, {"type": "chrf", "value": 0.68602, "name": "chr-F"}, {"type": "bleu", "value": 52.4, "name": "BLEU"}, {"type": "chrf", "value": 0.68162, "name": "chr-F"}, {"type": "bleu", "value": 48.4, "name": "BLEU"}, {"type": "chrf", "value": 0.66118, "name": "chr-F"}, {"type": "bleu", "value": 46.6, "name": "BLEU"}, {"type": "chrf", "value": 0.65923, "name": "chr-F"}, {"type": "bleu", "value": 49.7, "name": "BLEU"}, {"type": "chrf", "value": 0.67601, "name": "chr-F"}, {"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.52376, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.44187, "name": "chr-F"}, {"type": "bleu", "value": 20.2, "name": "BLEU"}, {"type": "chrf", "value": 0.4341, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ben-eng"}, "dataset": {"name": "tico19-test", "type": "tico19-test", "args": "ben-eng"}, "metrics": [{"type": "bleu", "value": 27.3, "name": "BLEU"}, {"type": "chrf", "value": 0.55418, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.45176, "name": "chr-F"}, {"type": "bleu", "value": 20.9, "name": "BLEU"}, {"type": "chrf", "value": 0.49778, "name": "chr-F"}, {"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "chrf", "value": 0.51344, "name": "chr-F"}, {"type": "bleu", "value": 15.0, "name": "BLEU"}, {"type": "chrf", "value": 0.39153, "name": "chr-F"}, {"type": "bleu", "value": 12.4, "name": "BLEU"}, {"type": "chrf", "value": 0.35348, "name": "chr-F"}, {"type": "bleu", "value": 13.1, "name": "BLEU"}, {"type": "chrf", "value": 0.36879, "name": "chr-F"}, {"type": "bleu", "value": 14.7, "name": "BLEU"}, {"type": "chrf", "value": 0.38526, "name": "chr-F"}, {"type": "bleu", "value": 38.2, "name": "BLEU"}, {"type": "chrf", "value": 0.62001, "name": "chr-F"}, {"type": "bleu", "value": 48.3, "name": "BLEU"}, {"type": "chrf", "value": 0.71654, "name": "chr-F"}, {"type": "bleu", "value": 50.2, "name": "BLEU"}, {"type": "chrf", "value": 0.71947, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58617, "name": "chr-F"}, {"type": "bleu", "value": 23.9, "name": "BLEU"}, {"type": "chrf", "value": 0.50453, "name": "chr-F"}, {"type": "bleu", "value": 28.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55031, "name": "chr-F"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56113, "name": "chr-F"}, {"type": "bleu", "value": 35.8, "name": "BLEU"}, {"type": "chrf", "value": 0.60512, "name": "chr-F"}, {"type": "bleu", "value": 33.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5753, "name": "chr-F"}, {"type": "bleu", "value": 35.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58823, "name": "chr-F"}, {"type": "bleu", "value": 39.6, "name": "BLEU"}, {"type": "chrf", "value": 0.64146, "name": "chr-F"}, {"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.51582, "name": "chr-F"}, {"type": "bleu", "value": 30.9, "name": "BLEU"}, {"type": "chrf", "value": 0.57182, "name": "chr-F"}, {"type": "bleu", "value": 33.7, "name": "BLEU"}, {"type": "chrf", "value": 0.58341, "name": "chr-F"}, {"type": "bleu", "value": 21.4, "name": "BLEU"}, {"type": "chrf", "value": 0.51194, "name": "chr-F"}, {"type": "bleu", "value": 16.8, "name": "BLEU"}, {"type": "chrf", "value": 0.43359, "name": "chr-F"}, {"type": "bleu", "value": 20.3, "name": "BLEU"}, {"type": "chrf", "value": 0.47089, "name": "chr-F"}, {"type": "bleu", "value": 22.8, 
"name": "BLEU"}, {"type": "chrf", "value": 0.48435, "name": "chr-F"}, {"type": "bleu", "value": 30.1, "name": "BLEU"}, {"type": "chrf", "value": 0.5706, "name": "chr-F"}, {"type": "bleu", "value": 19.7, "name": "BLEU"}, {"type": "chrf", "value": 0.46212, "name": "chr-F"}, {"type": "bleu", "value": 24.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51024, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51651, "name": "chr-F"}, {"type": "bleu", "value": 47.4, "name": "BLEU"}, {"type": "chrf", "value": 0.72228, "name": "chr-F"}, {"type": "bleu", "value": 33.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58934, "name": "chr-F"}, {"type": "bleu", "value": 44.1, "name": "BLEU"}, {"type": "chrf", "value": 0.67509, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54979, "name": "chr-F"}, {"type": "bleu", "value": 21.0, "name": "BLEU"}, {"type": "chrf", "value": 0.47627, "name": "chr-F"}, {"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.52, "name": "chr-F"}, {"type": "bleu", "value": 28.5, "name": "BLEU"}, {"type": "chrf", "value": 0.54172, "name": "chr-F"}, {"type": "bleu", "value": 23.1, "name": "BLEU"}, {"type": "chrf", "value": 0.48655, "name": "chr-F"}, {"type": "bleu", "value": 16.2, "name": "BLEU"}, {"type": "chrf", "value": 0.4098, "name": "chr-F"}, {"type": "bleu", "value": 19.5, "name": "BLEU"}, {"type": "chrf", "value": 0.44879, "name": "chr-F"}, {"type": "bleu", "value": 20.4, "name": "BLEU"}, {"type": "chrf", "value": 0.4528, "name": "chr-F"}, {"type": "bleu", "value": 30.4, "name": "BLEU"}, {"type": "chrf", "value": 0.59787, "name": "chr-F"}, {"type": "bleu", "value": 24.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52211, "name": "chr-F"}, {"type": "bleu", "value": 26.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56473, "name": "chr-F"}, {"type": "bleu", "value": 31.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58626, "name": "chr-F"}, {"type": "bleu", "value": 33.1, "name": "BLEU"}, {"type": "chrf", "value": 0.59078, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.51957, "name": "chr-F"}, {"type": "bleu", "value": 17.2, "name": "BLEU"}, {"type": "chrf", "value": 0.43707, "name": "chr-F"}, {"type": "bleu", "value": 20.1, "name": "BLEU"}, {"type": "chrf", "value": 0.47484, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.48812, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2008", "type": "wmt-2008-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5245, "name": "chr-F"}, {"type": "bleu", "value": 24.9, "name": "BLEU"}, {"type": "chrf", "value": 0.52805, "name": "chr-F"}, {"type": "bleu", "value": 25.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54135, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53925, "name": "chr-F"}, {"type": "bleu", "value": 26.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53756, "name": "chr-F"}, {"type": "bleu", "value": 25.5, "name": "BLEU"}, {"type": "chrf", "value": 0.54147, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53296, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52399, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", 
"value": 0.54809, "name": "chr-F"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "chrf", "value": 0.56027, "name": "chr-F"}, {"type": "bleu", "value": 21.8, "name": "BLEU"}, {"type": "chrf", "value": 0.52211, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", "value": 0.53878, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58122, "name": "chr-F"}, {"type": "bleu", "value": 20.9, "name": "BLEU"}, {"type": "chrf", "value": 0.51468, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2009", "type": "wmt-2009-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52537, "name": "chr-F"}, {"type": "bleu", "value": 27.1, "name": "BLEU"}, {"type": "chrf", "value": 0.54467, "name": "chr-F"}, {"type": "bleu", "value": 26.1, "name": "BLEU"}, {"type": "chrf", "value": 0.54545, "name": "chr-F"}, {"type": "bleu", "value": 26.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54339, "name": "chr-F"}, {"type": "bleu", "value": 25.9, "name": "BLEU"}, {"type": "chrf", "value": 0.53323, "name": "chr-F"}, {"type": "bleu", "value": 25.0, "name": "BLEU"}, {"type": "chrf", "value": 0.53408, "name": "chr-F"}, {"type": "bleu", "value": 24.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52999, "name": "chr-F"}, {"type": "bleu", "value": 21.5, "name": "BLEU"}, {"type": "chrf", "value": 0.52387, "name": "chr-F"}, {"type": "bleu", "value": 28.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57057, "name": "chr-F"}, {"type": "bleu", "value": 29.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57376, "name": "chr-F"}, {"type": "bleu", "value": 21.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5198, "name": "chr-F"}, {"type": "bleu", "value": 29.5, "name": "BLEU"}, {"type": "chrf", "value": 0.56151, "name": "chr-F"}, {"type": "bleu", "value": 31.4, "name": "BLEU"}, {"type": "chrf", "value": 0.58173, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52409, "name": "chr-F"}, {"type": "bleu", "value": 32.9, "name": "BLEU"}, {"type": "chrf", "value": 0.58598, "name": "chr-F"}, {"type": "bleu", "value": 31.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58722, "name": "chr-F"}, {"type": "bleu", "value": 33.1, "name": "BLEU"}, {"type": "chrf", "value": 0.59235, "name": "chr-F"}, {"type": "bleu", "value": 20.7, "name": "BLEU"}, {"type": "chrf", "value": 0.51708, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.56094, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2010", "type": "wmt-2010-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 23.5, "name": "BLEU"}, {"type": "chrf", "value": 0.53608, "name": "chr-F"}, {"type": "bleu", "value": 28.8, "name": "BLEU"}, {"type": "chrf", "value": 0.56348, "name": "chr-F"}, {"type": "bleu", "value": 27.2, "name": "BLEU"}, {"type": "chrf", "value": 0.5551, "name": "chr-F"}, {"type": "bleu", "value": 30.6, "name": "BLEU"}, {"type": "chrf", "value": 0.57375, "name": "chr-F"}, {"type": "bleu", "value": 29.8, "name": "BLEU"}, {"type": "chrf", "value": 0.57666, "name": "chr-F"}, {"type": "bleu", "value": 28.2, "name": "BLEU"}, {"type": "chrf", "value": 0.56822, "name": "chr-F"}, {"type": "bleu", "value": 31.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58446, "name": "chr-F"}, {"type": "bleu", "value": 24.8, 
"name": "BLEU"}, {"type": "chrf", "value": 0.54037, "name": "chr-F"}, {"type": "bleu", "value": 31.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58935, "name": "chr-F"}, {"type": "bleu", "value": 35.6, "name": "BLEU"}, {"type": "chrf", "value": 0.6123, "name": "chr-F"}, {"type": "bleu", "value": 23.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52993, "name": "chr-F"}, {"type": "bleu", "value": 31.7, "name": "BLEU"}, {"type": "chrf", "value": 0.5858, "name": "chr-F"}, {"type": "bleu", "value": 36.8, "name": "BLEU"}, {"type": "chrf", "value": 0.61883, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.54232, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2011", "type": "wmt-2011-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 22.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52042, "name": "chr-F"}, {"type": "bleu", "value": 27.8, "name": "BLEU"}, {"type": "chrf", "value": 0.5538, "name": "chr-F"}, {"type": "bleu", "value": 28.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55651, "name": "chr-F"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.56004, "name": "chr-F"}, {"type": "bleu", "value": 25.8, "name": "BLEU"}, {"type": "chrf", "value": 0.54263, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.54883, "name": "chr-F"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "chrf", "value": 0.55738, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.52251, "name": "chr-F"}, {"type": "bleu", "value": 33.3, "name": "BLEU"}, {"type": "chrf", "value": 0.60292, "name": "chr-F"}, {"type": "bleu", "value": 37.6, "name": "BLEU"}, {"type": "chrf", "value": 0.61355, "name": "chr-F"}, {"type": "bleu", "value": 22.1, "name": "BLEU"}, {"type": "chrf", "value": 0.52082, "name": "chr-F"}, {"type": "bleu", "value": 32.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58971, "name": "chr-F"}, {"type": "bleu", "value": 38.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62318, "name": "chr-F"}, {"type": "bleu", "value": 34.0, "name": "BLEU"}, {"type": "chrf", "value": 0.60467, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2012", "type": "wmt-2012-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 22.9, "name": "BLEU"}, {"type": "chrf", "value": 0.52126, "name": "chr-F"}, {"type": "bleu", "value": 27.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5498, "name": "chr-F"}, {"type": "bleu", "value": 26.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55088, "name": "chr-F"}, {"type": "bleu", "value": 29.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5595, "name": "chr-F"}, {"type": "bleu", "value": 27.5, "name": "BLEU"}, {"type": "chrf", "value": 0.55507, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5516, "name": "chr-F"}, {"type": "bleu", "value": 30.1, "name": "BLEU"}, {"type": "chrf", "value": 0.56307, "name": "chr-F"}, {"type": "bleu", "value": 22.9, "name": "BLEU"}, {"type": "chrf", "value": 0.52121, "name": "chr-F"}, {"type": "bleu", "value": 30.8, "name": "BLEU"}, {"type": "chrf", "value": 0.58675, "name": "chr-F"}, {"type": "bleu", "value": 37.9, "name": "BLEU"}, {"type": "chrf", "value": 0.61689, "name": "chr-F"}, {"type": "bleu", "value": 23.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52009, "name": "chr-F"}, {"type": 
"bleu", "value": 32.3, "name": "BLEU"}, {"type": "chrf", "value": 0.58405, "name": "chr-F"}, {"type": "bleu", "value": 38.5, "name": "BLEU"}, {"type": "chrf", "value": 0.62038, "name": "chr-F"}, {"type": "bleu", "value": 18.3, "name": "BLEU"}, {"type": "chrf", "value": 0.47965, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.61258, "name": "chr-F"}, {"type": "bleu", "value": 24.2, "name": "BLEU"}, {"type": "chrf", "value": 0.52674, "name": "chr-F"}, {"type": "bleu", "value": 27.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5376, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2013", "type": "wmt-2013-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 25.3, "name": "BLEU"}, {"type": "chrf", "value": 0.54483, "name": "chr-F"}, {"type": "bleu", "value": 30.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57212, "name": "chr-F"}, {"type": "bleu", "value": 28.4, "name": "BLEU"}, {"type": "chrf", "value": 0.55258, "name": "chr-F"}, {"type": "bleu", "value": 30.6, "name": "BLEU"}, {"type": "chrf", "value": 0.56179, "name": "chr-F"}, {"type": "bleu", "value": 31.0, "name": "BLEU"}, {"type": "chrf", "value": 0.57382, "name": "chr-F"}, {"type": "bleu", "value": 28.8, "name": "BLEU"}, {"type": "chrf", "value": 0.55576, "name": "chr-F"}, {"type": "bleu", "value": 30.9, "name": "BLEU"}, {"type": "chrf", "value": 0.5622, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.5483, "name": "chr-F"}, {"type": "bleu", "value": 32.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58195, "name": "chr-F"}, {"type": "bleu", "value": 34.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59254, "name": "chr-F"}, {"type": "bleu", "value": 24.6, "name": "BLEU"}, {"type": "chrf", "value": 0.53465, "name": "chr-F"}, {"type": "bleu", "value": 32.9, "name": "BLEU"}, {"type": "chrf", "value": 0.58395, "name": "chr-F"}, {"type": "bleu", "value": 34.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58748, "name": "chr-F"}, {"type": "bleu", "value": 22.4, "name": "BLEU"}, {"type": "chrf", "value": 0.5198, "name": "chr-F"}, {"type": "bleu", "value": 28.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55557, "name": "chr-F"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54627, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.5554, "name": "chr-F"}, {"type": "bleu", "value": 24.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53925, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "newstest2014", "type": "wmt-2014-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 33.9, "name": "BLEU"}, {"type": "chrf", "value": 0.61449, "name": "chr-F"}, {"type": "bleu", "value": 32.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58733, "name": "chr-F"}, {"type": "bleu", "value": 26.5, "name": "BLEU"}, {"type": "chrf", "value": 0.57701, "name": "chr-F"}, {"type": "bleu", "value": 38.1, "name": "BLEU"}, {"type": "chrf", "value": 0.63976, "name": "chr-F"}, {"type": "bleu", "value": 36.8, "name": "BLEU"}, {"type": "chrf", "value": 0.62627, "name": "chr-F"}, {"type": "bleu", "value": 26.4, "name": "BLEU"}, {"type": "chrf", "value": 0.56343, "name": "chr-F"}, {"type": "bleu", "value": 36.6, "name": "BLEU"}, {"type": "chrf", "value": 0.62633, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": 
"newstest2015", "type": "wmt-2015-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 30.7, "name": "BLEU"}, {"type": "chrf", "value": 0.56562, "name": "chr-F"}, {"type": "bleu", "value": 33.3, "name": "BLEU"}, {"type": "chrf", "value": 0.59036, "name": "chr-F"}, {"type": "bleu", "value": 30.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58604, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58794, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "newstest2016", "type": "wmt-2016-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 32.6, "name": "BLEU"}, {"type": "chrf", "value": 0.58896, "name": "chr-F"}, {"type": "bleu", "value": 39.4, "name": "BLEU"}, {"type": "chrf", "value": 0.63945, "name": "chr-F"}, {"type": "bleu", "value": 35.9, "name": "BLEU"}, {"type": "chrf", "value": 0.62731, "name": "chr-F"}, {"type": "bleu", "value": 38.1, "name": "BLEU"}, {"type": "chrf", "value": 0.63051, "name": "chr-F"}, {"type": "bleu", "value": 32.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58858, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "newstest2017", "type": "wmt-2017-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 29.0, "name": "BLEU"}, {"type": "chrf", "value": 0.55759, "name": "chr-F"}, {"type": "bleu", "value": 34.8, "name": "BLEU"}, {"type": "chrf", "value": 0.60252, "name": "chr-F"}, {"type": "bleu", "value": 28.7, "name": "BLEU"}, {"type": "chrf", "value": 0.57779, "name": "chr-F"}, {"type": "bleu", "value": 20.2, "name": "BLEU"}, {"type": "chrf", "value": 0.51103, "name": "chr-F"}, {"type": "bleu", "value": 36.1, "name": "BLEU"}, {"type": "chrf", "value": 0.61663, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "newstest2018", "type": "wmt-2018-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 29.6, "name": "BLEU"}, {"type": "chrf", "value": 0.56663, "name": "chr-F"}, {"type": "bleu", "value": 41.8, "name": "BLEU"}, {"type": "chrf", "value": 0.65768, "name": "chr-F"}, {"type": "bleu", "value": 43.5, "name": "BLEU"}, {"type": "chrf", "value": 0.6759, "name": "chr-F"}, {"type": "bleu", "value": 31.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58427, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-deu"}, "dataset": {"name": "newstest2019", "type": "wmt-2019-news", "args": "ces-deu"}, "metrics": [{"type": "bleu", "value": 23.8, "name": "BLEU"}, {"type": "chrf", "value": 0.53405, "name": "chr-F"}, {"type": "bleu", "value": 37.7, "name": "BLEU"}, {"type": "chrf", "value": 0.62158, "name": "chr-F"}, {"type": "bleu", "value": 34.4, "name": "BLEU"}, {"type": "chrf", "value": 0.61819, "name": "chr-F"}, {"type": "bleu", "value": 39.8, "name": "BLEU"}, {"type": "chrf", "value": 0.6464, "name": "chr-F"}, {"type": "bleu", "value": 27.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59291, "name": "chr-F"}, {"type": "bleu", "value": 22.5, "name": "BLEU"}, {"type": "chrf", "value": 0.51165, "name": "chr-F"}, {"type": "bleu", "value": 29.1, "name": "BLEU"}, {"type": "chrf", "value": 0.58019, "name": "chr-F"}, {"type": "bleu", "value": 37.8, "name": "BLEU"}, {"type": "chrf", "value": 0.62499, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation deu-eng"}, "dataset": {"name": "newstest2020", "type": "wmt-2020-news", "args": "deu-eng"}, "metrics": [{"type": "bleu", "value": 30.9, 
"name": "BLEU"}, {"type": "chrf", "value": 0.56495, "name": "chr-F"}, {"type": "bleu", "value": 31.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59211, "name": "chr-F"}, {"type": "bleu", "value": 30.2, "name": "BLEU"}, {"type": "chrf", "value": 0.58436, "name": "chr-F"}, {"type": "bleu", "value": 26.6, "name": "BLEU"}, {"type": "chrf", "value": 0.59478, "name": "chr-F"}, {"type": "bleu", "value": 27.7, "name": "BLEU"}, {"type": "chrf", "value": 0.56674, "name": "chr-F"}, {"type": "bleu", "value": 10.8, "name": "BLEU"}, {"type": "chrf", "value": 0.37276, "name": "chr-F"}, {"type": "bleu", "value": 33.6, "name": "BLEU"}, {"type": "chrf", "value": 0.62387, "name": "chr-F"}]}, {"task": {"type": "translation", "name": "Translation ces-eng"}, "dataset": {"name": "newstest2021", "type": "wmt-2021-news", "args": "ces-eng"}, "metrics": [{"type": "bleu", "value": 25.6, "name": "BLEU"}, {"type": "chrf", "value": 0.54943, "name": "chr-F"}, {"type": "bleu", "value": 30.5, "name": "BLEU"}, {"type": "chrf", "value": 0.58675, "name": "chr-F"}, {"type": "bleu", "value": 30.0, "name": "BLEU"}, {"type": "chrf", "value": 0.5769, "name": "chr-F"}, {"type": "bleu", "value": 24.9, "name": "BLEU"}, {"type": "chrf", "value": 0.55381, "name": "chr-F"}, {"type": "bleu", "value": 37.2, "name": "BLEU"}, {"type": "chrf", "value": 0.63942, "name": "chr-F"}, {"type": "bleu", "value": 29.2, "name": "BLEU"}, {"type": "chrf", "value": 0.53701, "name": "chr-F"}, {"type": "bleu", "value": 33.7, "name": "BLEU"}, {"type": "chrf", "value": 0.6076, "name": "chr-F"}]}]}]}
|
task
|
[
"TRANSLATION"
] | 45,130 |
marmolpen3/all-roberta-large-v1-sla-obligations-rights
|
marmolpen3
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-08-19T10:10:47Z |
2023-08-19T10:11:41+00:00
| 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# marmolpen3/all-roberta-large-v1-sla-obligations-rights
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (see the training sketch below).
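For reference, a minimal sketch of that two-step loop using the classic `SetFitTrainer` API — the example sentences, labels, and hyperparameters are illustrative assumptions, not the data or settings actually used for this model:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot data: SLA sentences labeled as obligation (1) or right (0)
train_ds = Dataset.from_dict({
    "text": [
        "The provider shall restore service within four hours of an outage.",
        "The customer may terminate the agreement with 30 days written notice.",
    ],
    "label": [1, 0],
})

# Step 1 starts from a plain Sentence Transformer checkpoint
model = SetFitModel.from_pretrained("sentence-transformers/all-roberta-large-v1")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive loss for fine-tuning the body
    num_iterations=20,                # contrastive text pairs generated per sample
    batch_size=16,
)
trainer.train()
```
`trainer.train()` runs both steps in sequence: the contrastive pass fine-tunes the embedding body, after which a classification head is fitted on the resulting embeddings.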
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("marmolpen3/all-roberta-large-v1-sla-obligations-rights")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| null |
Non_BioNLP
|
# marmolpen3/all-roberta-large-v1-sla-obligations-rights
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("marmolpen3/all-roberta-large-v1-sla-obligations-rights")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,132 |
sohyun416/opus-mt-ko-en-finetuned-ko-to-en-2780616
|
sohyun416
|
translation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-02T02:46:58Z |
2023-06-02T03:01:18+00:00
| 15 | 0 |
---
tags:
- translation
- generated_from_trainer
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en-2780616
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en-2780616
This model is a fine-tuned version of [QuoQA-NLP/KE-T5-Ko2En-Base](https://huggingface.co/QuoQA-NLP/KE-T5-Ko2En-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal reproduction sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
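Since the card includes no training code, here is a minimal sketch of how these hyperparameters would map onto a `Seq2SeqTrainer` setup; the tiny parallel corpus and the tokenization details are assumptions, as the actual training data is unknown:
```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM, AutoTokenizer,
    DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments,
)

base = "QuoQA-NLP/KE-T5-Ko2En-Base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Illustrative stand-in for the (unknown) ko->en training corpus
pairs = Dataset.from_dict({
    "ko": ["안녕하세요.", "감사합니다."],
    "en": ["Hello.", "Thank you."],
})

def preprocess(batch):
    enc = tokenizer(batch["ko"], truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=batch["en"], truncation=True, max_length=128)["input_ids"]
    return enc

train_ds = pairs.map(preprocess, batched=True, remove_columns=["ko", "en"])

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ko-en-finetuned-ko-to-en-2780616",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```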
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en-2780616
This model is a fine-tuned version of [QuoQA-NLP/KE-T5-Ko2En-Base](https://huggingface.co/QuoQA-NLP/KE-T5-Ko2En-Base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
{"tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "opus-mt-ko-en-finetuned-ko-to-en-2780616", "results": []}]}
|
task
|
[
"TRANSLATION"
] | 45,133 |
TransQuest/microtransquest-de_en-pharmaceutical-smt
|
TransQuest
|
token-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"Quality Estimation",
"microtransquest",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05Z |
2021-06-04T08:18:20+00:00
| 135 | 0 |
---
language: de-en
license: apache-2.0
tags:
- Quality Estimation
- microtransquest
---
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can select the best translation when several translation engines are available, or inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level, and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-de_en-pharmaceutical-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
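The snippet above covers the word-level model; sentence-level prediction with the companion MonoTransQuest architecture follows the same pattern. A sketch, assuming one of the released sentence-level checkpoints (substitute the pair that matches your data):
```python
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
import torch

model = MonoTransQuestModel(
    "xlmroberta",
    "TransQuest/monotransquest-da-multilingual",  # assumed checkpoint; pick one for your language pair
    num_labels=1,
    use_cuda=torch.cuda.is_available(),
)
predictions, raw_outputs = model.predict(
    [["Das Hotel liegt direkt am Strand.", "The hotel is located right on the beach."]]
)
print(predictions)  # one quality score per source-target pair
```
For direct-assessment models the output is a single regression score per pair, rather than the OK/BAD tags produced by MicroTransQuest.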
## Documentation
For more details, follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence-level and word-level quality estimation
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
| null |
Non_BioNLP
|
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses: they can select the best translation when several translation engines are available, or inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level, and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-de_en-pharmaceutical-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
## Documentation
For more details, follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs covering both sentence-level and word-level quality estimation
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
{"language": "de-en", "license": "apache-2.0", "tags": ["Quality Estimation", "microtransquest"]}
|
task
|
[
"TRANSLATION"
] | 45,134 |
dcssdc/finetuning-emotion-model
|
dcssdc
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-24T17:22:05Z |
2023-12-15T21:31:13+00:00
| 94 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-emotion-model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.9225
name: Accuracy
- type: f1
value: 0.9223884493145167
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.9225
- F1: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
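A minimal usage sketch (the checkpoint name is this repository's ID; the six emotion labels come from the `emotion` dataset the model was fine-tuned on):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="dcssdc/finetuning-emotion-model")
# The emotion dataset covers six labels: sadness, joy, love, anger, fear, surprise
print(classifier("I can't wait to see my friends this weekend!"))
```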
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
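As a rough sketch, the settings above map onto 🤗 `TrainingArguments` as follows (only the values come from this card; the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-emotion-model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```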
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3250 | 0.9055 | 0.9042 |
| 0.5403 | 2.0 | 500 | 0.2209 | 0.9225 | 0.9224 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2209
- Accuracy: 0.9225
- F1: 0.9224
## Model description
More information needed
## Intended uses & limitations
More information needed
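A minimal usage sketch (the checkpoint name is this repository's ID; the six emotion labels come from the `emotion` dataset the model was fine-tuned on):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="dcssdc/finetuning-emotion-model")
# The emotion dataset covers six labels: sadness, joy, love, anger, fear, surprise
print(classifier("I can't wait to see my friends this weekend!"))
```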
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
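As a rough sketch, the settings above map onto 🤗 `TrainingArguments` as follows (only the values come from this card; the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-emotion-model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```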
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3250 | 0.9055 | 0.9042 |
| 0.5403 | 2.0 | 500 | 0.2209 | 0.9225 | 0.9224 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-emotion-model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9225, "name": "Accuracy"}, {"type": "f1", "value": 0.9223884493145167, "name": "F1"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,135 |
MBZUAI/AIN
|
MBZUAI
|
image-text-to-text
|
[
"safetensors",
"qwen2_vl",
"LMM",
"Arabic",
"OCR",
"image-text-to-text",
"conversational",
"en",
"ar",
"arxiv:2502.00094",
"license:mit",
"region:us"
] | 2025-01-07T20:30:17Z |
2025-03-13T11:30:39+00:00
| 370 | 5 |
---
base_model:
- qwen2-VL-7B
language:
- en
- ar
license: mit
pipeline_tag: image-text-to-text
tags:
- LMM
- Arabic
- OCR
---
<div style="display: flex; align-items: center;">
<img src="assets_hf/AIN.png" width="10%" alt="logo" style="margin-right: 10px;" />
  <h1 style="margin: 0; font-size: 28px;">AIN: The Arabic INclusive Large Multimodal Model</h1>
</div>
[Ahmed Heakl](https://huggingface.co/ahmedheakl) <sup> * </sup>
[Sara Ghaboura](https://huggingface.co/SLMLAH) <sup> * </sup>
[Omkar Thawakar](https://omkarthawakar.github.io)
[Fahad Shahbaz Khan](https://scholar.google.com/citations?hl=en&user=zvaeYnUAAAAJ)
[Hisham Cholakkal](https://scholar.google.com/citations?hl=en&user=bZ3YBRcAAAAJ)
[Rao M. Anwer](https://scholar.google.com/citations?hl=en&user=_KlvMVoAAAAJ)
[Salman Khan](https://scholar.google.com/citations?hl=en&user=M59O9lkAAAAJ)
<br>
<em> <sup> *Equal Contribution </sup> </em>
<br>
#### **Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE**
[](https://arxiv.org/abs/2502.00094)
[](https://mbzuai-oryx.github.io/AIN/)
[](https://github.com/mbzuai-oryx/AIN)
[](https://github.com/mbzuai-oryx/AIN/issues)
[](https://github.com/mbzuai-oryx/AIN/stargazers)
[](https://github.com/mbzuai-oryx/AIN/blob/main/LICENSE)
---
<div class="abstract-container">
<h2>Abstract</h2>
<div class="abstract-content">
<p>
Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often narrowly focusing on a few specific aspects of the language and visual understanding. To bridge this gap, we introduce <b><em>AIN - the Arabic Inclusive Multimodal Model-</em></b> designed to excel across diverse domains.
AIN is an English-Arabic <b>bilingual LMM</b> designed to excel in English and Arabic, leveraging carefully constructed <b>3.6 million</b> high-quality Arabic-English multimodal data samples. AIN demonstrates state-of-the-art Arabic performance, while also possessing strong English-language visual capabilities.
</p>
</div>
</div>
## 🌟 Key Features
- The **first Arabic-centric inclusive Large Multimodal Model (LMM)** trained on **3.6M samples**.
- Includes **35% authentic Arabic data** within its Arabic data subset.
- Achieves **superior performance compared to closed-source models** (e.g., GPT-4o) **and open-source models** (e.g., Qwen2-VL-7B) across tasks such as OCR and specialized domains.
- Demonstrates **robust bilingual capabilities** (Arabic/English), **validated** through **comprehensive testing** and **human evaluation** across 17 Arab countries.
- Exhibits **advanced cultural understanding** and domain expertise in fields such as **medical imaging**, **agriculture**, and **scientific visualization**.
<p align="center">
<img src="assets_hf/intro_bar.png" width="70%" alt="intro_bar" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 1.</b> Comparative performance of AIN-7B against other models across key domains, including OCR & Document Understanding, Remote Sensing, Agricultural Understanding, and overall performance across all domains. </em>
</h6>
</p>
<p align="center" >
<img src="assets_hf/radar_chart.png" width="52%" alt="radar_chart" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 2.</b> A comprehensive performance analysis of AIN-7B across CAMEL-Bench domains, comparing it with prominent closed-source models as well as open-source counterparts. <strong>OCR:</strong> "OCR & Document Understanding", <strong>Video:</strong> "General Video & Multi-Image Understanding", <strong>RS:</strong> "Remote Sensing Understanding", <strong>CDT:</strong> "Chart, Diagram & Table Understanding", <strong>Agro.:</strong> "Agricultural Image Understanding", <strong>Cultural:</strong> "Cultural-Specific Understanding", <strong>Medical:</strong> "Medical Image Understanding".
</em>
</h6>
---
## ⚖️ Quick Start
Please install the Qwen vision utilities toolkit, which helps handle various types of visual input, including base64-encoded images, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"MBZUAI/AIN", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "MBZUAI/AIN",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("MBZUAI/AIN")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("MBZUAI/AIN", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://huggingface.co/MBZUAI/AIN/resolve/main/assets_hf/demo_image.jpeg",
},
{"type": "text", "text": "يرجى وصف هذه الصورة."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"MBZUAI/AIN", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("MBZUAI/AIN")
# Image
url = "https://huggingface.co/MBZUAI/AIN/resolve/main/assets_hf/demo_image.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image in Arabic."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing an image list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"MBZUAI/AIN", min_pixels=min_pixels, max_pixels=max_pixels
)
```
In addition, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
---
## ⚖️ Quantitative Evaluation and Results
AIN demonstrates state-of-the-art performance across diverse domains, surpassing both open- and closed-source models. Notably, it achieves an aggregate performance score of 63.77%, with significant gains in OCR, remote sensing, and agricultural image understanding.
<div align="center" >
<table>
<caption>
<h6>
<strong>Table 1. Performance comparison of AIN and different closed- and open-source LMMs across CAMEL-Bench domains.</strong>
<br> <em>Best performance is marked with 🥇; second-best is 🥈.</em>
<strong>OCR</strong>: "OCR & Document Understanding",
<strong>Video</strong>: "General Video & Multi-Image Understanding",
<strong>RS</strong>: "Remote Sensing Understanding",
<strong>CDT</strong>: "Chart, Diagram & Table Understanding",
<strong>Agro.</strong>: "Agricultural Image Understanding",
<strong>Cult.</strong>: "Cultural-Specific Understanding",
<strong>Med.</strong>: "Medical Image Understanding".
</h6>
</caption>
<thead>
<tr style="background-color: #e0e0e0;">
<th>Models</th>
<th>VQA</th>
<th>OCR</th>
<th>Video</th>
<th>RS</th>
<th>CDT</th>
<th>Agro.</th>
<th>Cult.</th>
<th>Med.</th>
<th style="background-color: #d0d0d0;">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>GPT-4o</td>
<td>🥈55.15</td>
<td>🥈54.98</td>
<td>🥇69.65</td>
<td>🥈27.36</td>
<td>🥈62.35</td>
<td>🥈80.75</td>
<td>🥇80.86</td>
<td>🥇49.91</td>
<td style="background-color: #d0d0d0;">🥈60.13</td>
</tr>
<tr>
<td>GPT-4o-mini</td>
<td>48.83</td>
<td>39.38</td>
<td>🥈66.28</td>
<td>16.93</td>
<td>56.37</td>
<td>78.80</td>
<td>65.92</td>
<td>🥈47.37</td>
<td style="background-color: #d0d0d0;">52.49</td>
</tr>
<tr>
<td>Gemini-1.5-Pro</td>
<td>46.68</td>
<td>28.68</td>
<td>42.95</td>
<td>17.07</td>
<td>47.06</td>
<td>72.14</td>
<td>56.24</td>
<td>33.78</td>
<td style="background-color: #d0d0d0;">52.38</td>
</tr>
<tr>
<td>Gemini-1.5-flash</td>
<td>45.59</td>
<td>27.58</td>
<td>53.31</td>
<td>14.95</td>
<td>48.26</td>
<td>76.07</td>
<td>46.54</td>
<td>42.87</td>
<td style="background-color: #d0d0d0;">44.40</td>
</tr>
<tr>
<td>InternVL-8B </td>
<td>30.41 </td>
<td>15.91 </td>
<td>51.42 </td>
<td>5.36 </td>
<td>30.27 </td>
<td>44.47 </td>
<td>20.88 </td>
<td>29.48 </td>
<td style="background-color: #d0d0d0;">28.52 </td>
</tr>
<tr>
<td>InternVL2.5-1B </td>
<td>27.22 </td>
<td>19.45 </td>
<td>38.20 </td>
<td>3.39 </td>
<td>30.75 </td>
<td>39.53 </td>
<td>35.68 </td>
<td>21.27 </td>
<td style="background-color: #d0d0d0;">26.94 </td>
</tr>
<tr>
<td>Qwen-VL-2B </td>
<td>41.02 </td>
<td>22.93 </td>
<td>38.90 </td>
<td>12.56 </td>
<td>27.83 </td>
<td>52.02 </td>
<td>34.28 </td>
<td>29.12 </td>
<td style="background-color: #d0d0d0;">32.33 </td>
</tr>
<tr>
<td>Qwen2-VL-7B </td>
<td>48.76 </td>
<td>42.73 </td>
<td>61.97 </td>
<td>21.30 </td>
<td>54.67 </td>
<td>79.32 </td>
<td>75.96 </td>
<td>35.81 </td>
<td style="background-color: #d0d0d0;">52.57 </td>
</tr>
<tr>
<td>AIN-7B <em>(ours)</em> </td>
<td>🥇56.78 </td>
<td>🥇72.35 </td>
<td>64.09 </td>
<td>🥇45.92 </td>
<td>🥇64.10 </td>
<td>🥇85.05 </td>
<td>🥈78.09 </td>
<td>43.77 </td>
<td style="background-color: #d0d0d0;">🏆63.77 </td>
</tr>
</tbody>
</table>
</div>
---
## 🎯 Qualitative Evaluation
The qualitative evaluation showcases AIN's advanced capabilities in handling diverse, complex tasks, including OCR, medical imaging, remote sensing, and cultural-specific understanding, with remarkable precision and contextual relevance. Unlike GPT-4o and LLaVA, AIN demonstrates superior performance in identifying intricate details and maintaining accuracy across varied query formats and multi-domain challenges.
<div align="center">
<img src="assets_hf/qualitative.png" width="75%" alt="qualitative" />
<h6>
<em> <b>Figure 3.</b> Qualitative examples showcasing AIN-7B’s capabilities across various domains, including general VQA, OCR & Document Understanding, Remote Sensing, Medical Imaging, Agricultural Understanding, and Cultural-Specific tasks. </em>
</h6>
</div>
---
## 🧐 Data Verification and Toxicity Filtering
A multi-step verification pipeline was implemented to ensure high-quality translations and safe visual data. Translation accuracy was assessed through human evaluation, where native Arabic speakers rated outputs against reference translations, and semantic similarity checks were conducted using **LaBSE**. Additionally, translated samples were reverse-translated and validated using **BLEU, METEOR, and ROUGE scores** to measure correctness, correlation, and overlap. For visual data, toxicity filtering was applied using **LLavaGuard’s safety policies and GPT-4o**, identifying and removing unsafe content related to violence, substance abuse, and harmful imagery, ensuring compliance with ethical AI standards.
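As an illustration of the semantic-similarity check described above, here is a minimal sketch using LaBSE via `sentence-transformers` (our reconstruction for illustration, not the authors' exact pipeline):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")
# Embed the English source and its Arabic translation in LaBSE's shared multilingual space
embeddings = model.encode(["The cat sat on the mat.", "جلست القطة على السجادة."], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"semantic similarity: {similarity:.3f}")  # a low score would flag a suspect translation
```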
<p align="center">
<img src="assets_hf/verify_pipeline.png" width="75%" alt="verify" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 4.</b> Data verification and filtering pipeline for textual and visual data, ensuring high-quality training data through semantic similarity checks, translation quality evaluations, and toxicity screening for safety compliance. </em>
</h6>
</p>
<p align="center">
<img src="assets_hf/toxicity.png" width="48%" alt="verify" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 5.</b> Distribution of visual data toxicity filtering results, showing that 95% of the data is classified as safe, while 5% is identified as unsafe due to categories like weapons or substance abuse, violence, and animal cruelty. </em>
</h6>
</p>
---
## 🔒 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 💬 Contact us
For questions or suggestions, feel free to reach out to us on [GitHub Discussions](https://github.com/mbzuai-oryx/AIN/discussions).
---
If you use AIN in your research, please cite our work as follows:
```
@misc{heakl2025ainarabicinclusivelarge,
title={AIN: The Arabic INclusive Large Multimodal Model},
      author={Ahmed Heakl and Sara Ghaboura and Omkar Thawakar and Fahad Shahbaz Khan and Hisham Cholakkal and Rao Muhammad Anwer and Salman Khan},
year={2025},
eprint={2502.00094},
url={https://arxiv.org/abs/2502.00094},
}
```
---
| null |
Non_BioNLP
|
<div style="display: flex; align-items: center;">
<img src="assets_hf/AIN.png" width="10%" alt="logo" style="margin-right: 10px;" />
  <h1 style="margin: 0; font-size: 28px;">AIN: The Arabic INclusive Large Multimodal Model</h1>
</div>
[Ahmed Heakl](https://huggingface.co/ahmedheakl) <sup> * </sup>
[Sara Ghaboura](https://huggingface.co/SLMLAH) <sup> * </sup>
[Omkar Thawakar](https://omkarthawakar.github.io)
[Fahad Shahbaz Khan](https://scholar.google.com/citations?hl=en&user=zvaeYnUAAAAJ)
[Hisham Cholakkal](https://scholar.google.com/citations?hl=en&user=bZ3YBRcAAAAJ)
[Rao M. Anwer](https://scholar.google.com/citations?hl=en&user=_KlvMVoAAAAJ)
[Salman Khan](https://scholar.google.com/citations?hl=en&user=M59O9lkAAAAJ)
<br>
<em> <sup> *Equal Contribution </sup> </em>
<br>
#### **Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE**
[](https://arxiv.org/abs/2502.00094)
[](https://mbzuai-oryx.github.io/AIN/)
[](https://github.com/mbzuai-oryx/AIN)
[](https://github.com/mbzuai-oryx/AIN/issues)
[](https://github.com/mbzuai-oryx/AIN/stargazers)
[](https://github.com/mbzuai-oryx/AIN/blob/main/LICENSE)
---
<div class="abstract-container">
<h2>Abstract</h2>
<div class="abstract-content">
<p>
Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often narrowly focusing on a few specific aspects of the language and visual understanding. To bridge this gap, we introduce <b><em>AIN - the Arabic Inclusive Multimodal Model-</em></b> designed to excel across diverse domains.
AIN is an English-Arabic <b>bilingual LMM</b> designed to excel in English and Arabic, leveraging carefully constructed <b>3.6 million</b> high-quality Arabic-English multimodal data samples. AIN demonstrates state-of-the-art Arabic performance, while also possessing strong English-language visual capabilities.
</p>
</div>
</div>
## 🌟 Key Features
- The **first Arabic-centric inclusive Large Multimodal Model (LMM)** trained on **3.6M samples**.
- Includes **35% authentic Arabic data** within its Arabic data subset.
- Achieves **superior performance compared to closed-source models** (e.g., GPT-4o) **and open-source models** (e.g., Qwen2-VL-7B) across tasks such as OCR and specialized domains.
- Demonstrates **robust bilingual capabilities** (Arabic/English), **validated** through **comprehensive testing** and **human evaluation** across 17 Arab countries.
- Exhibits **advanced cultural understanding** and domain expertise in fields such as **medical imaging**, **agriculture**, and **scientific visualization**.
<p align="center">
<img src="assets_hf/intro_bar.png" width="70%" alt="intro_bar" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 1.</b> Comparative performance of AIN-7B against other models across key domains, including OCR & Document Understanding, Remote Sensing, Agricultural Understanding, and overall performance across all domains. </em>
</h6>
</p>
<p align="center" >
<img src="assets_hf/radar_chart.png" width="52%" alt="radar_chart" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 2.</b> A comprehensive performance analysis of AIN-7B across CAMEL-Bench domains, comparing it with prominent closed-source models as well as open-source counterparts. <strong>OCR:</strong> "OCR & Document Understanding", <strong>Video:</strong> "General Video & Multi-Image Understanding", <strong>RS:</strong> "Remote Sensing Understanding", <strong>CDT:</strong> "Chart, Diagram & Table Understanding", <strong>Agro.:</strong> "Agricultural Image Understanding", <strong>Cultural:</strong> "Cultural-Specific Understanding", <strong>Medical:</strong> "Medical Image Understanding".
</em>
</h6>
---
## ⚖️ Quick Start
Please install the Qwen vision utilities toolkit, which helps handle various types of visual input, including base64-encoded images, URLs, and interleaved images and videos. You can install it using the following command:
```bash
pip install qwen-vl-utils
```
Here is a code snippet showing how to use the chat model with `transformers` and `qwen_vl_utils`:
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"MBZUAI/AIN", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
# "MBZUAI/AIN",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# default processor
processor = AutoProcessor.from_pretrained("MBZUAI/AIN")
# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("MBZUAI/AIN", min_pixels=min_pixels, max_pixels=max_pixels)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://huggingface.co/MBZUAI/AIN/resolve/main/assets_hf/demo_image.jpeg",
},
{"type": "text", "text": "يرجى وصف هذه الصورة."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
<details>
<summary>Without qwen_vl_utils</summary>
```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
"MBZUAI/AIN", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("MBZUAI/AIN")
# Image
url = "https://huggingface.co/MBZUAI/AIN/resolve/main/assets_hf/demo_image.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
conversation = [
{
"role": "user",
"content": [
{
"type": "image",
},
{"type": "text", "text": "Describe this image in Arabic."},
],
}
]
# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
inputs = processor(
text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
</details>
<details>
<summary>Multi image inference</summary>
```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "Identify the similarities between these images."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Video inference</summary>
```python
# Messages containing an image list as a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": [
"file:///path/to/frame1.jpg",
"file:///path/to/frame2.jpg",
"file:///path/to/frame3.jpg",
"file:///path/to/frame4.jpg",
],
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Messages containing a video and a text query
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"video": "file:///path/to/video1.mp4",
"max_pixels": 360 * 420,
"fps": 1.0,
},
{"type": "text", "text": "Describe this video."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
<details>
<summary>Batch inference</summary>
```python
# Sample messages for batch inference
messages1 = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/image1.jpg"},
{"type": "image", "image": "file:///path/to/image2.jpg"},
{"type": "text", "text": "What are the common elements in these pictures?"},
],
}
]
messages2 = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]
# Preparation for batch inference
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=texts,
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```
</details>
### More Usage Tips
For input images, we support local files, base64, and URLs. For videos, we currently only support local files.
```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "file:///path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Image URL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "http://path/to/your/image.jpg"},
{"type": "text", "text": "Describe this image."},
],
}
]
## Base64 encoded image
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": "data:image;base64,/9j/..."},
{"type": "text", "text": "Describe this image."},
],
}
]
```
#### Image Resolution for performance boost
The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
"MBZUAI/AIN", min_pixels=min_pixels, max_pixels=max_pixels
)
```
In addition, we provide two methods for fine-grained control over the image size input to the model:
1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
```python
# resized_height and resized_width
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"resized_height": 280,
"resized_width": 420,
},
{"type": "text", "text": "Describe this image."},
],
}
]
# min_pixels and max_pixels
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "file:///path/to/your/image.jpg",
"min_pixels": 50176,
"max_pixels": 50176,
},
{"type": "text", "text": "Describe this image."},
],
}
]
```
---
## ⚖️ Quantitative Evaluation and Results
AIN demonstrates state-of-the-art performance across diverse domains, surpassing both open- and closed-source models. Notably, it achieves an aggregate performance score of 63.77%, with significant gains in OCR, remote sensing, and agricultural image understanding.
<div align="center" >
<table>
<caption>
<h6>
<strong>Table 1. Performance comparison of AIN and different closed- and open-source LMMs across CAMEL-Bench domains.</strong>
<br> <em>Best performance is marked with 🥇; second-best is 🥈.</em>
<strong>OCR</strong>: "OCR & Document Understanding",
<strong>Video</strong>: "General Video & Multi-Image Understanding",
<strong>RS</strong>: "Remote Sensing Understanding",
<strong>CDT</strong>: "Chart, Diagram & Table Understanding",
<strong>Agro.</strong>: "Agricultural Image Understanding",
<strong>Cult.</strong>: "Cultural-Specific Understanding",
<strong>Med.</strong>: "Medical Image Understanding".
</h6>
</caption>
<thead>
<tr style="background-color: #e0e0e0;">
<th>Models</th>
<th>VQA</th>
<th>OCR</th>
<th>Video</th>
<th>RS</th>
<th>CDT</th>
<th>Agro.</th>
<th>Cult.</th>
<th>Med.</th>
<th style="background-color: #d0d0d0;">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>GPT-4o</td>
<td>🥈55.15</td>
<td>🥈54.98</td>
<td>🥇69.65</td>
<td>🥈27.36</td>
<td>🥈62.35</td>
<td>🥈80.75</td>
<td>🥇80.86</td>
<td>🥇49.91</td>
<td style="background-color: #d0d0d0;">🥈60.13</td>
</tr>
<tr>
<td>GPT-4o-mini</td>
<td>48.83</td>
<td>39.38</td>
<td>🥈66.28</td>
<td>16.93</td>
<td>56.37</td>
<td>78.80</td>
<td>65.92</td>
<td>🥈47.37</td>
<td style="background-color: #d0d0d0;">52.49</td>
</tr>
<tr>
<td>Gemini-1.5-Pro</td>
<td>46.68</td>
<td>28.68</td>
<td>42.95</td>
<td>17.07</td>
<td>47.06</td>
<td>72.14</td>
<td>56.24</td>
<td>33.78</td>
<td style="background-color: #d0d0d0;">52.38</td>
</tr>
<tr>
<td>Gemini-1.5-flash</td>
<td>45.59</td>
<td>27.58</td>
<td>53.31</td>
<td>14.95</td>
<td>48.26</td>
<td>76.07</td>
<td>46.54</td>
<td>42.87</td>
<td style="background-color: #d0d0d0;">44.40</td>
</tr>
<tr>
<td>InternVL-8B </td>
<td>30.41 </td>
<td>15.91 </td>
<td>51.42 </td>
<td>5.36 </td>
<td>30.27 </td>
<td>44.47 </td>
<td>20.88 </td>
<td>29.48 </td>
<td style="background-color: #d0d0d0;">28.52 </td>
</tr>
<tr>
<td>InternVL2.5-1B </td>
<td>27.22 </td>
<td>19.45 </td>
<td>38.20 </td>
<td>3.39 </td>
<td>30.75 </td>
<td>39.53 </td>
<td>35.68 </td>
<td>21.27 </td>
<td style="background-color: #d0d0d0;">26.94 </td>
</tr>
<tr>
<td>Qwen-VL-2B </td>
<td>41.02 </td>
<td>22.93 </td>
<td>38.90 </td>
<td>12.56 </td>
<td>27.83 </td>
<td>52.02 </td>
<td>34.28 </td>
<td>29.12 </td>
<td style="background-color: #d0d0d0;">32.33 </td>
</tr>
<tr>
<td>Qwen2-VL-7B </td>
<td>48.76 </td>
<td>42.73 </td>
<td>61.97 </td>
<td>21.30 </td>
<td>54.67 </td>
<td>79.32 </td>
<td>75.96 </td>
<td>35.81 </td>
<td style="background-color: #d0d0d0;">52.57 </td>
</tr>
<tr>
<td>AIN-7B <em>(ours)</em> </td>
<td>🥇56.78 </td>
<td>🥇72.35 </td>
<td>64.09 </td>
<td>🥇45.92 </td>
<td>🥇64.10 </td>
<td>🥇85.05 </td>
<td>🥈78.09 </td>
<td>43.77 </td>
<td style="background-color: #d0d0d0;">🏆63.77 </td>
</tr>
</tbody>
</table>
</div>
---
## 🎯 Qualitative Evaluation
The qualitative evaluation showcases AIN's advanced capabilities in handling diverse, complex tasks, including OCR, medical imaging, remote sensing, and cultural-specific understanding, with remarkable precision and contextual relevance. Unlike GPT-4o and LLaVA, AIN demonstrates superior performance in identifying intricate details and maintaining accuracy across varied query formats and multi-domain challenges.
<div align="center">
<img src="assets_hf/qualitative.png" width="75%" alt="qualitative" />
<h6>
<em> <b>Figure 3.</b> Qualitative examples showcasing AIN-7B’s capabilities across various domains, including general VQA, OCR & Document Understanding, Remote Sensing, Medical Imaging, Agricultural Understanding, and Cultural-Specific tasks. </em>
</h6>
</div>
---
## 🧐 Data Verification and Toxicity Filtering
A multi-step verification pipeline was implemented to ensure high-quality translations and safe visual data. Translation accuracy was assessed through human evaluation, where native Arabic speakers rated outputs against reference translations, and semantic similarity checks were conducted using **LaBSE**. Additionally, translated samples were reverse-translated and validated using **BLEU, METEOR, and ROUGE scores** to measure correctness, correlation, and overlap. For visual data, toxicity filtering was applied using **LLavaGuard’s safety policies and GPT-4o**, identifying and removing unsafe content related to violence, substance abuse, and harmful imagery, ensuring compliance with ethical AI standards.
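As an illustration of the semantic-similarity check described above, here is a minimal sketch using LaBSE via `sentence-transformers` (our reconstruction for illustration, not the authors' exact pipeline):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")
# Embed the English source and its Arabic translation in LaBSE's shared multilingual space
embeddings = model.encode(["The cat sat on the mat.", "جلست القطة على السجادة."], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"semantic similarity: {similarity:.3f}")  # a low score would flag a suspect translation
```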
<p align="center">
<img src="assets_hf/verify_pipeline.png" width="75%" alt="verify" style="margin-right: 2px";/>
<h6>
<em> <b>Figure 4.</b> Data verification and filtering pipeline for textual and visual data, ensuring high-quality training data through semantic similarity checks, translation quality evaluations, and toxicity screening for safety compliance. </em>
</h6>
</p>
<p align="center">
<img src="assets_hf/toxicity.png" width="48%" alt="verify" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 5.</b> Distribution of visual data toxicity filtering results, showing that 95% of the data is classified as safe, while 5% is identified as unsafe due to categories like weapons or substance abuse, violence, and animal cruelty. </em>
</h6>
</p>
---
## 🔒 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 💬 Contact us
For questions or suggestions, feel free to reach out to us on [GitHub Discussions](https://github.com/mbzuai-oryx/AIN/discussions).
---
If you use AIN in your research, please cite our work as follows:
```
@misc{heakl2025ainarabicinclusivelarge,
title={AIN: The Arabic INclusive Large Multimodal Model},
      author={Ahmed Heakl and Sara Ghaboura and Omkar Thawakar and Fahad Shahbaz Khan and Hisham Cholakkal and Rao Muhammad Anwer and Salman Khan},
year={2025},
eprint={2502.00094},
url={https://arxiv.org/abs/2502.00094},
}
```
---
|
{"base_model": ["qwen2-VL-7B"], "language": ["en", "ar"], "license": "mit", "pipeline_tag": "image-text-to-text", "tags": ["LMM", "Arabic", "OCR"]}
|
task
|
[
"SEMANTIC_SIMILARITY",
"TRANSLATION"
] | 45,136 |
PKU-ONELab/Themis
|
PKU-ONELab
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:PKU-ONELab/NLG-Eval",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-27T11:34:34Z |
2025-02-22T15:23:32+00:00
| 59 | 8 |
---
base_model:
- meta-llama/Meta-Llama-3-8B
datasets:
- PKU-ONELab/NLG-Eval
language:
- en
license: apache-2.0
---
# Themis
Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability
Paper: https://aclanthology.org/2024.emnlp-main.891
Github: https://github.com/PKU-ONELab/Themis
## Introduction
We propose **Themis**, an 8B-parameter large language model (LLM) specifically designed and trained for NLG evaluation with more comprehensive capabilities.
Our Themis can evaluate various NLG tasks, including uncommon ones like question-answering evaluation (**Versatility**), in a reference-free manner (**Independence**). Moreover, it allows for specific and customized evaluation aspects and criteria, including overall quality and more fine-grained aspects (**Flexibility**), and its evaluation contains corresponding analysis and explanation together with the rating (**Interpretability**).
We believe that an ideal evaluator should be convenient to use and possess these characteristics. The comparison between related methods and Themis is shown in the table below.
| Method | Versatility | Independence | Flexibility | Interpretability | Open-source |
| :---------------: | :---------: | :----------: | :---------: | :--------------: | :---------: |
| UniEval | ❌ | ❌ | ✔️ | ❌ | ✔️ |
| G-Eval | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
| X-Eval | ✔️ | ❌ | ✔️ | ❌ | ❌ |
| Prometheus | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Auto-J | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| InstructScore | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
| TIGERScore | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| **Themis (Ours)** | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
## Performance
We conduct experiments on several common NLG evaluation tasks and datasets to compare our Themis with other methods, including SummEval for summarization, Topical-Chat for dialogue response generation, SFRES & SFHOT for data-to-text, QAGS for factuality, MANS for story generation, and WMT23 zh-en for machine translation. Experimental results show that our Themis achieves better overall evaluation performance than other evaluation models, including GPT-4.
| Method | SummEval | Topical-Chat | SFHOT & SFRES | QAGS | MANS | WMT23 | Average Spearman |
| -------------------- | :-------: | :----------: | :---------: | :-------: | :-------: | :-------: | :------------: |
| BLEU | 0.075 | 0.388 | 0.024 | - | 0.032 | 0.021 | - |
| ROUGE | 0.152 | 0.412 | 0.101 | - | -0.002 | 0.151 | - |
| BARTScore | 0.329 | 0.086 | 0.208 | 0.425 | 0.350 | 0.118 | 0.253 |
| BERTScore | 0.231 | 0.394 | 0.139 | - | 0.285 | 0.219 | - |
| BLEURT | 0.152 | 0.388 | 0.244 | - | 0.138 | 0.263 | - |
| CometKiwi | 0.228 | 0.340 | 0.251 | 0.094 | 0.251 | 0.343 | 0.251 |
| UniEval | 0.474 | 0.577 | 0.282 | - | - | - | - |
| G-Eval (GPT-3.5) | 0.409 | 0.585 | - | 0.461 | - | - | - |
| G-Eval (GPT-4) | 0.523 | 0.588 | - | 0.611 | - | - | - |
| GPT-3.5 Turbo | 0.416 | 0.578 | 0.306 | 0.431 | 0.328 | 0.347 | 0.401 |
| GPT-4 Turbo | 0.511 | **0.746** | 0.320 | 0.637 | 0.473 | **0.437** | 0.521 |
| X-Eval | 0.480 | 0.605 | 0.303 | 0.578 | - | - | - |
| Prometheus-13B | 0.163 | 0.434 | 0.173 | - | 0.007 | 0.129 | - |
| Auto-J-13B | 0.198 | 0.425 | 0.141 | 0.226 | 0.380 | 0.104 | 0.246 |
| TIGERScore-13B | 0.384 | 0.346 | 0.200 | 0.504 | 0.231 | 0.248 | 0.319 |
| InstructScore-7B | 0.258 | 0.241 | 0.247 | - | 0.298 | 0.219 | - |
| **Themis-8B (ours)** | **0.553** | 0.725 | **0.333** | **0.684** | **0.551** | 0.405 | **0.542** |
We further conduct more in-depth analyses, including generalization tests on unseen tasks like the instruction-following evaluation as well as aspect-targeted perturbation tests, and our Themis also exhibits superior evaluation performance. For more experimental results and details, please refer to our paper.
## Requirements and Usage
Please refer to our [github repo](https://github.com/PKU-ONELab/Themis) for more details.
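As a minimal loading sketch with plain 🤗 Transformers (the evaluation prompt format itself is defined in the GitHub repo, so this snippet only covers loading the checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PKU-ONELab/Themis")
model = AutoModelForCausalLM.from_pretrained("PKU-ONELab/Themis", torch_dtype="auto", device_map="auto")
```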
## Citation
```
@inproceedings{hu2024themis,
title={Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability},
author={Hu, Xinyu and Lin, Li and Gao, Mingqi and Yin, Xunjian and Wan, Xiaojun},
booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
pages={15924--15951},
year={2024}
}
```
| null |
Non_BioNLP
|
# Themis
Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability
Paper: https://aclanthology.org/2024.emnlp-main.891
Github: https://github.com/PKU-ONELab/Themis
## Introduction
We propose **Themis**, an 8B-parameter large language model (LLM) specifically designed and trained for NLG evaluation with more comprehensive capabilities.
Our Themis can evaluate various NLG tasks, including uncommon ones like question-answering evaluation (**Versatility**), in a reference-free manner (**Independence**). Moreover, it allows for specific and customized evaluation aspects and criteria, including overall quality and more fine-grained aspects (**Flexibility**), and its evaluation contains corresponding analysis and explanation together with the rating (**Interpretability**).
We believe that an ideal evaluator should be convenient to use and possess these characteristics. The comparison between related methods and Themis is shown in the table below.
| Method | Versatility | Independence | Flexibility | Interpretability | Open-source |
| :---------------: | :---------: | :----------: | :---------: | :--------------: | :---------: |
| UniEval | ❌ | ❌ | ✔️ | ❌ | ✔️ |
| G-Eval | ✔️ | ✔️ | ✔️ | ✔️ | ❌ |
| X-Eval | ✔️ | ❌ | ✔️ | ❌ | ❌ |
| Prometheus | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Auto-J | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| InstructScore | ✔️ | ❌ | ❌ | ✔️ | ✔️ |
| TIGERScore | ✔️ | ✔️ | ❌ | ✔️ | ✔️ |
| **Themis (Ours)** | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
## Performance
We conduct experiments on several common NLG evaluation tasks and datasets to compare our Themis with other methods, including SummEval for summarization, Topical-Chat for dialogue response generation, SFRES & SFHOT for data-to-text, QAGS for factuality, MANS for story generation, and WMT23 zh-en for machine translation. Experimental results show that our Themis achieves better overall evaluation performance than other evaluation models, including GPT-4.
| Method | SummEval | Topical-Chat | SFHOT & SFRES | QAGS | MANS | WMT23 | Average Spearman |
| -------------------- | :-------: | :----------: | :---------: | :-------: | :-------: | :-------: | :------------: |
| BLEU | 0.075 | 0.388 | 0.024 | - | 0.032 | 0.021 | - |
| ROUGE | 0.152 | 0.412 | 0.101 | - | -0.002 | 0.151 | - |
| BARTScore | 0.329 | 0.086 | 0.208 | 0.425 | 0.350 | 0.118 | 0.253 |
| BERTScore | 0.231 | 0.394 | 0.139 | - | 0.285 | 0.219 | - |
| BLEURT | 0.152 | 0.388 | 0.244 | - | 0.138 | 0.263 | - |
| CometKiwi | 0.228 | 0.340 | 0.251 | 0.094 | 0.251 | 0.343 | 0.251 |
| UniEval | 0.474 | 0.577 | 0.282 | - | - | - | - |
| G-Eval (GPT-3.5) | 0.409 | 0.585 | - | 0.461 | - | - | - |
| G-Eval (GPT-4) | 0.523 | 0.588 | - | 0.611 | - | - | - |
| GPT-3.5 Turbo | 0.416 | 0.578 | 0.306 | 0.431 | 0.328 | 0.347 | 0.401 |
| GPT-4 Turbo | 0.511 | **0.746** | 0.320 | 0.637 | 0.473 | **0.437** | 0.521 |
| X-Eval | 0.480 | 0.605 | 0.303 | 0.578 | - | - | - |
| Prometheus-13B | 0.163 | 0.434 | 0.173 | - | 0.007 | 0.129 | - |
| Auto-J-13B | 0.198 | 0.425 | 0.141 | 0.226 | 0.380 | 0.104 | 0.246 |
| TIGERScore-13B | 0.384 | 0.346 | 0.200 | 0.504 | 0.231 | 0.248 | 0.319 |
| InstructScore-7B | 0.258 | 0.241 | 0.247 | - | 0.298 | 0.219 | - |
| **Themis-8B (ours)** | **0.553** | 0.725 | **0.333** | **0.684** | **0.551** | 0.405 | **0.542** |
We further conduct more in-depth analyses, including generalization tests on unseen tasks like the instruction-following evaluation as well as aspect-targeted perturbation tests, and our Themis also exhibits superior evaluation performance. For more experimental results and details, please refer to our paper.
## Requirements and Usage
Please refer to our [github repo](https://github.com/PKU-ONELab/Themis) for more details.
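As a minimal loading sketch with plain 🤗 Transformers (the evaluation prompt format itself is defined in the GitHub repo, so this snippet only covers loading the checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PKU-ONELab/Themis")
model = AutoModelForCausalLM.from_pretrained("PKU-ONELab/Themis", torch_dtype="auto", device_map="auto")
```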
## Citation
```
@inproceedings{hu2024themis,
title={Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability},
author={Hu, Xinyu and Lin, Li and Gao, Mingqi and Yin, Xunjian and Wan, Xiaojun},
booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
pages={15924--15951},
year={2024}
}
```
|
{"base_model": ["meta-llama/Meta-Llama-3-8B"], "datasets": ["PKU-ONELab/NLG-Eval"], "language": ["en"], "license": "apache-2.0"}
|
task
|
[
"TRANSLATION",
"SUMMARIZATION"
] | 45,137 |
unsloth/gemma-3-27b-pt-unsloth-bnb-4bit
|
unsloth
|
image-text-to-text
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-27b-pt",
"base_model:quantized:google/gemma-3-27b-pt",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 2025-03-13T13:39:39Z |
2025-04-11T05:05:18+00:00
| 610 | 0 |
---
base_model: google/gemma-3-27b-pt
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's <a href="https://unsloth.ai/blog/deepseekr1-dynamic">Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1>
</div>
- Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
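For orientation, here is the general shape of an Unsloth LoRA fine-tune in code. This is a minimal sketch, assuming the `unsloth` package's `FastLanguageModel` API with Gemma 3 support; the Colab notebooks linked above are the maintained, tested recipes.
```python
# Illustrative LoRA fine-tuning sketch (assumption: unsloth's FastLanguageModel API).
# The linked notebooks are the canonical versions; treat this as a rough outline.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-27b-pt-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style training on the pre-quantized 4-bit base
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank; higher = more capacity, more memory
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# Train with your preferred trainer (e.g. trl's SFTTrainer) on your dataset,
# then export the merged model to GGUF / Ollama / llama.cpp or push to the 🤗 Hub.
```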
<br>
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state-of-the-art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens (see the usage sketch below)
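To make the input/output contract concrete, the snippet below runs a plain text completion. It is a minimal sketch, assuming a recent 🤗 Transformers release with Gemma 3 support (`Gemma3ForConditionalGeneration`, `AutoProcessor`) and `bitsandbytes` installed for the 4-bit weights; the prompt string is just an example.
```python
# Minimal text-completion sketch (assumes transformers with Gemma 3 support
# and bitsandbytes for the 4-bit checkpoint).
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "unsloth/gemma-3-27b-pt-unsloth-bnb-4bit"
model = Gemma3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# The pre-trained (pt) variant does free-form completion, not chat-style
# instruction following; images would be passed via processor(images=..., text=...).
inputs = processor(text="The Eiffel Tower is located in", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(out[0], skip_special_tokens=True))
```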
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model
with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model
with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies, including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies, including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development-level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny, input data pre-processing described
and posterior evaluations reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
| null |
Non_BioNLP
|
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Unsloth's <a href="https://unsloth.ai/blog/deepseekr1-dynamic">Dynamic Quants</a> are selectively quantized, greatly improving accuracy over standard 4-bit.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ Fine-tune Gemma 3 with Unsloth!</h1>
</div>
- Fine-tune Gemma 3 (12B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Gemma 3 support: [unsloth.ai/blog/gemma3](https://unsloth.ai/blog/gemma3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Export your fine-tuned model to GGUF, Ollama, llama.cpp or 🤗HF (a minimal fine-tuning sketch follows the table below).
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Gemma 3 (12B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
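For orientation, here is the general shape of an Unsloth LoRA fine-tune in code. This is a minimal sketch, assuming the `unsloth` package's `FastLanguageModel` API with Gemma 3 support; the Colab notebooks linked above are the maintained, tested recipes.
```python
# Illustrative LoRA fine-tuning sketch (assumption: unsloth's FastLanguageModel API).
# The linked notebooks are the canonical versions; treat this as a rough outline.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-27b-pt-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style training on the pre-quantized 4-bit base
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # LoRA rank; higher = more capacity, more memory
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# Train with your preferred trainer (e.g. trl's SFTTrainer) on your dataset,
# then export the merged model to GGUF / Ollama / llama.cpp or push to the 🤗 Hub.
```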
<br>
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state-of-the-art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens (see the usage sketch below)
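To make the input/output contract concrete, the snippet below runs a plain text completion. It is a minimal sketch, assuming a recent 🤗 Transformers release with Gemma 3 support (`Gemma3ForConditionalGeneration`, `AutoProcessor`) and `bitsandbytes` installed for the 4-bit weights; the prompt string is just an example.
```python
# Minimal text-completion sketch (assumes transformers with Gemma 3 support
# and bitsandbytes for the 4-bit checkpoint).
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "unsloth/gemma-3-27b-pt-unsloth-bnb-4bit"
model = Gemma3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# The pre-trained (pt) variant does free-form completion, not chat-style
# instruction following; images would be passed via processor(images=..., text=...).
inputs = processor(text="The Eiffel Tower is located in", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(out[0], skip_special_tokens=True))
```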
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model
with 12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model
with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
covering safety policies, including harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies, including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development-level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny, input data pre-processing described
and posterior evaluations reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
|
{"base_model": "google/gemma-3-27b-pt", "language": ["en"], "library_name": "transformers", "license": "gemma", "tags": ["unsloth", "transformers", "gemma3", "gemma", "google"]}
|
task
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | 45,138 |
DGSMsRzJ6xC2JthtHG9W/nomic-v2-tuned-1
|
DGSMsRzJ6xC2JthtHG9W
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"nomic_bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:13186",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:nomic-ai/nomic-embed-text-v2-moe",
"base_model:finetune:nomic-ai/nomic-embed-text-v2-moe",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-12T06:39:32Z |
2025-03-12T06:41:03+00:00
| 38 | 1 |
---
base_model: nomic-ai/nomic-embed-text-v2-moe
language:
- en
library_name: sentence-transformers
license: mit
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:13186
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Гражданин Иванов взял в займ у гражданина Петрова 50 000 рублей
без указания процентов в договоре. Через год Иванов вернул долг. Какие проценты
должен был выплатить Иванов Петрову?
sentences:
- <p>1. Заказчик, получивший сообщение подрядчика о готовности к сдаче результата
выполненных по договору строительного подряда работ либо, если это предусмотрено
договором, выполненного этапа работ, обязан немедленно приступить к его приемке.</p><p>2.
Заказчик организует и осуществляет приемку результата работ за свой счет, если
иное не предусмотрено договором строительного подряда.</p><p>В предусмотренных
законом или иными правовыми актами случаях в приемке результата работ должны участвовать
представители государственных органов и органов местного самоуправления.</p><p>3.
Заказчик, предварительно принявший результат отдельного этапа работ, несет риск
последствий гибели или повреждения результата работ, которые произошли не по вине
подрядчика.</p><p>4. Сдача результата работ подрядчиком и приемка его заказчиком
оформляются актом, подписанным обеими сторонами. При отказе одной из сторон от
подписания акта в нем делается отметка об этом и акт подписывается другой стороной.</p><p>Односторонний
акт сдачи или приемки результата работ может быть признан судом недействительным
лишь в случае, если мотивы отказа от подписания акта признаны им обоснованными.</p><p>5.
В случаях, когда это предусмотрено законом или договором строительного подряда
либо вытекает из характера работ, выполняемых по договору, приемке результата
работ должны предшествовать предварительные испытания. В этих случаях приемка
может осуществляться только при положительном результате предварительных испытаний.</p><p>6.
Заказчик вправе отказаться от приемки результата работ в случае обнаружения недостатков,
которые исключают возможность его использования для указанной в договоре строительного
подряда цели и не могут быть устранены подрядчиком или заказчиком.</p>
- <p>Перевозчик обязан доставить груз, пассажира или багаж в пункт назначения в
сроки, определенные в порядке, предусмотренном транспортными уставами, кодексами
и иными законами, а при отсутствии таких сроков в разумный срок. (В редакции Федерального
закона <a href="102456097">от 29.12.2017 № 442-ФЗ</a>)</p>
- <p>1. Если иное не предусмотрено законом или договором займа, займодавец имеет
право на получение с заемщика процентов за пользование займом в размерах и в порядке,
определенных договором. При отсутствии в договоре условия о размере процентов
за пользование займом их размер определяется ключевой ставкой Банка России, действовавшей
в соответствующие периоды.</p><p>2. Размер процентов за пользование займом может
быть установлен в договоре с применением ставки в процентах годовых в виде фиксированной
величины, с применением ставки в процентах годовых, величина которой может изменяться
в зависимости от предусмотренных договором условий, в том числе в зависимости
от изменения переменной величины, либо иным путем, позволяющим определить надлежащий
размер процентов на момент их уплаты.</p><p>3. При отсутствии иного соглашения
проценты за пользование займом выплачиваются ежемесячно до дня возврата займа
включительно.</p><p>4. Договор займа предполагается беспроцентным, если в нем
прямо не предусмотрено иное, в случаях, когда:</p><p>договор заключен между гражданами,
в том числе индивидуальными предпринимателями, на сумму, не превышающую ста тысяч
рублей;</p><p>по договору заемщику передаются не деньги, а другие вещи, определенные
родовыми признаками.</p><p>5. Размер процентов за пользование займом по договору
займа, заключенному между гражданами или между юридическим лицом, не осуществляющим
профессиональной деятельности по предоставлению потребительских займов, и заемщиком-гражданином,
в два и более раза превышающий обычно взимаемые в подобных случаях проценты и
поэтому являющийся чрезмерно обременительным для должника (ростовщические проценты),
может быть уменьшен судом до размера процентов, обычно взимаемых при сравнимых
обстоятельствах.</p>
- source_sentence: Может ли собственник, владеющий 10% доли в общем имуществе многоквартирного
дома, отказаться от участия в оплате капитального ремонта крыши, если он считает,
что ремонт не нужен? Укажите, при каких условиях это возможно.
sentences:
- <p>При передаче в доверительное управление ценных бумаг может быть предусмотрено
объединение ценных бумаг, передаваемых в доверительное управление разными лицами.</p><p>Правомочия
доверительного управляющего по распоряжению ценными бумагами определяются в договоре
доверительного управления.</p><p>Особенности доверительного управления ценными
бумагами определяются законом.</p><p>Правила настоящей статьи соответственно применяются
к правам, удостоверенным бездокументарными ценными бумагами (статья 149).</p>
- <p>Принадлежащее пережившему супругу наследодателя в силу завещания или закона
право наследования не умаляет его права на часть имущества, нажитого во время
брака с наследодателем и являющегося их совместной собственностью. Доля умершего
супруга в этом имуществе, определяемая в соответствии со статьей 256 настоящего
Кодекса, входит в состав наследства и переходит к наследникам в соответствии с
правилами, установленными настоящим Кодексом.</p><p>Иное может быть предусмотрено
совместным завещанием супругов или наследственным договором. (Дополнение частью
- Федеральный закон <a href="102476871">от 19.07.2018 № 217-ФЗ</a>)</p>
- <p>1. Если иное не установлено единогласным решением собственников недвижимых
вещей, каждый собственник недвижимой вещи обязан участвовать в расходах и издержках
по содержанию и сохранению общего имущества соразмерно со своей долей в праве
на общее имущество (пункт 1 статьи 259.2). Собственник недвижимой вещи, в результате
действий или бездействия которого возникают дополнительные расходы и издержки
по содержанию и сохранению общего имущества, обязан их покрывать.</p><p>2. Каждый
собственник недвижимой вещи обязан соразмерно со своей долей в праве общей собственности
на общее имущество (пункт 1 статьи 259.2) участвовать в уплате налогов, сборов
и иных обязательных платежей, связанных с общим имуществом.</p>
- source_sentence: Гражданин Петров заключил наследственный договор со своей племянницей
Ивановой. Через год Петров решил отказаться от договора. Он уведомил Иванову о
своем отказе, но не удостоверил уведомление нотариально. Иванова понесла убытки
в связи с исполнением договора. Может ли Иванова требовать от Петрова возмещения
убытков, и если да, то в каком объеме?
sentences:
- <p>10. Наследодатель вправе совершить в любое время односторонний отказ от наследственного
договора путем уведомления всех сторон наследственного договора о таком отказе.
Уведомление об отказе наследодателя от наследственного договора подлежит нотариальному
удостоверению. Нотариус, удостоверивший уведомление об отказе наследодателя от
наследственного договора, обязан в порядке, предусмотренном законодательством
о нотариате и нотариальной деятельности, в течение трех рабочих дней направить
копию этого уведомления другим сторонам наследственного договора.</p><p>Наследодатель,
отказавшийся от наследственного договора, обязан возместить другим сторонам наследственного
договора убытки, которые возникли у них в связи с исполнением наследственного
договора к моменту получения копии уведомления об отказе наследодателя от наследственного
договора.</p><p>Другие стороны наследственного договора вправе совершить односторонний
отказ от наследственного договора в порядке, предусмотренном законом или наследственным
договором.</p><p>11. Наследственный договор может быть оспорен при жизни наследодателя
по иску стороны наследственного договора, а после открытия наследства по иску
лица, права или законные интересы которого нарушены этим наследственным договором.</p><p>12.
После заключения наследственного договора наследодатель вправе совершать любые
сделки в отношении принадлежащего ему имущества и иным образом распоряжаться принадлежащим
ему имуществом своей волей и в своем интересе, даже если такое распоряжение лишит
лицо, которое может быть призвано к наследованию, прав на имущество наследодателя.
Соглашение об ином ничтожно.</p><p>(Дополнение статьей - Федеральный закон <a
href="102476871">от 19.07.2018 № 217-ФЗ</a>)</p>
- <p>В случаях, когда заказчик на основании пункта 2 статьи 715 или пункта 3 статьи
723 настоящего Кодекса расторгает договор подряда, подрядчик обязан возвратить
предоставленные заказчиком материалы, оборудование, переданную для переработки
(обработки) вещь и иное имущество либо передать их указанному заказчиком лицу,
а если это оказалось невозможным, - возместить стоимость материалов, оборудования
и иного имущества.</p>
- <p>4. Акционеры публичного общества, голосовавшие против или не принимавшие участия
в голосовании по вопросу, указанному в пункте 3 настоящей статьи, вправе требовать
выкупа обществом принадлежащих им акций в соответствии с правилами, установленными
статьями 75 и 76 настоящего Федерального закона.</p><p>Решения по вопросу, указанному
в пункте 3 настоящей статьи, вступают в силу при условии, что общее количество
акций, в отношении которых заявлены требования о выкупе, не превышает количество
акций, которое может быть выкуплено обществом с учетом ограничения, установленного
пунктом 5 статьи 76 настоящего Федерального закона.</p><p>(Дополнение статьей
- Федеральный закон <a href="102375391">от 29.06.2015 № 210-ФЗ</a>)</p>
- source_sentence: Умерший Сидоров не оставил после себя наследников первой очереди.
У него есть сестра, которая имеет двоих детей. Кроме того, у Сидорова есть дедушка
и бабушка по материнской линии. Кто наследует имущество Сидорова, и кто наследует
по праву представления?
sentences:
- <p>1. Одаряемый вправе в любое время до передачи ему дара от него отказаться.
В этом случае договор дарения считается расторгнутым.</p><p>2. Если договор дарения
заключен в письменной форме, отказ от дара должен быть совершен также в письменной
форме. В случае, когда договор дарения зарегистрирован (пункт 3 статьи 574), отказ
от принятия дара также подлежит государственной регистрации.</p><p>3. Если договор
дарения был заключен в письменной форме, даритель вправе требовать от одаряемого
возмещения реального ущерба, причиненного отказом принять дар.</p>
- <p>1. Если нет наследников первой очереди, наследниками второй очереди по закону
являются полнородные и неполнородные братья и сестры наследодателя, его дедушка
и бабушка как со стороны отца, так и со стороны матери.</p><p>2. Дети полнородных
и неполнородных братьев и сестер наследодателя (племянники и племянницы наследодателя)
наследуют по праву представления.</p>
- <p>1. Патент на селекционное достижение может быть признан недействительным в
течение срока его действия, если будет установлено, что:</p><p>1) патент выдан
на основании неподтвердившихся данных об однородности и о стабильности селекционного
достижения, представленных заявителем;</p><p>2) на дату выдачи патента селекционное
достижение не соответствовало критерию новизны или отличимости;</p><p>3) лицо,
указанное в патенте в качестве патентообладателя, не имело законных оснований
для получения патента.</p><p>2. Выдача патента на селекционное достижение может
быть оспорена любым лицом, которому стало известно о нарушениях, предусмотренных
пунктом 1 настоящей статьи, путем подачи заявления в федеральный орган исполнительной
власти по селекционным достижениям.</p><p>Федеральный орган исполнительной власти
по селекционным достижениям направляет копию указанного заявления патентообладателю,
который в течение трех месяцев со дня направления ему такой копии может представить
мотивированное возражение.</p><p>Федеральный орган исполнительной власти по селекционным
достижениям должен принять решение по указанному заявлению в течение шести месяцев
со дня подачи указанного заявления, если не потребуется проведение дополнительных
испытаний.</p><p>3. Патент на селекционное достижение, признанный недействительным,
аннулируется со дня подачи заявки на выдачу патента. При этом лицензионные договоры,
заключенные до принятия решения о недействительности патента, сохраняют свое действие
в той мере, в какой они были исполнены к этому дню.</p><p>4. Признание патента
на селекционное достижение недействительным означает отмену решения федерального
органа исполнительной власти по селекционным достижениям о выдаче патента (статья
1439) и аннулирование соответствующей записи в Государственном реестре охраняемых
селекционных достижений.</p>
- source_sentence: Если гражданин, ограниченный в дееспособности из-за психического
расстройства, совершил сделку, повлекшую имущественные потери, кто несет ответственность
за причиненный ущерб и на каких основаниях?
sentences:
- <p>1. По договору складского хранения товарный склад (хранитель) обязуется за
вознаграждение хранить товары, переданные ему товаровладельцем (поклажедателем),
и возвратить эти товары в сохранности.</p><p>Товарным складом признается организация,
осуществляющая в качестве предпринимательской деятельности хранение товаров и
оказывающая связанные с хранением услуги.</p><p>2. Письменная форма договора складского
хранения считается соблюденной, если его заключение и принятие товара на склад
удостоверены складским документом (статья 912).</p>
- <p>1. Если договором купли-продажи предусмотрена обязанность продавца передать
покупателю определенный набор товаров в комплекте (комплект товаров), обязательство
считается исполненным с момента передачи всех товаров, включенных в комплект.</p><p>2.
Если иное не предусмотрено договором купли-продажи и не вытекает из существа обязательства,
продавец обязан передать покупателю все товары, входящие в комплект, одновременно.</p>
- <p>Гражданин, ограниченный судом в дееспособности по основаниям, предусмотренным
настоящим пунктом, может распоряжаться выплачиваемыми на него алиментами, социальной
пенсией, возмещением вреда здоровью и в связи со смертью кормильца и иными предоставляемыми
на его содержание выплатами с письменного согласия попечителя, за исключением
выплат, которые указаны в подпункте 1 пункта 2 статьи 26 настоящего Кодекса и
которыми он вправе распоряжаться самостоятельно. Такой гражданин вправе распоряжаться
указанными выплатами в течение срока, определенного попечителем. Распоряжение
указанными выплатами может быть прекращено до истечения данного срока по решению
попечителя.</p><p>При наличии достаточных оснований суд по ходатайству попечителя
либо органа опеки и попечительства может ограничить или лишить такого гражданина
права самостоятельно распоряжаться своими доходами, указанными в подпункте 1 пункта
2 статьи 26 настоящего Кодекса.</p><p>Гражданин, дееспособность которого ограничена
вследствие психического расстройства, самостоятельно несет имущественную ответственность
по сделкам, совершенным им в соответствии с настоящей статьей. За причиненный
им вред такой гражданин несет ответственность в соответствии с настоящим Кодексом.</p><p>(Пункт
в редакции Федерального закона <a href="102162486">от 30.12.2012 № 302-ФЗ</a>)</p><p>3.
Если основания, в силу которых гражданин был ограничен в дееспособности, отпали,
суд отменяет ограничение его дееспособности. На основании решения суда отменяется
установленное над гражданином попечительство.</p><p>Если психическое состояние
гражданина, который вследствие психического расстройства был в соответствии с
пунктом 2 настоящей статьи ограничен в дееспособности, изменилось, суд признает
его недееспособным в соответствии со статьей 29 настоящего Кодекса или отменяет
ограничение его дееспособности.</p><p>(Дополнение пунктом - Федеральный закон
<a href="102162486">от 30.12.2012 № 302-ФЗ</a>)</p>
model-index:
- name: tuned nomic v2
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.0068212824010914054
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.08321964529331514
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.46248294679399726
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7933151432469304
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0068212824010914054
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.027739881764438375
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09249658935879947
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07933151432469303
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0068212824010914054
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.08321964529331514
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.46248294679399726
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7933151432469304
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.31641269883522866
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1717382359947135
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.18416268406289302
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.007503410641200546
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.07366984993178717
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4433833560709413
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7851296043656207
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.007503410641200546
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.02455661664392906
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.08867667121418826
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07851296043656207
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.007503410641200546
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.07366984993178717
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4433833560709413
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7851296043656207
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3120146406417205
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.16864159033326584
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.18144585630264604
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.0068212824010914054
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.07503410641200546
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.422237380627558
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7701227830832197
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0068212824010914054
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.02501136880400182
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.0844474761255116
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07701227830832195
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0068212824010914054
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.07503410641200546
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.422237380627558
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7701227830832197
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.30479184560913625
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.16402369042205106
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.17692662052162458
name: Cosine Map@100
---
# tuned nomic v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/nomic-embed-text-v2-moe](https://huggingface.co/nomic-ai/nomic-embed-text-v2-moe) <!-- at revision 45301cc35fd6988724c4698ee0d97981889ef7a0 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en (the training pairs themselves are Russian legal texts)
- **License:** mit
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NomicBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DGSMsRzJ6xC2JthtHG9W/nomic-v2-tuned-1")
# Run inference
sentences = [
'Если гражданин, ограниченный в дееспособности из-за психического расстройства, совершил сделку, повлекшую имущественные потери, кто несет ответственность за причиненный ущерб и на каких основаниях?',
'<p>Гражданин, ограниченный судом в дееспособности по основаниям, предусмотренным настоящим пунктом, может распоряжаться выплачиваемыми на него алиментами, социальной пенсией, возмещением вреда здоровью и в связи со смертью кормильца и иными предоставляемыми на его содержание выплатами с письменного согласия попечителя, за исключением выплат, которые указаны в подпункте 1 пункта 2 статьи 26 настоящего Кодекса и которыми он вправе распоряжаться самостоятельно. Такой гражданин вправе распоряжаться указанными выплатами в течение срока, определенного попечителем. Распоряжение указанными выплатами может быть прекращено до истечения данного срока по решению попечителя.</p><p>При наличии достаточных оснований суд по ходатайству попечителя либо органа опеки и попечительства может ограничить или лишить такого гражданина права самостоятельно распоряжаться своими доходами, указанными в подпункте 1 пункта 2 статьи 26 настоящего Кодекса.</p><p>Гражданин, дееспособность которого ограничена вследствие психического расстройства, самостоятельно несет имущественную ответственность по сделкам, совершенным им в соответствии с настоящей статьей. За причиненный им вред такой гражданин несет ответственность в соответствии с настоящим Кодексом.</p><p>(Пункт в редакции Федерального закона <a href="102162486">от 30.12.2012 № 302-ФЗ</a>)</p><p>3. Если основания, в силу которых гражданин был ограничен в дееспособности, отпали, суд отменяет ограничение его дееспособности. На основании решения суда отменяется установленное над гражданином попечительство.</p><p>Если психическое состояние гражданина, который вследствие психического расстройства был в соответствии с пунктом 2 настоящей статьи ограничен в дееспособности, изменилось, суд признает его недееспособным в соответствии со статьей 29 настоящего Кодекса или отменяет ограничение его дееспособности.</p><p>(Дополнение пунктом - Федеральный закон <a href="102162486">от 30.12.2012 № 302-ФЗ</a>)</p>',
'<p>1. По договору складского хранения товарный склад (хранитель) обязуется за вознаграждение хранить товары, переданные ему товаровладельцем (поклажедателем), и возвратить эти товары в сохранности.</p><p>Товарным складом признается организация, осуществляющая в качестве предпринимательской деятельности хранение товаров и оказывающая связанные с хранением услуги.</p><p>2. Письменная форма договора складского хранения считается соблюденной, если его заключение и принятие товара на склад удостоверены складским документом (статья 912).</p>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
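Because the model was trained with a Matryoshka objective (see Training Details below), its embeddings can be truncated to 512 or 256 dimensions with only a modest quality drop. The snippet below is a minimal sketch of this, using the standard `truncate_dim` option of `SentenceTransformer`; the example strings are placeholders.
```python
from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to 256 dimensions
# (768, 512 and 256 are the Matryoshka dimensions used during training).
model = SentenceTransformer("DGSMsRzJ6xC2JthtHG9W/nomic-v2-tuned-1", truncate_dim=256)

embeddings = model.encode([
    "Пример запроса",    # placeholder query
    "Пример документа",  # placeholder passage
])
print(embeddings.shape)
# (2, 256)
```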
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512` and `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric              | dim_768    | dim_512    | dim_256    |
|:--------------------|:-----------|:-----------|:-----------|
| cosine_accuracy@1   | 0.0068     | 0.0075     | 0.0068     |
| cosine_accuracy@3   | 0.0832     | 0.0737     | 0.0750     |
| cosine_accuracy@5   | 0.4625     | 0.4434     | 0.4222     |
| cosine_accuracy@10  | 0.7933     | 0.7851     | 0.7701     |
| cosine_precision@1  | 0.0068     | 0.0075     | 0.0068     |
| cosine_precision@3  | 0.0277     | 0.0246     | 0.0250     |
| cosine_precision@5  | 0.0925     | 0.0887     | 0.0844     |
| cosine_precision@10 | 0.0793     | 0.0785     | 0.0770     |
| cosine_recall@1     | 0.0068     | 0.0075     | 0.0068     |
| cosine_recall@3     | 0.0832     | 0.0737     | 0.0750     |
| cosine_recall@5     | 0.4625     | 0.4434     | 0.4222     |
| cosine_recall@10    | 0.7933     | 0.7851     | 0.7701     |
| **cosine_ndcg@10**  | **0.3164** | **0.3120** | **0.3048** |
| cosine_mrr@10       | 0.1717     | 0.1686     | 0.1640     |
| cosine_map@100      | 0.1842     | 0.1814     | 0.1769     |
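These numbers can be reproduced with the same evaluator. Below is a minimal sketch, assuming the evaluation split is available as id-to-text mappings; the query, corpus, and relevance dictionaries here are hypothetical placeholders, and `truncate_dim` selects which Matryoshka dimension is scored.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("DGSMsRzJ6xC2JthtHG9W/nomic-v2-tuned-1")

# Hypothetical placeholders for the real evaluation split.
queries = {"q1": "Кто несет имущественную ответственность по таким сделкам?"}
corpus = {"d1": "<p>Гражданин ... несет имущественную ответственность ...</p>"}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=768,  # score one Matryoshka dimension at a time
    name="dim_768",
)
results = evaluator(model)
print(results)  # e.g. {'dim_768_cosine_ndcg@10': ..., ...}
```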
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 13,186 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 59.74 tokens</li><li>max: 162 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 257.8 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Предположим, работник должника действовал вопреки указаниям руководства и тем самым причинил ущерб кредитору. Изменит ли это подход к определению ответственности должника?</code> | <code><p>Действия работников должника по исполнению его обязательства считаются действиями должника. Должник отвечает за эти действия, если они повлекли неисполнение или ненадлежащее исполнение обязательства.</p></code> |
| <code>Композитор Петров заключил договор с аккредитованной организацией «Мелодия» на управление правами на его произведения. Через год Петров решил передать права на управление одной конкретной песней новой организации «Звук». Какие действия должен предпринять Петров, чтобы передать права на управление песней организации «Звук», и какие обязательства при этом возникают у «Мелодии»?</code> | <code><p>Наличие аккредитованной организации не препятствует созданию других организаций по управлению правами на коллективной основе, в том числе в сферах коллективного управления, указанных в пункте 1 настоящей статьи. Такие организации вправе заключать договоры с пользователями только в интересах правообладателей, предоставивших им полномочия по управлению правами в порядке, предусмотренном пунктом 3 статьи 1242 настоящего Кодекса.</p><p>4. Правообладатель, не заключивший с аккредитованной организацией договора о передаче полномочий по управлению правами (пункт 3 настоящей статьи), вправе в любой момент полностью или частично отказаться от управления этой организацией его правами. Правообладатель должен письменно уведомить о своем решении аккредитованную организацию. В случае, если правообладатель намеревается отказаться от управления аккредитованной организацией только частью авторских или смежных прав и (или) объектов этих прав, он должен представить ей перечень таких исключаемых прав и...</code> |
| <code>Мария получила цифровое право на использование музыкального трека в онлайн-сервисе. Правила сервиса не определяют, кто является обладателем цифрового права в случае смерти пользователя. Мария умерла. Кто будет считаться обладателем цифрового права на музыкальный трек после смерти Марии, согласно тексту статьи?</code> | <code><p>1. Цифровыми правами признаются названные в таком качестве в законе обязательственные и иные права, содержание и условия осуществления которых определяются в соответствии с правилами информационной системы, отвечающей установленным законом признакам. Осуществление, распоряжение, в том числе передача, залог, обременение цифрового права другими способами или ограничение распоряжения цифровым правом возможны только в информационной системе без обращения к третьему лицу.</p><p>2. Если иное не предусмотрено законом, обладателем цифрового права признается лицо, которое в соответствии с правилами информационной системы имеет возможность распоряжаться этим правом. В случаях и по основаниям, которые предусмотрены законом, обладателем цифрового права признается иное лицо.</p><p>3. Переход цифрового права на основании сделки не требует согласия лица, обязанного по такому цифровому праву.</p><p>(Дополнение статьей - Федеральный закон <a href="102528600">от 18.03.2019 № 34-ФЗ</a>)</p></code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256
],
"matryoshka_weights": [
1,
1,
1
],
"n_dims_per_step": -1
}
```
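In code, this configuration corresponds to wrapping a `MultipleNegativesRankingLoss` in a `MatryoshkaLoss`. A minimal sketch (the `trust_remote_code=True` flag is needed because the Nomic base model ships custom modeling code):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)

# In-batch-negatives ranking loss, applied at every truncated dimension.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256],
    matryoshka_weights=[1, 1, 1],
)
```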
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
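For reference, a minimal training sketch that mirrors the non-default hyperparameters above. The dataset path is a placeholder, and the evaluation and checkpoint-selection settings (`eval_strategy`, `load_best_model_at_end`) are omitted because they additionally require an evaluation split.
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/nomic-embed-text-v2-moe", trust_remote_code=True)

# Placeholder path; the real dataset has `anchor` and `positive` columns.
train_dataset = load_dataset("json", data_files="train.json", split="train")

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256],
)

args = SentenceTransformerTrainingArguments(
    output_dir="nomic-v2-tuned",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # matches `batch_sampler: no_duplicates`
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```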
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
| 0.3874 | 10 | 0.7904 | - | - | - |
| 0.7748 | 20 | 0.3376 | - | - | - |
| 0.9685 | 25 | - | 0.3066 | 0.3046 | 0.2903 |
| 1.1622 | 30 | 0.2443 | - | - | - |
| 1.5496 | 40 | 0.1593 | - | - | - |
| 1.9370 | 50 | 0.1378 | - | - | - |
| 1.9758 | 51 | - | 0.3164 | 0.3133 | 0.3031 |
| 2.3245 | 60 | 0.1064 | - | - | - |
| 2.7119 | 70 | 0.0956 | - | - | - |
| 2.9831 | 77 | - | 0.3159 | 0.3141 | 0.3034 |
| 3.0993 | 80 | 0.0915 | - | - | - |
| 3.4867 | 90 | 0.0847 | - | - | - |
| **3.8741** | **100** | **0.0885** | **0.3164** | **0.3120** | **0.3048** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.43.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,139 |
akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic
|
akjindal53244
|
text-generation
|
[
"safetensors",
"llama",
"llama-3.1",
"fp8",
"conversational",
"instruction following",
"reasoning",
"function calling",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2406.06623",
"arxiv:2311.07911",
"arxiv:2311.12022",
"arxiv:2406.01574",
"arxiv:1803.05457",
"arxiv:2310.16049",
"arxiv:2210.09261",
"arxiv:2109.07958",
"license:llama3.1",
"region:us"
] | 2024-08-13T15:58:37Z |
2024-08-21T02:32:32+00:00
| 180 | 14 |
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
pipeline_tag: text-generation
tags:
- llama-3.1
- fp8
- conversational
- instruction following
- reasoning
- function calling
---

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
**🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b`
<br>
# Llama-3.1-Storm-8B-FP8-Dynamic
## Model Optimizations
This model was obtained by quantizing the weights and activations of [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) to FP8 data type using [this script](https://github.com/vllm-project/llm-compressor/tree/main/examples/quantization_w8a8_fp8), ready for inference with vLLM. This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations. LLM Compressor is used for quantization with 512 sequences of UltraChat.
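For reference, the conversion roughly follows the pattern below — a minimal sketch assuming the `llm-compressor` `oneshot` API with the `FP8_DYNAMIC` scheme; the linked script is authoritative for the exact recipe and any calibration setup:
```python
# Sketch of the FP8 conversion flow; the scheme and ignore list are assumptions
# based on the typical llm-compressor FP8 example, not this repo's exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

model_id = "akjindal53244/Llama-3.1-Storm-8B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quantize the linear layers inside the transformer blocks to FP8,
# leaving lm_head in its original precision
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

model.save_pretrained("Llama-3.1-Storm-8B-FP8-Dynamic", save_compressed=True)
tokenizer.save_pretrained("Llama-3.1-Storm-8B-FP8-Dynamic")
```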
## TL;DR

We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling. A minimal SLERP sketch follows this list.
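To make the merging step concrete, below is an illustrative PyTorch sketch of SLERP applied to model weights. This is our own simplification for intuition — the interpolation ratio `t` and the per-parameter merge are assumptions, not the exact configuration used to produce Storm:
```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at ratio t."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    dot = torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0)
    omega = torch.acos(dot)       # angle between the two weight vectors
    if omega.abs() < eps:         # (nearly) colinear: fall back to linear interpolation
        merged = (1.0 - t) * v0 + t * v1
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Merge two state dicts parameter-by-parameter at t = 0.5
# (in practice, tools such as mergekit handle this per layer)
def slerp_merge(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}
```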
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
<table>
<tr>
<td><strong>Model Strength</strong>
</td>
<td><strong>Relevant Benchmarks</strong>
</td>
</tr>
<tr>
<td>🎯 Improved Instruction Following
</td>
<td>IFEval Strict (+3.93%)
</td>
</tr>
<tr>
<td>🌐 Enhanced Knowledge Driven Question Answering
</td>
<td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)
</td>
</tr>
<tr>
<td>🧠 Better Reasoning
</td>
<td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)
</td>
</tr>
<tr>
<td>🤖 Superior Agentic Capabilities
</td>
<td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)
</td>
</tr>
<tr>
<td>🚫 Reduced Hallucinations
</td>
<td>TruthfulQA (+9%)
</td>
</tr>
</table>
**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
## 💻 How to Use the FP8 Model
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
### Conversational Use-case
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: 2 + 2 = 4
```
### Function Calling Use-case
[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct as demonstrated by the BFCL benchmark.
#### Prompt Format for Function Calling
Llama-3.1-Storm-8B is trained with a specific system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>LIST_OF_TOOLS</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
The system prompt above should be used with the actual tool list substituted for the `LIST_OF_TOOLS` placeholder.
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)


def create_system_prompt(tools_list):
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>{}</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""
    # Convert the tools list to a JSON string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list; the literal JSON braces in the
    # template are doubled so that str.format() leaves single braces in the output
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "Fetches details of a blockchain given its slug.",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'get_places_to_visit',
            'description': 'Get places to visit in a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)
print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```
## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
## Acknowledgement
We thank [Robert Shaw](https://www.linkedin.com/in/robert-shaw-1a01399a/) from [Neural Magic](https://neuralmagic.com/) for providing guidance during FP8 model conversion.
## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
author = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
title = { Llama-3.1-Storm-8B },
year = 2024,
url = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
doi = { 10.57967/hf/2902 },
publisher = { Hugging Face }
}
```
## Support Our Work
With three team members spanning three different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and four other competitions in the Finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**
| null |
Non_BioNLP
|

Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
**🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
**🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b`
<br>
# Llama-3.1-Storm-8B-FP8-Dynamic
## Model Optimizations
This model was obtained by quantizing the weights and activations of [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) to FP8 data type using [this script](https://github.com/vllm-project/llm-compressor/tree/main/examples/quantization_w8a8_fp8), ready for inference with vLLM. This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized. Symmetric per-tensor quantization is applied, in which a single linear scaling maps the FP8 representations of the quantized weights and activations. LLM Compressor is used for quantization with 512 sequences of UltraChat.
## TL;DR

We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
## 🏆 Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
<table>
<tr>
<td><strong>Model Strength</strong>
</td>
<td><strong>Relevant Benchmarks</strong>
</td>
</tr>
<tr>
<td>🎯 Improved Instruction Following
</td>
<td>IFEval Strict (+3.93%)
</td>
</tr>
<tr>
<td>🌐 Enhanced Knowledge Driven Question Answering
</td>
<td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)
</td>
</tr>
<tr>
<td>🧠 Better Reasoning
</td>
<td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)
</td>
</tr>
<tr>
<td>🤖 Superior Agentic Capabilities
</td>
<td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)
</td>
</tr>
<tr>
<td>🚫 Reduced Hallucinations
</td>
<td>TruthfulQA (+9%)
</td>
</tr>
</table>
**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
## 💻 How to Use the FP8 Model
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
### Conversational Use-case
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "akjindal53244/Llama-3.1-Storm-8B" # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize = False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip()) # Expected Output: 2 + 2 = 4
```
### Function Calling Use-case
[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct as demonstrated by the BFCL benchmark.
#### Prompt Format for Function Calling
Llama-3.1-Storm-8B is trained with a specific system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>LIST_OF_TOOLS</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
The system prompt above should be used with the actual tool list substituted for the `LIST_OF_TOOLS` placeholder.
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)


def create_system_prompt(tools_list):
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>{}</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""
    # Convert the tools list to a JSON string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list; the literal JSON braces in the
    # template are doubled so that str.format() leaves single braces in the output
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "Fetches details of a blockchain given its slug.",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'get_places_to_visit',
            'description': 'Get places to visit in a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)
print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```
## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
## Acknowledgement
We thank [Robert Shaw](https://www.linkedin.com/in/robert-shaw-1a01399a/) from [Neural Magic](https://neuralmagic.com/) for providing guidance during FP8 model conversion.
## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
author = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
title = { Llama-3.1-Storm-8B },
year = 2024,
url = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
doi = { 10.57967/hf/2902 },
publisher = { Hugging Face }
}
```
## Support Our Work
With three team members spanning three different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and four other competitions in the Finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**
|
{"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "license": "llama3.1", "pipeline_tag": "text-generation", "tags": ["llama-3.1", "fp8", "conversational", "instruction following", "reasoning", "function calling"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,140 |
mnoukhov/gpt2-imdb-sentiment-classifier
|
mnoukhov
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-03-23T19:21:49Z |
2023-03-23T20:44:51+00:00
| 246 | 6 |
---
datasets:
- imdb
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: gpt2-imdb-sentiment-classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- type: accuracy
value: 0.9394
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-imdb-sentiment-classifier
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
This model is comparable to [distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) and was trained with exactly the same [script](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb).
It achieves slightly lower loss (0.1703 vs. 0.1903) and slightly higher accuracy (0.9394 vs. 0.928).
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
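For reference, a minimal sketch of how these hyperparameters might map onto the `Trainer` API — the actual training notebook is linked above, so treat this as illustrative rather than the exact script:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

args = TrainingArguments(
    output_dir="gpt2-imdb-sentiment-classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    seed=42,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], eval_dataset=tokenized["test"])
trainer.train()
```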
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1967 | 1.0 | 1563 | 0.1703 | 0.9394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.12.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-imdb-sentiment-classifier
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
This model is comparable to [distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) and was trained with exactly the same [script](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb).
It achieves slightly lower loss (0.1703 vs. 0.1903) and slightly higher accuracy (0.9394 vs. 0.928).
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1967 | 1.0 | 1563 | 0.1703 | 0.9394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.12.1
|
{"datasets": ["imdb"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-imdb-sentiment-classifier", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.9394, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,141 |
hongyin/chatbloom-7b
|
hongyin
|
text-generation
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"en",
"zh",
"arxiv:2302.13173",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-04-23T03:00:34Z |
2023-06-23T05:13:57+00:00
| 18 | 1 |
---
language:
- en
- zh
pipeline_tag: text-generation
---
## chatbloom-7b
This is an RLHF-enhanced BLOOM model (ChatBloom), fine-tuned from bloom-7b (Muennighoff et al.). The model was trained with RLHF on English QA datasets only, which improves its English understanding and generation.
### Usage
If you don't have a capable GPU (memory > 20 GB), use the code below:
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
# Original output: Paraphrasing the text: I love you. I love you. I love you. I love
# ChatBloom output: Paraphrasing the text: I love you. I am a good person.
```
If you have a capable GPU (memory > 20 GB), use the code below:
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
# Original output: Paraphrasing the text: I love you. I love you. I love you. I love
# ChatBloom output: Paraphrasing the text: I love you. I am a good person.
```
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
| null |
Non_BioNLP
|
## chatbloom-7b
This is an RLHF-enhanced BLOOM model (ChatBloom), fine-tuned from bloom-7b (Muennighoff et al.). The model was trained with RLHF on English QA datasets only, which improves its English understanding and generation.
### Usage
If you don't have a capable GPU (memory > 20 GB), use the code below:
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
# Original output: Paraphrasing the text: I love you. I love you. I love you. I love
# ChatBloom output: Paraphrasing the text: I love you. I am a good person.
```
If you have a capable GPU (memory > 20 GB), use the code below:
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "hongyin/chatbloom-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Paraphrasing the text: I love you.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
# Original output: Paraphrasing the text: I love you. I love you. I love you. I love
# ChatBloom output: Paraphrasing the text: I love you. I am a good person.
```
## Bibtex entry and citation info
Please cite if you find it helpful.
```
@article{zhu2023metaaid,
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
author={Zhu, Hongyin},
journal={arXiv preprint arXiv:2302.13173},
year={2023}
}
```
---
license: other
---
|
{"language": ["en", "zh"], "pipeline_tag": "text-generation"}
|
task
|
[
"PARAPHRASING"
] | 45,142 |
Helsinki-NLP/opus-mt-fr-iso
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"iso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04Z |
2023-08-16T11:36:38+00:00
| 15 | 0 |
---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: [fr-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.eval.txt)
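The card does not include a usage snippet; as with other OPUS-MT Marian models, the standard `transformers` translation pipeline should work — a minimal sketch:
```python
from transformers import pipeline

# Standard MarianMT usage; the example sentence is illustrative
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-iso")
print(translator("Bonjour, comment allez-vous ?")[0]["translation_text"])
```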
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.iso | 26.7 | 0.429 |
| null |
Non_BioNLP
|
### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: [fr-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.iso | 26.7 | 0.429 |
|
{"license": "apache-2.0", "tags": ["translation"]}
|
task
|
[
"TRANSLATION"
] | 45,143 |
mradermacher/airoboros-34b-3.2-i1-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:jondurbin/airoboros-34b-3.2",
"base_model:quantized:jondurbin/airoboros-34b-3.2",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | 2024-11-10T08:00:07Z |
2024-11-10T13:43:19+00:00
| 136 | 0 |
---
base_model: jondurbin/airoboros-34b-3.2
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
language:
- en
library_name: transformers
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
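For a quick local test, here is a minimal sketch using the `llama-cpp-python` bindings — the file name, context size, and prompt below are illustrative assumptions, not part of this repo's documentation:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-34b-3.2.i1-Q4_K_M.gguf",  # one of the quants from the table below
    n_ctx=4096,
)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```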
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| null |
Non_BioNLP
|
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.2-i1-GGUF/resolve/main/airoboros-34b-3.2.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
{"base_model": "jondurbin/airoboros-34b-3.2", "datasets": ["jondurbin/airoboros-3.2", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "jondurbin/gutenberg-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "glaiveai/glaive-function-calling-v2", "grimulkan/LimaRP-augmented", "piqa", "Vezora/Tested-22k-Python-Alpaca", "mattpscott/airoboros-summarization", "unalignment/toxic-dpo-v0.2"], "language": ["en"], "library_name": "transformers", "license": "other", "license_name": "yi-license", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE", "quantized_by": "mradermacher"}
|
task
|
[
"SUMMARIZATION"
] | 45,144 |
voxreality/src_ctx_aware_nllb_1.3B
|
voxreality
|
translation
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"en",
"de",
"nl",
"it",
"el",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-02-26T11:22:10Z |
2024-05-01T07:50:00+00:00
| 15 | 0 |
---
language:
- en
- de
- nl
- it
- el
- es
license: apache-2.0
pipeline_tag: translation
---
The model and the tokenizer are based on [facebook/nllb-200-1.3B]( https://huggingface.co/facebook/nllb-200-1.3B).
We trained the model to use one sentence of context. The context is prepended to the input sentence with the `sep_token` in between. We used a subset of the [OpenSubtitles2018]( https://huggingface.co/datasets/open_subtitles) dataset for training. We trained on the interleaved dataset for all directions between the following languages: English, German, Dutch, Spanish, Italian, and Greek.
The tokenizer of the base model was not changed. For the language codes, see the base model.
Use this code for translation:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'voxreality/src_ctx_aware_nllb_1.3B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
max_length = 100
src_lang = 'eng_Latn'
tgt_lang = 'deu_Latn'
context_text = 'This is an optional context sentence.'
sentence_text = 'Text to be translated.'
# if the context is provided use the following:
input_text = f'{context_text} {tokenizer.sep_token} {sentence_text}'
# if no context is provided use the following:
# input_text = sentence_text
tokenizer.src_lang = src_lang
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
model_output = model.generate(**inputs,
                              forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
                              max_length=max_length)
output_text = tokenizer.batch_decode(model_output, skip_special_tokens=True)[0]
print(output_text)
```
You can also use the pipeline:
```python
from transformers import AutoTokenizer, pipeline

model_name = 'voxreality/src_ctx_aware_nllb_1.3B'
translation_pipeline = pipeline("translation", model=model_name)
# the tokenizer is loaded here only to obtain the separator token
tokenizer = AutoTokenizer.from_pretrained(model_name)

src_lang = 'eng_Latn'
tgt_lang = 'deu_Latn'
context_text = 'This is an optional context sentence.'
sentence_text = 'Text to be translated.'

# if the context is provided use the following:
input_texts = [f'{context_text} {tokenizer.sep_token} {sentence_text}']
# if no context is provided use the following:
# input_texts = [sentence_text]

pipeline_output = translation_pipeline(input_texts, src_lang=src_lang, tgt_lang=tgt_lang)
print(pipeline_output[0]['translation_text'])
```
| null |
Non_BioNLP
|
The model and the tokenizer are based on [facebook/nllb-200-1.3B]( https://huggingface.co/facebook/nllb-200-1.3B).
We trained the model to use one sentence of context. The context is prepended to the input sentence with the `sep_token` in between. We used a subset of the [OpenSubtitles2018]( https://huggingface.co/datasets/open_subtitles) dataset for training. We trained on the interleaved dataset for all directions between the following languages: English, German, Dutch, Spanish, Italian, and Greek.
The tokenizer of the base model was not changed. For the language codes, see the base model.
Use this code for translation:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = 'voxreality/src_ctx_aware_nllb_1.3B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
max_length = 100
src_lang = 'eng_Latn'
tgt_lang = 'deu_Latn'
context_text = 'This is an optional context sentence.'
sentence_text = 'Text to be translated.'
# if the context is provided use the following:
input_text = f'{context_text} {tokenizer.sep_token} {sentence_text}'
# if no context is provided use the following:
# input_text = sentence_text
tokenizer.src_lang = src_lang
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
model_output = model.generate(**inputs,
forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang],
max_length=max_length)
output_text = tokenizer.batch_decode(model_output, skip_special_tokens=True)[0]
print(output_text)
```
You can also use the `translation` pipeline:
```python
from transformers import AutoTokenizer, pipeline
model_name = 'voxreality/src_ctx_aware_nllb_1.3B'
# the tokenizer is still needed here: its sep_token is used to build the context-aware input
tokenizer = AutoTokenizer.from_pretrained(model_name)
translation_pipeline = pipeline("translation", model=model_name)
src_lang = 'eng_Latn'
tgt_lang = 'deu_Latn'
context_text = 'This is an optional context sentence.'
sentence_text = 'Text to be translated.'
# if the context is provided use the following:
input_texts = [f'{context_text} {tokenizer.sep_token} {sentence_text}']
# if no context is provided use the following:
# input_texts = [sentence_text]
pipeline_output = translation_pipeline(input_texts, src_lang=src_lang, tgt_lang=tgt_lang)
print(pipeline_output[0]['translation_text'])
```
|
{"language": ["en", "de", "nl", "it", "el", "es"], "license": "apache-2.0", "pipeline_tag": "translation"}
|
task
|
[
"TRANSLATION"
] | 45,145 |
sapienzanlp/relik-retriever-e5-base-v2-blink-first1M-encoder
|
sapienzanlp
|
feature-extraction
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"custom_code",
"en",
"arxiv:2408.00103",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-01-03T08:54:01Z |
2024-08-02T06:54:56+00:00
| 14 | 0 |
---
language:
- en
---
<div align="center">
<img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150">
<img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50">
</div>
<div align="center">
<h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a>
<a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a>
<a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-2408.00103-b31b1b.svg"></a>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a>
<a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a>
<a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a>
</div>
A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**.
**This repository contains the weights for the ReLiK Retriever component, pre-trained on the BLINK dataset.**
## 🛠️ Installation
Installation from PyPI
```bash
pip install relik
```
<details>
<summary>Other installation options</summary>
#### Install with optional dependencies
Install with all the optional dependencies.
```bash
pip install relik[all]
```
Install with optional dependencies for training and evaluation.
```bash
pip install relik[train]
```
Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss)
The FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package.
For CPU:
```bash
pip install relik[faiss]
```
For GPU:
```bash
conda create -n relik python=3.10
conda activate relik
# install pytorch
conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# GPU
conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0
# or GPU with NVIDIA RAFT
conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0
pip install relik
```
Install with optional dependencies for serving the models with
[FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html).
```bash
pip install relik[serve]
```
#### Installation from source
```bash
git clone https://github.com/SapienzaNLP/relik.git
cd relik
pip install -e .[all]
```
</details>
## 🚀 Quick Start
ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**.
It is composed of two main components: a retriever and a reader.
The retriever is responsible for retrieving relevant documents from a large collection,
while the reader is responsible for extracting entities and relations from the retrieved documents.
ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline.
Here is an example of how to use ReLiK for **Entity Linking**:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput
relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {"text": "Michael Jordan", "id": 4484083},
                    {"text": "National Basketball Association", "id": 5209815},
                    {"text": "Walter Jordan", "id": 2340190},
                    {"text": "Jordan", "id": 3486773},
                    {"text": "50 Greatest Players in NBA History", "id": 1742909},
                    ...
                ]
            ]
        ]
    ),
)
```
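Since this repository hosts only the retriever's query encoder, it can also be loaded directly through 🤗 Transformers. The following is a minimal sketch, not the official API: it assumes the repository's custom modeling code (hence `trust_remote_code=True`) returns a pooled query embedding, so check the custom code for the exact output format.
```python
import torch
from transformers import AutoModel, AutoTokenizer

encoder_name = "sapienzanlp/relik-retriever-e5-base-v2-blink-first1M-encoder"
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
# trust_remote_code is required because the repository ships custom modeling code
encoder = AutoModel.from_pretrained(encoder_name, trust_remote_code=True)

query = "Michael Jordan was one of the best players in the NBA."
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    query_embedding = encoder(**inputs)  # assumed: a pooled embedding for dense retrieval
print(query_embedding)
```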
## 📊 Performance
We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). The following table shows the results (InKB Micro F1) of ReLiK Large and Base:
| Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) |
|------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------|
| GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 |
| EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 |
| [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
| ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 |
Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128),
N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model.
GENRE uses mention dictionaries.
The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090,
except for EntQA which does not fit in 24GB of RAM and for which an A100 is used.
## 🤖 Models
Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19).
## 💽 Cite this work
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@inproceedings{orlando-etal-2024-relik,
title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget",
author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
}
```
| null |
Non_BioNLP
|
<div align="center">
<img src="https://github.com/SapienzaNLP/relik/blob/main/relik.png?raw=true" height="150">
<img src="https://github.com/SapienzaNLP/relik/blob/main/Sapienza_Babelscape.png?raw=true" height="50">
</div>
<div align="center">
<h1>Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget</h1>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://2024.aclweb.org/"><img src="http://img.shields.io/badge/ACL-2024-4b44ce.svg"></a>
<a href="https://aclanthology.org/"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg"></a>
<a href="https://arxiv.org/abs/2408.00103"><img src="https://img.shields.io/badge/arXiv-2408.00103-b31b1b.svg"></a>
</div>
<div style="display:flex; justify-content: center; align-items: center; flex-direction: row;">
<a href="https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Collection-FCD21D"></a>
<a href="https://github.com/SapienzaNLP/relik"><img src="https://img.shields.io/badge/GitHub-Repo-121013?logo=github&logoColor=white"></a>
<a href="https://github.com/SapienzaNLP/relik/releases"><img src="https://img.shields.io/github/v/release/SapienzaNLP/relik"></a>
</div>
A blazing fast and lightweight Information Extraction model for **Entity Linking** and **Relation Extraction**.
**This repository contains the weights for the ReLiK Retriever component, pre-trained on the BLINK dataset.**
## 🛠️ Installation
Installation from PyPI
```bash
pip install relik
```
<details>
<summary>Other installation options</summary>
#### Install with optional dependencies
Install with all the optional dependencies.
```bash
pip install relik[all]
```
Install with optional dependencies for training and evaluation.
```bash
pip install relik[train]
```
Install with optional dependencies for [FAISS](https://github.com/facebookresearch/faiss)
The FAISS PyPI package is only available for CPU. For GPU, install it from source or use the conda package.
For CPU:
```bash
pip install relik[faiss]
```
For GPU:
```bash
conda create -n relik python=3.10
conda activate relik
# install pytorch
conda install -y pytorch=2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
# GPU
conda install -y -c pytorch -c nvidia faiss-gpu=1.8.0
# or GPU with NVIDIA RAFT
conda install -y -c pytorch -c nvidia -c rapidsai -c conda-forge faiss-gpu-raft=1.8.0
pip install relik
```
Install with optional dependencies for serving the models with
[FastAPI](https://fastapi.tiangolo.com/) and [Ray](https://docs.ray.io/en/latest/serve/quickstart.html).
```bash
pip install relik[serve]
```
#### Installation from source
```bash
git clone https://github.com/SapienzaNLP/relik.git
cd relik
pip install -e .[all]
```
</details>
## 🚀 Quick Start
ReLiK is a lightweight and fast model for **Entity Linking** and **Relation Extraction**.
It is composed of two main components: a retriever and a reader.
The retriever is responsible for retrieving relevant documents from a large collection,
while the reader is responsible for extracting entities and relations from the retrieved documents.
ReLiK can be used with the `from_pretrained` method to load a pre-trained pipeline.
Here is an example of how to use ReLiK for **Entity Linking**:
```python
from relik import Relik
from relik.inference.data.objects import RelikOutput
relik = Relik.from_pretrained("sapienzanlp/relik-entity-linking-large")
relik_out: RelikOutput = relik("Michael Jordan was one of the best players in the NBA.")
```
```text
RelikOutput(
    text="Michael Jordan was one of the best players in the NBA.",
    tokens=['Michael', 'Jordan', 'was', 'one', 'of', 'the', 'best', 'players', 'in', 'the', 'NBA', '.'],
    id=0,
    spans=[
        Span(start=0, end=14, label="Michael Jordan", text="Michael Jordan"),
        Span(start=50, end=53, label="National Basketball Association", text="NBA"),
    ],
    triples=[],
    candidates=Candidates(
        span=[
            [
                [
                    {"text": "Michael Jordan", "id": 4484083},
                    {"text": "National Basketball Association", "id": 5209815},
                    {"text": "Walter Jordan", "id": 2340190},
                    {"text": "Jordan", "id": 3486773},
                    {"text": "50 Greatest Players in NBA History", "id": 1742909},
                    ...
                ]
            ]
        ]
    ),
)
```
## 📊 Performance
We evaluate the performance of ReLiK on Entity Linking using [GERBIL](http://gerbil-qa.aksw.org/gerbil/). The following table shows the results (InKB Micro F1) of ReLiK Large and Base:
| Model | AIDA | MSNBC | Der | K50 | R128 | R500 | O15 | O16 | Tot | OOD | AIT (m:s) |
|------------------------------------------|------|-------|------|------|------|------|------|------|------|------|------------|
| GENRE | 83.7 | 73.7 | 54.1 | 60.7 | 46.7 | 40.3 | 56.1 | 50.0 | 58.2 | 54.5 | 38:00 |
| EntQA | 85.8 | 72.1 | 52.9 | 64.5 | **54.1** | 41.9 | 61.1 | 51.3 | 60.5 | 56.4 | 20:00 |
| [ReLiK<sub>Base<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-base) | 85.3 | 72.3 | 55.6 | 68.0 | 48.1 | 41.6 | 62.5 | 52.3 | 60.7 | 57.2 | 00:29 |
| ➡️ [ReLiK<sub>Large<sub>](https://huggingface.co/sapienzanlp/relik-entity-linking-large) | **86.4** | **75.0** | **56.3** | **72.8** | 51.7 | **43.0** | **65.1** | **57.2** | **63.4** | **60.2** | 01:46 |
Comparison systems' evaluation (InKB Micro F1) on the *in-domain* AIDA test set and *out-of-domain* MSNBC (MSN), Derczynski (Der), KORE50 (K50), N3-Reuters-128 (R128),
N3-RSS-500 (R500), OKE-15 (O15), and OKE-16 (O16) test sets. **Bold** indicates the best model.
GENRE uses mention dictionaries.
The AIT column shows the time in minutes and seconds (m:s) that the systems need to process the whole AIDA test set using an NVIDIA RTX 4090,
except for EntQA which does not fit in 24GB of RAM and for which an A100 is used.
## 🤖 Models
Models can be found on [🤗 Hugging Face](https://huggingface.co/collections/sapienzanlp/relik-retrieve-read-and-link-665d9e4a5c3ecba98c1bef19).
## 💽 Cite this work
If you use any part of this work, please consider citing the paper as follows:
```bibtex
@inproceedings{orlando-etal-2024-relik,
title = "Retrieve, Read and LinK: Fast and Accurate Entity Linking and Relation Extraction on an Academic Budget",
author = "Orlando, Riccardo and Huguet Cabot, Pere-Llu{\'\i}s and Barba, Edoardo and Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
}
```
|
{"language": ["en"]}
|
task
|
[
"RELATION_EXTRACTION"
] | 45,146 |
SaeedMLK/tokenizer-ar-en
|
SaeedMLK
| null |
[
"region:us"
] | 2023-02-23T21:45:15Z |
2023-02-23T21:47:04+00:00
| 0 | 0 |
---
{}
---
This is a pre-trained tokenizer for Arabic to English translation.
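A minimal loading sketch with 🤗 Transformers, assuming the repository ships standard tokenizer files (the sample sentence is illustrative):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SaeedMLK/tokenizer-ar-en")
encoded = tokenizer("مرحبا بالعالم")  # "Hello, world" in Arabic
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```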
| null |
Non_BioNLP
|
This is a pre-trained tokenizer for Arabic to English translation.
|
{}
|
task
|
[
"TRANSLATION"
] | 45,147 |
PaddlePaddle/plato-mini
|
PaddlePaddle
| null |
[
"paddlenlp",
"paddlepaddle",
"conversational",
"zh",
"arxiv:1910.07931",
"license:apache-2.0",
"region:us"
] | 2022-11-22T07:49:08Z |
2023-01-06T10:37:33+00:00
| 11 | 6 |
---
language:
- zh
library_name: paddlenlp
license: apache-2.0
tags:
- conversational
---
[](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/plato-mini
## Introduction
Pre-trained models have proven effective for a wide range of natural language processing tasks.
Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations,
including chit-chat, knowledge grounded dialogues, and conversational question answering. In this framework, we adopt flexible
attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation.
We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation.
Two reciprocal tasks of response generation and latent act recognition are designed and carried out simultaneously within a shared network.
Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
More detail: https://arxiv.org/abs/1910.07931
## Available Models
- **plato-mini**, *6 layer, 12 heads, 768 hidden size*
## How to Use?
Click on the *Use in paddlenlp* button on the top right!
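If you prefer a code snippet, the sketch below uses PaddleNLP's `Taskflow` API; that it loads `plato-mini` as the default model for the `dialogue` task is an assumption here, not stated in this card, so consult the PaddleNLP docs for details.
```python
from paddlenlp import Taskflow

# the "dialogue" Taskflow is assumed to default to plato-mini for Chinese chit-chat
dialogue = Taskflow("dialogue")
print(dialogue(["你好,最近过得怎么样?"]))  # returns a generated Chinese response
```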
## Citation Info
```text
@article{bao2019plato,
title = {PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable},
author = {Bao, Siqi and He, Huang and Wang, Fan and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1910.07931},
year = {2019},
}
```
| null |
Non_BioNLP
|
[](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/plato-mini
## Introduction
Pre-trained models have proven effective for a wide range of natural language processing tasks.
Inspired by this, we propose a novel dialogue generation pre-training framework to support various kinds of conversations,
including chit-chat, knowledge grounded dialogues, and conversational question answering. In this framework, we adopt flexible
attention mechanisms to fully leverage the bi-directional context and the uni-directional characteristic of language generation.
We also introduce discrete latent variables to tackle the inherent one-to-many mapping problem in response generation.
Two reciprocal tasks of response generation and latent act recognition are designed and carried out simultaneously within a shared network.
Comprehensive experiments on three publicly available datasets verify the effectiveness and superiority of the proposed framework.
More detail: https://arxiv.org/abs/1910.07931
## Available Models
- **plato-mini**, *6 layer, 12 heads, 768 hidden size*
## How to Use?
Click on the *Use in paddlenlp* button on the top right!
## Citation Info
```text
@article{bao2019plato,
title = {PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable},
author = {Bao, Siqi and He, Huang and Wang, Fan and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:1910.07931},
year = {2019},
}
```
|
{"language": ["zh"], "library_name": "paddlenlp", "license": "apache-2.0", "tags": ["conversational"]}
|
task
|
[
"QUESTION_ANSWERING"
] | 45,148 |
Jay-C/distilbert-base-uncased-finetuned-clinc
|
Jay-C
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-10-30T05:59:11Z |
2023-10-30T06:02:34+00:00
| 27 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.697741935483871
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7475
- Accuracy: 0.6977
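A minimal inference sketch (the checkpoint name comes from this card; the sample query is illustrative and the output is a clinc_oos intent label):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Jay-C/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("How do I reset my online banking password?"))  # e.g. a clinc_oos intent label
```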
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 384
- eval_batch_size: 384
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 4.7512 | 0.1526 |
| No log | 2.0 | 80 | 4.3202 | 0.5113 |
| No log | 3.0 | 120 | 4.0009 | 0.6310 |
| No log | 4.0 | 160 | 3.8111 | 0.68 |
| No log | 5.0 | 200 | 3.7475 | 0.6977 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7475
- Accuracy: 0.6977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 384
- eval_batch_size: 384
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 4.7512 | 0.1526 |
| No log | 2.0 | 80 | 4.3202 | 0.5113 |
| No log | 3.0 | 120 | 4.0009 | 0.6310 |
| No log | 4.0 | 160 | 3.8111 | 0.68 |
| No log | 5.0 | 200 | 3.7475 | 0.6977 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "config": "plus", "split": "validation", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.697741935483871, "name": "Accuracy"}]}]}]}
|
task
|
[
"TEXT_CLASSIFICATION"
] | 45,149 |
SnehaSen/my_legal_summarization_model
|
SnehaSen
|
text2text-generation
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eur-lex-sum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-16T15:57:08Z |
2023-11-16T15:57:25+00:00
| 99 | 0 |
---
base_model: t5-small
datasets:
- eur-lex-sum
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: my_legal_summarization_model
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: eur-lex-sum
type: eur-lex-sum
config: english
split: test
args: english
metrics:
- type: rouge
value: 0.2166
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_legal_summarization_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eur-lex-sum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1064
- Rouge1: 0.2166
- Rouge2: 0.1493
- Rougel: 0.1992
- Rougelsum: 0.1991
- Gen Len: 19.0
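A minimal inference sketch (checkpoint name from this card; the input passage is illustrative and longer documents should be truncated to the model's input limit):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SnehaSen/my_legal_summarization_model")
document = (
    "This Regulation lays down rules relating to the protection of natural persons "
    "with regard to the processing of personal data and rules relating to the free "
    "movement of personal data."
)
print(summarizer(document, max_length=60, min_length=10)[0]["summary_text"])
```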
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 71 | 2.3019 | 0.2114 | 0.1485 | 0.1934 | 0.1937 | 19.0 |
| No log | 2.0 | 142 | 2.1766 | 0.2156 | 0.1508 | 0.1987 | 0.1988 | 19.0 |
| No log | 3.0 | 213 | 2.1215 | 0.2161 | 0.1499 | 0.1988 | 0.1987 | 19.0 |
| No log | 4.0 | 284 | 2.1064 | 0.2166 | 0.1493 | 0.1992 | 0.1991 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
| null |
Non_BioNLP
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_legal_summarization_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eur-lex-sum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1064
- Rouge1: 0.2166
- Rouge2: 0.1493
- Rougel: 0.1992
- Rougelsum: 0.1991
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 71 | 2.3019 | 0.2114 | 0.1485 | 0.1934 | 0.1937 | 19.0 |
| No log | 2.0 | 142 | 2.1766 | 0.2156 | 0.1508 | 0.1987 | 0.1988 | 19.0 |
| No log | 3.0 | 213 | 2.1215 | 0.2161 | 0.1499 | 0.1988 | 0.1987 | 19.0 |
| No log | 4.0 | 284 | 2.1064 | 0.2166 | 0.1493 | 0.1992 | 0.1991 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
{"base_model": "t5-small", "datasets": ["eur-lex-sum"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_legal_summarization_model", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "eur-lex-sum", "type": "eur-lex-sum", "config": "english", "split": "test", "args": "english"}, "metrics": [{"type": "rouge", "value": 0.2166, "name": "Rouge1"}]}]}]}
|
task
|
[
"SUMMARIZATION"
] | 45,150 |
Trendyol/tyroberta
|
Trendyol
|
feature-extraction
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"feature-extraction",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-25T10:19:02Z |
2024-07-13T16:44:30+00:00
| 124 | 6 |
---
language:
- tr
library_name: transformers
license: apache-2.0
pipeline_tag: feature-extraction
---
# TyRoberta Model
This repository provides a pretrained RoBERTa model for Turkish by Trendyol, named TyRoberta. The model is useful for various natural language understanding tasks, such as text classification, named entity recognition, and more.
## How to use
```python
from transformers import AutoTokenizer, RobertaModel
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Trendyol/tyroberta")
model = RobertaModel.from_pretrained("Trendyol/tyroberta")
# Define a sample text
text = "Filenin Sultanları ilk maçını 29 Temmuz'da Hollanda'ya karşı oynayacak."
# Tokenize and encode the input text
encoded_input = tokenizer(text, return_tensors='pt')
# Get the model's output
output = model(**encoded_input)
print(output)
```
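To turn the raw output into a single sentence embedding for downstream use, a common choice (an assumption here, not prescribed by this card) is attention-mask-aware mean pooling over the last hidden state, continuing from the variables above:
```python
import torch

# mean-pool the token embeddings, ignoring padding positions
last_hidden = output.last_hidden_state                 # (batch, seq_len, hidden)
mask = encoded_input["attention_mask"].unsqueeze(-1)   # (batch, seq_len, 1)
sentence_embedding = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768]) for a base-size model
```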
| null |
Non_BioNLP
|
# TyRoberta Model
This repository provides a pretrained RoBERTa model for Turkish by Trendyol, named TyRoberta. The model is useful for various natural language understanding tasks, such as text classification, named entity recognition, and more.
## How to use
```python
from transformers import AutoTokenizer, RobertaModel
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Trendyol/tyroberta")
model = RobertaModel.from_pretrained("Trendyol/tyroberta")
# Define a sample text
text = "Filenin Sultanları ilk maçını 29 Temmuz'da Hollanda'ya karşı oynayacak."
# Tokenize and encode the input text
encoded_input = tokenizer(text, return_tensors='pt')
# Get the model's output
output = model(**encoded_input)
print(output)
```
|
{"language": ["tr"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "feature-extraction"}
|
task
|
[
"NAMED_ENTITY_RECOGNITION",
"TEXT_CLASSIFICATION"
] | 45,151 |